Evaluation in Wired TSN with Sporadic Release Times

This document describes how to reproduce the results of Section VI.B, Evaluation in Wired TSN with Sporadic Release Times, of the paper An (m, k)-firm Elevation Policy for Weakly Hard Real-Time in Converged 5G-TSN Networks. For more context, please refer to this document: https://doi.org/10.5281/zenodo.19224732

We provide the results of all intermediate steps so that each step can be reproduced individually.

Code: The scripts used for the schedulability analysis are provided as part of this repository in the emergency_eval folder.

Datasets: The respective dataset of the (intermediate) results can be found here.

General Prerequisites

The working directory should always be the root folder of this repository. Depending on your environment you may need to adjust the PYTHONPATH using the following command:

export PYTHONPATH="${PYTHONPATH}:emergency_eval:."

Python Packages

We recommend creating a conda environment with the provided environment.yml file:

conda env create -f environment.yml

Otherwise, Python 3.12 with the following dependencies is required:

  • networkx=3.1
  • pygraphviz=1.9 (conda-forge)
  • lxml
  • numpy>=1.26,<2.0
  • matplotlib>=3.8,<4.0

Additional requirements for running the schedulers:

  • docplex (ibmdecisionoptimization)

Additional requirements for running the worst case analysis:

  • graph-tool (conda-forge)
  • natsort
  • pandas

1. Schedulability analysis

This section describes how to reproduce the schedulability analysis of the paper.

Datasets: All (intermediate) results of this section are contained in the dsn26_schedulability.zip file.

After following the Prerequisites, you can jump right into any of the following steps and start from there, using the provided intermediate results from the datasets of the previous sections.

Prerequisites

Before executing any code, please adjust the settings in the emergency_eval/settings.py file. For the scheduling phase, further repositories and programs are required; please refer to the Scheduling Prerequisites section.

Topology Generation

The topology is generated by executing the gen_top.py script.

python3 emergency_eval/gen_top.py

The generated topology file topology.json used in our evaluation is provided in the t_3x4 folder of the dataset.
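As a quick sanity check after generation, a small sketch like the following can load the topology file and report basic counts. Note that the "nodes" and "links" key names here are assumptions for illustration, not the actual schema of topology.json; inspect your generated file for the real structure.

```python
import json

def summarize_topology(path: str) -> dict:
    """Load a topology JSON file and report basic counts.

    NOTE: the "nodes" and "links" keys are hypothetical placeholders;
    check the generated topology.json for the actual key names.
    """
    with open(path) as f:
        topo = json.load(f)
    return {
        "num_nodes": len(topo.get("nodes", [])),
        "num_links": len(topo.get("links", [])),
    }
```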

Streamset generation

The streamsets are generated by executing the gen_streams_schedulabilitytest.py script.

python3 emergency_eval/gen_streams_schedulabilitytest.py

The resulting streamsets used in our paper are provided in the p_24 folder (inside of t_3x4) of the dataset with the following structure:

  • Each r_x folder contains one isochronous streamset streams.json and multiple et_y subfolders.
  • Each et_y subfolder contains one sporadic streamset streams_et.json, where y corresponds to the number of sporadic streams.
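For scripting over the dataset, the folder layout above can be expressed as plain path construction. The index ranges below are placeholders; use the r_x and et_y values actually present in your copy of the dataset.

```python
from pathlib import Path

def streamset_paths(base: str, r_indices, et_counts):
    """Yield (isochronous, sporadic) streamset file pairs following the
    layout described above: r_x/streams.json and r_x/et_y/streams_et.json.

    r_indices and et_counts are placeholders for the folder indices
    actually present in the dataset.
    """
    root = Path(base)
    for x in r_indices:
        iso = root / f"r_{x}" / "streams.json"
        for y in et_counts:
            spor = root / f"r_{x}" / f"et_{y}" / "streams_et.json"
            yield iso, spor
```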

Scheduling

Scheduling Prerequisites

For the schedule calculation the following projects are required:

  1. Our approach:
    1. The CP-based scheduler for the primary schedule for isochronous streams: https://github.com/ustutt-ipvs-vs/primary_cp_schedule_MKFirm
    2. Our augmentation approach: https://github.com/ustutt-ipvs-vs/schedule_augmentation_MKFirm (needs to be built locally, see README.md)
  2. Our implementation of E-TSN: https://github.com/ustutt-ipvs-vs/etsn_MKFirm

You can either clone the repositories manually or execute the following commands to clone all required repositories into the parent directory of this repo, so that they sit next to it. After this, your filesystem tree should look like this:

workspace/
├─ eval_wired_sporadic_MKFirm (this repo)
│  ├─ dsn26_schedulability (generated in the previous steps or downloaded from the datasets)
│  │  ├─ ...
├─ etsn_MKFirm
├─ primary_cp_schedule_MKFirm
├─ schedule_augmentation_MKFirm

Clone repositories

git clone https://github.com/ustutt-ipvs-vs/primary_cp_schedule_MKFirm ../primary_cp_schedule_MKFirm
git clone https://github.com/ustutt-ipvs-vs/schedule_augmentation_MKFirm ../schedule_augmentation_MKFirm
git clone https://github.com/ustutt-ipvs-vs/etsn_MKFirm ../etsn_MKFirm

Then build the schedule_augmentation_MKFirm project. For more information, please refer to its README.md file.

cd ../schedule_augmentation_MKFirm
mkdir release
cd release
cmake ..
cmake --build .
cd ../../eval_wired_sporadic_MKFirm

Furthermore, an installation of the CPLEX CP Optimizer needs to be present. We cannot ship CPLEX as part of these artifacts, as a license is required. A free academic license including the installer can be found here. After installation, please adjust cplex_path in the settings.py file to the location of the cpoptimizer binary on your system.

Scheduling Execution

To execute the scheduling phase, you can run the emergency_eval/run_scheduler_schedulabilitytest.py script. Please note that this requires significant system resources and time.

python3 emergency_eval/run_scheduler_schedulabilitytest.py

In order to run the scheduler for only one streamset, you can use the -f argument, for example:

python3 emergency_eval/run_scheduler_schedulabilitytest.py -f dsn26_schedulability/t_3x4/p_24/r_0/et_1

Scheduling results

The results of the schedulers can be found in the respective folders, in files with the following prefixes:

  1. Our approach:
    1. CP-based primary schedule: cp_out
    2. Augmented schedule: libtsndgm_out
  2. E-TSN: etsn_out

The following files are provided:

  • [prefix].json: The schedule (if this file is missing, the scheduler did not complete successfully).
  • [prefix].log: The output log of the scheduler.
  • [prefix]_meta.json: Further metadata (such as the runtime and exit code).
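Based purely on the file convention above, a small helper along these lines can report, per result folder, which schedulers completed. The prefixes are taken from the list; the helper itself is a hypothetical sketch, not part of the repository.

```python
from pathlib import Path

# Prefixes from the result-file convention described above.
PREFIXES = ("cp_out", "libtsndgm_out", "etsn_out")

def scheduler_status(folder: str) -> dict:
    """Map each scheduler prefix to True if its schedule file exists.

    Per the convention above, a missing [prefix].json means that
    scheduler did not complete successfully.
    """
    root = Path(folder)
    return {p: (root / f"{p}.json").exists() for p in PREFIXES}
```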

The results of all schedulers on our system are available in the dataset.

Results

Using the eval_schedulability.py script, the results used in the paper are calculated. It yields two matplotlib figures (which are merged into Fig. 8a in the paper) and generates a results_24.json file with the raw numbers shown in the figure.

python3 emergency_eval/eval_schedulability.py

You can use this script after running the schedulers for all streamsets on your system. Alternatively, when the scheduling results from our system are available in the dsn26_schedulability folder, the script will yield exactly the results shown in the paper.

2. Worst-Case Analysis (Simulation)

In this section, we describe how the worst-case analysis can be reproduced.

The streamset with 24 isochronous and 24 sporadic streams used for the simulation-based worst-case analysis is provided in folder p_24/r_9/et_24 of the schedulability analysis dataset.

Datasets: The (intermediate) results of this section are contained in the dsn26_worstcase.zip file.

The worst case analysis is performed using OMNeT++ simulations. Due to the number of runs performed, reproducing the exact results of the paper requires significant system resources and time. Thus, we provide the following options:

  1. A Dockerfile with a single run command (see the Docker section) that:
    • Contains all required prerequisites
    • Performs a few short runs of the simulation
    • Evaluates the results of these few runs
  2. A detailed description listing all requirements and steps to perform the full simulation and evaluation (see the Manual Execution section).

Note: In any case, the scheduling results of the previous section for dsn26_schedulability/t_3x4/p_24/r_9/et_24 need to be available (either by executing the previous steps yourself or by downloading them from the provided datasets)!

Docker

By default, the docker environment is set up to only perform 2 simulation runs of 1s each (instead of 100 runs with 10s each as used in the paper). These settings can also be adjusted using the emergency_eval/settings_docker.py file.

To build and run the docker container and print the result, please execute the following two commands:

docker build --progress=plain -t eval_wired_sporadic_mkfirm .
docker run --rm -v $(pwd)/dsn26_schedulability:/usr/src/workspace/eval_wired_sporadic_MKFirm/dsn26_schedulability -v $(pwd)/dsn26_worstcase_docker:/usr/src/workspace/eval_wired_sporadic_MKFirm/dsn26_worstcase eval_wired_sporadic_mkfirm

Note: Building the docker container still requires some time, as the whole OMNeT++ and INET projects are built from source.

After execution, the results are printed to the console. For an interpretation of these results, please refer to the Evaluation section and the paper.

Manual Execution

This section describes how to reproduce the exact results as presented in our paper. This requires significant time and system resources!

In case you've downloaded the simulation results (dsn26_worstcase.zip from the dataset), you can jump directly to the Evaluation section and run the evaluation script on the provided results.pkl file.

Prerequisites

In order to execute the simulations of this section, a working installation of OMNeT++ and INET is required. The versions used for the simulation in this paper are:

  • OMNeT++: v6.1.0
  • INET: v4.5.4

Furthermore, the scheduling results of the previous section for t_3x4/p_24/r_9/et_24 need to be available.

Please make sure the opp_run executable of OMNeT++ is available in the PATH environment variable.

Please also make sure to adjust the other parameters in the settings.py file.

Simulation Generation and Execution

Based on the provided topology and streamset file of the above scenario, an OMNeT++ simulation is generated and then executed using the simulate_single_scenario_long.py file.

python3 emergency_eval/simulate_single_scenario_long.py

The generated simulation files (omnetpp.ini and Scenario.ned) are contained in the libtsndgm folder for our approach and the etsn folder for E-TSN. Furthermore, when generating the simulation scenario, an additional streams_meta.json file is created, which is used in the final evaluation step.

For the paper, we evaluated 100 simulation runs, each with a duration of 10 seconds of simulation time (on our system, 1 simulated second took approximately 100 real seconds). Thus, the simulation can take a significant amount of time and system resources to execute. The number of runs and the duration of each run can be adjusted in the settings.py file.
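To get a feeling for the total effort, a back-of-the-envelope estimate based on the numbers above (100 runs, 10 simulated seconds each, roughly 100 real seconds per simulated second on the authors' machine) looks like this:

```python
def estimated_wall_clock_hours(runs: int, sim_seconds: float,
                               real_per_sim_second: float) -> float:
    """Rough sequential wall-clock estimate for a simulation campaign."""
    return runs * sim_seconds * real_per_sim_second / 3600.0

# 100 runs x 10 simulated s x ~100 real s per simulated s
# ~= 27.8 hours of sequential simulation time per approach
print(round(estimated_wall_clock_hours(100, 10, 100.0), 1))
```

By the same arithmetic, the reduced docker defaults (2 runs of 1 s each) finish in a few minutes.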

Data Extraction From the Simulation Results

Due to the size of the simulation results (.vec and .sca files), we cannot publish them in the raw format. However, the provided eval_single_long.py script extracts the important information and stores it in the results.pkl file. This file is provided as part of the dataset.

To extract the simulation results from your own simulation runs, you can execute the following command:

python3 emergency_eval/eval_single_long.py

Evaluation

When the eval_single_long.py script is executed and the results.pkl file is present, it reads the data from this file instead of the raw simulation results.

python3 emergency_eval/eval_single_long.py

The script then performs the following actions:

  1. Merge the runs of each approach (prints some statistics); the merged data is saved to results_merged.pkl and reused if the script is executed again.
  2. Perform a sanity check to ensure all deadlines are met, with a log per stream containing the following information (green output means the sanity check passed):
    • Total number of frames received
    • Number of frames delayed by a sporadic stream
    • Number of frames arriving outside of the specification (too early or too late). For a correct schedule, this is always 0. If any of these numbers is non-zero, the log is printed in a color other than green.
  3. Calculate and print the worst-case delay for all isochronous and sporadic streams, as well as the worst-case jitter for all isochronous streams.
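The per-stream sanity check and the jitter computation in steps 2 and 3 can be illustrated with a minimal sketch. The arrival/window representation below is an assumption for illustration only (the actual script works on the extracted results.pkl data), and jitter is computed here as the common max-minus-min spread of observed delays.

```python
def frames_outside_spec(arrivals, windows):
    """Count frames arriving outside their allowed [earliest, latest] window.

    arrivals: list of frame arrival times; windows: matching list of
    (earliest, latest) tuples. For a correct schedule the count is 0.
    NOTE: this data layout is hypothetical, chosen for illustration.
    """
    return sum(1 for t, (lo, hi) in zip(arrivals, windows)
               if t < lo or t > hi)

def worst_case_jitter(delays):
    """Jitter as the spread between the max and min observed delay."""
    return max(delays) - min(delays)
```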
