This document describes how to reproduce the results of Section VI.B, "Evaluation in Wired TSN with Sporadic Release Times", of the paper "An (m, k)-firm Elevation Policy for Weakly Hard Real-Time in Converged 5G-TSN Networks". For more context, please refer to this document: https://doi.org/10.5281/zenodo.19224732
We provide the results of all intermediate steps so that each step can be reproduced individually.
Code: The scripts used for the schedulability analysis are provided as part of this repository in the emergency_eval folder.
Datasets: The respective dataset of the (intermediate) results can be found here.
The working directory should always be the root folder of this repository.
Depending on your environment you may need to adjust the PYTHONPATH using the following command:
export PYTHONPATH="${PYTHONPATH}:emergency_eval:."

We recommend creating a conda environment with the provided environment.yml file:

conda env create -f environment.yml

Otherwise, Python 3.12 with the following dependencies is required:
- networkx=3.1
- pygraphviz=1.9 (conda-forge)
- lxml
- numpy>=1.26,<2.0
- matplotlib>=3.8,<4.0
Additional requirements for running the schedulers:
- docplex (ibmdecisionoptimization)
Additional requirements for running the worst case analysis:
- graph-tool (conda-forge)
- natsort
- pandas
This section describes how to reproduce the schedulability analysis of the paper.
Datasets: All (intermediate) results of this section are contained in the dsn26_schedulability.zip file.
After following the Prerequisites, you can jump right into any of the following steps and start from there, using the provided intermediate results from the datasets of the previous sections.
Before executing any code, please adjust the settings in the emergency_eval/settings.py file.
For the scheduling phase, further repositories and programs are required; please refer to the Scheduling Prerequisites section.
The topology is generated by executing the gen_top.py script.
python3 emergency_eval/gen_top.py

The generated topology file topology.json used in our evaluation is provided in the t_3x4 folder of the dataset.
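The schema of the generated JSON files (topology.json, streams.json) is not documented here, but a small helper like the following can give a first look at any of them. This is a convenience sketch, not part of the repository:

```python
import json
from pathlib import Path

def summarize_json(path):
    """Return a one-line summary of a generated JSON file
    (e.g. topology.json); the exact schema is not documented here."""
    data = json.loads(Path(path).read_text())
    if isinstance(data, dict):
        return sorted(data.keys())  # top-level keys of the document
    return f"list of {len(data)} entries"
```

For example, `summarize_json("dsn26_schedulability/t_3x4/topology.json")` prints the top-level keys of the topology file.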
The streamsets are generated by executing the gen_streams_schedulabilitytest.py script.
python3 emergency_eval/gen_streams_schedulabilitytest.py

The resulting streamsets used in our paper are provided in the p_24 folder (inside of t_3x4) of the dataset with the following structure:
- Each r_x folder contains one isochronous streamset streams.json and multiple et_y subfolders.
- Each et_y subfolder contains one sporadic streamset streams_et.json, where y corresponds to the number of sporadic streams.
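To iterate over this folder structure programmatically, a sketch like the following can be used. It assumes only the r_x/et_y naming convention described above and is not part of the repository:

```python
import re
from pathlib import Path

def sporadic_count(et_name):
    """Extract y (the number of sporadic streams) from an 'et_y' folder name."""
    m = re.fullmatch(r"et_(\d+)", et_name)
    if m is None:
        raise ValueError(f"not an et_y folder: {et_name}")
    return int(m.group(1))

def iter_streamsets(p_folder):
    """Yield (r_x name, et_y name, y) for every sporadic streamset below p_folder.

    Note: sorted() orders names lexicographically (et_10 before et_2)."""
    for r_dir in sorted(Path(p_folder).glob("r_*")):
        for et_dir in sorted(r_dir.glob("et_*")):
            yield r_dir.name, et_dir.name, sporadic_count(et_dir.name)
```

For example, `iter_streamsets("dsn26_schedulability/t_3x4/p_24")` enumerates all sporadic streamsets of the dataset.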
For the schedule calculation the following projects are required:
- Our approach:
- The CP-based scheduler for the primary schedule for isochronous streams: https://github.com/ustutt-ipvs-vs/primary_cp_schedule_MKFirm
- Our augmentation approach: https://github.com/ustutt-ipvs-vs/schedule_augmentation_MKFirm (needs to be built locally, see README.md)
- Our implementation of E-TSN: https://github.com/ustutt-ipvs-vs/etsn_MKFirm
You can either clone the repositories manually or execute the following commands to clone all required repositories next to the root directory of this repo. After this, your filesystem tree should look like this:
workspace/
├─ eval_wired_sporadic_MKFirm (this repo)
│ ├─ dsn26_schedulability (generated in the previous steps or downloaded from the datasets)
│ │ ├─ ...
├─ etsn_MKFirm
├─ primary_cp_schedule_MKFirm
├─ schedule_augmentation_MKFirm
Clone repositories
git clone https://github.com/ustutt-ipvs-vs/primary_cp_schedule_MKFirm ../primary_cp_schedule_MKFirm
git clone https://github.com/ustutt-ipvs-vs/schedule_augmentation_MKFirm ../schedule_augmentation_MKFirm
git clone https://github.com/ustutt-ipvs-vs/etsn_MKFirm ../etsn_MKFirm

Then build the schedule_augmentation_MKFirm project. For more information, please refer to the respective README.md file.
cd ../schedule_augmentation_MKFirm
mkdir release
cd release
cmake ..
cmake --build .
cd ../../eval_wired_sporadic_MKFirm

Furthermore, an installation of the CPLEX optimizer needs to be present.
We cannot ship CPLEX separately as part of these artifacts, as a license is required.
A free academic license including the installer can be found here.
After installation, please adjust the cplex_path in the settings.py file to the location of the cpoptimizer binary file on your system.
To execute the scheduling phase, you can run the emergency_eval/run_scheduler_schedulabilitytest.py.
Please note that this requires significant system resources and time to execute.
python3 emergency_eval/run_scheduler_schedulabilitytest.py

In order to run the scheduler for only one streamset, you can use the -f argument, for example:
python3 emergency_eval/run_scheduler_schedulabilitytest.py -f dsn26_schedulability/t_3x4/p_24/r_0/et_1

The results of the scheduler can be found in the respective folder under files with the following prefixes:
- Our approach:
  - CP-based primary schedule: cp_out
  - Augmented schedule: libtsndgm_out
- E-TSN: etsn_out
The following files are provided:
- [prefix].json: The schedule (if this file is missing, the scheduler did not complete successfully).
- [prefix].log: The output log of the scheduler.
- [prefix]_meta.json: Further metadata (such as the runtime and exit code).
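Based only on the file conventions above, a small helper can report which scheduler runs completed. Since the exact metadata keys are not specified here, only the presence of the files is checked; this is a convenience sketch, not part of the repository:

```python
import json
from pathlib import Path

# Output prefixes named above
PREFIXES = ("cp_out", "libtsndgm_out", "etsn_out")

def scheduler_status(folder, prefix):
    """Return 'ok' if [prefix].json exists (scheduler completed), else 'failed'."""
    folder = Path(folder)
    if not (folder / f"{prefix}.json").exists():
        return "failed"
    meta = folder / f"{prefix}_meta.json"
    if meta.exists():
        json.loads(meta.read_text())  # raises if the metadata file is corrupt
    return "ok"
```

For example, iterating `scheduler_status(f, p)` over all streamset folders and all three prefixes gives a quick schedulability overview.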
The results of all schedulers on our system are available in the dataset.
Using the eval_schedulability.py script, the results used in the paper are calculated.
It yields two matplotlib figures (which are merged into Fig. 8a in the paper)
and generates a results_24.json file with the raw numbers shown in the figure.
python3 emergency_eval/eval_schedulability.py

You can use this script after running the schedulers for all streamsets on your system.
Alternatively, when the scheduling results from our systems are available in the dsn26_schedulability folder,
this script will yield the exact same results as shown in the paper.
In this section, we describe how the worst-case analysis can be reproduced.
The streamset with 24 isochronous and 24 sporadic streams used for the simulation-based worst-case analysis is provided
in folder p_24/r_9/et_24 of the schedulability analysis dataset.
Datasets: The (intermediate) results of this section are contained in the dsn26_worstcase.zip file.
The worst case analysis is performed using OMNeT++ simulations. Due to the number of runs performed, reproducing the exact results of the paper requires significant system resources and time. Thus, we provide the following options:
- A Dockerfile with a single run command (see this section) that:
- Contains all required prerequisites
- Performs a few short runs of the simulation
- Evaluates the results of these few runs
- A detailed description listing all requirements and steps to perform the full simulation and evaluation (see section Manual Execution).
Note: In any case, the scheduling results of the previous section for dsn26_schedulability/t_3x4/p_24/r_9/et_24 need to be available
(either by executing the previous steps yourself or by downloading them from the provided datasets)!
By default, the docker environment is set up to only perform 2 simulation runs of 1s each (instead of 100 runs with 10s each as used in the paper).
These settings can also be adjusted using the emergency_eval/settings_docker.py file.
To build and run the docker container and print the result, please execute the following two commands:
docker build --progress=plain -t eval_wired_sporadic_mkfirm .
docker run --rm -v $(pwd)/dsn26_schedulability:/usr/src/workspace/eval_wired_sporadic_MKFirm/dsn26_schedulability -v $(pwd)/dsn26_worstcase_docker:/usr/src/workspace/eval_wired_sporadic_MKFirm/dsn26_worstcase eval_wired_sporadic_mkfirm

Note: Building the docker container still requires some time, as the whole OMNeT++ and INET project is built from source.
After execution, the results are printed to the console. For their interpretation, please refer to the Evaluation section and the paper.
This section describes how to reproduce the exact results as presented in our paper. This requires significant time and system resources!
In case you've downloaded the simulation results (dsn26_worstcase.zip from the dataset),
you can directly jump to the Evaluation section and perform the evaluation script
based on the provided results.pkl file.
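Since results.pkl is a standard Python pickle, it can be loaded for interactive inspection as follows. The internal structure of the stored object is not documented here, so explore the returned value interactively; this snippet is a convenience sketch, not part of the repository:

```python
import pickle

def load_results(path):
    """Load the extracted simulation results from a pickle file.

    Only load pickle files from a trusted source (such as the provided
    dataset), since unpickling can execute arbitrary code."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

For example, `results = load_results("dsn26_worstcase/results.pkl")` followed by `type(results)` is a good starting point.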
In order to execute the simulations of this section, a working installation of OMNeT++ and INET is required. The versions used for the simulation in this paper are:
- OMNeT++: v6.1.0
- INET: v4.5.4
Furthermore, the scheduling results of the previous section for t_3x4/p_24/r_9/et_24 need to be available.
Please make sure the opp_run executable of OMNeT++ is available in the PATH environment variable.
Please also make sure to adjust the other parameters in the settings.py file.
Based on the provided topology and streamset file of the above scenario, an OMNeT++ simulation is generated and then executed using the simulate_single_scenario_long.py file.
python3 emergency_eval/simulate_single_scenario_long.py

The generated simulation files (omnetpp.ini and Scenario.ned) are contained in the libtsndgm folder for our approach and the etsn folder for E-TSN.
Furthermore, when generating the simulation scenario an additional streams_meta.json file is provided which is used in the final evaluation step.
For the paper we evaluated 100 simulation runs each with a duration of 10 seconds in simulation time (on our system 1 simulated second took approximately 100 real seconds).
Thus, the simulation can take a significant amount of time and system resources to execute.
The number of runs and the duration of each run can be adjusted in the settings.py file.
Due to the size of the simulation results (.vec and .sca files), we cannot publish them in the raw format.
However, the provided eval_single_long.py script extracts the important information and stores it in the results.pkl file.
This file is provided as part of the dataset.
To extract the simulation results from your own simulation runs, you can execute the following command:
python3 emergency_eval/eval_single_long.py

When the eval_single_long.py script is executed while the results.pkl file is present, it reads the data from this file instead of the raw simulation results.
The script then performs the following actions:
- Merge the runs of each approach (prints some statistics); the merged file is saved to results_merged.pkl and used if the script is executed again.
- Perform a sanity check to ensure all deadlines are met, with a log per stream containing the following information (green output means the sanity check passed):
- Total number of frames received
- Number of frames being delayed by a sporadic stream
- Number of frames arriving outside of the specification (too early or too late). For a correct schedule this is always 0. If any of these numbers is non-zero, the log is printed in a color other than green.
- Calculate and print the worst-case delay for all isochronous streams and sporadic streams and the worst case jitter for all isochronous streams.
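The last step can be sketched as follows. We assume here that the worst-case delay is the maximum observed end-to-end delay of a stream and that jitter is the spread between the largest and smallest observed delay; the paper's exact definitions may differ:

```python
def worst_case_delay(delays):
    """Worst-case delay: the maximum observed end-to-end delay of a stream."""
    return max(delays)

def worst_case_jitter(delays):
    """Jitter as the max-min spread of the observed delays (an assumption
    made here for illustration)."""
    return max(delays) - min(delays)

# Hypothetical per-frame delays (in microseconds) of one isochronous stream:
delays = [120.0, 118.5, 130.2, 119.1]
print(worst_case_delay(delays), worst_case_jitter(delays))
```

Applying these two functions per stream over the merged simulation runs yields the quantities printed in this step.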