The pipeline shows the usage of a bypass model: a simple model that performs no computation on the input data. It demonstrates how data is pre-processed before being passed to the model and can be used for troubleshooting. The demo uses the maintain_aspect_ratio flag to show how the pre-processed data can be compared with the raw data.
The identity model is prepared by converting an nn.Identity PyTorch model to ONNX format (using the standard torch.onnx.export). For more details, please refer to the export ONNX model section.
Tested on platforms:
- Nvidia Turing, Ampere
- Nvidia Jetson Orin family
```bash
git clone https://github.com/insight-platform/Savant.git
cd Savant
git lfs pull
./utils/check-environment-compatible
```

Note: the Ubuntu 22.04 runtime configuration guide helps to configure the runtime to run Savant pipelines.
The demo uses models that are compiled into TensorRT engines the first time the demo is run. This takes time. Optionally, you can prepare the engines before running the demo by using the command:
```bash
# you are expected to be in Savant/ directory
./scripts/run_module.py --build-engines samples/bypass_model/demo.yml
```

```bash
# you are expected to be in Savant/ directory
# if x86
docker compose -f samples/bypass_model/docker-compose.x86.yml up -d
# if Jetson
docker compose -f samples/bypass_model/docker-compose.l4t.yml up -d
```

A result video can be viewed:
- in the browser at `http://127.0.0.1:888/stream/video-with-preprocessed-frame/` (LL-HLS)
- in a player using `rtsp://127.0.0.1:554/stream/video-with-preprocessed-frame`
The video consists of the original video stream and the pre-processed video stream placed side by side, original on the left. The original video frame is shown on a white background because its size differs from the size of the pre-processed frame. The pre-processed frame has a black background that pads it to the aspect ratio of the original frame, i.e. the maintain_aspect_ratio flag is set to true. The frame is centered on the background because the symmetric_padding flag is set to true.
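The arithmetic behind these two flags can be sketched in plain Python (the function name and signature below are illustrative, not part of Savant's API):

```python
def letterbox(src_w, src_h, dst_w, dst_h):
    """Compute the scaled size and symmetric padding offsets that fit a
    src_w x src_h frame into a dst_w x dst_h canvas, preserving aspect ratio."""
    # maintain_aspect_ratio: scale by the smaller ratio so nothing is cropped
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = int(src_w * scale), int(src_h * scale)
    # symmetric_padding: split the leftover space evenly on both sides
    pad_left = (dst_w - new_w) // 2
    pad_top = (dst_h - new_h) // 2
    return new_w, new_h, pad_left, pad_top

# 1280x720 source into a square 640x640 model input:
print(letterbox(1280, 720, 640, 640))  # (640, 360, 0, 140)
```

With symmetric padding disabled, the same frame would instead sit at the top-left of the canvas with all padding on one side.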
```bash
# you are expected to be in Savant/ directory
# if x86
docker compose -f samples/bypass_model/docker-compose.x86.yml down -v
# if Jetson
docker compose -f samples/bypass_model/docker-compose.l4t.yml down -v
```

The model takes input as a dynamic tensor with axes batch x 3 x height x width, where batch, height, and width are defined at runtime.
This allows feeding the model sources of different shapes. If you need to change the shape of the input tensor, you will need to modify the export.py script. After these modifications, you can export the model using the following command:
```bash
# you are expected to be in Savant/ directory
docker run --rm \
  -v "$(pwd)/samples/bypass_model:/opt/bypass_model" \
  -w /opt/bypass_model \
  --user "$(id -u):$(id -g)" \
  --entrypoint python \
  ghcr.io/insight-platform/savant-deepstream-extra \
  /opt/bypass_model/export.py
```
When the command completes, the model definition is printed to the console. It should look like this:

```
graph main_graph (
  %input[FLOAT, batchx3xheightxwidth]
) {
  %output = Identity(%input)
  return %output
}
```
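If you want to sanity-check this structure without running the exporter, an equivalent identity graph can be built and printed directly with the onnx helper API (a sketch, assuming the onnx package is installed; it is not how the sample itself builds the model):

```python
import onnx
from onnx import helper, TensorProto

# Declare input/output with symbolic (dynamic) batch, height, and width dims.
dims = ["batch", 3, "height", "width"]
inp = helper.make_tensor_value_info("input", TensorProto.FLOAT, dims)
out = helper.make_tensor_value_info("output", TensorProto.FLOAT, dims)

# A single Identity node wired from input to output.
node = helper.make_node("Identity", ["input"], ["output"])
graph = helper.make_graph([node], "main_graph", [inp], [out])

model = helper.make_model(graph)
onnx.checker.check_model(model)       # validates the graph is well-formed
print(helper.printable_graph(graph))  # renders a textual graph like the one above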
After running the command, the ONNX model is saved in the samples/bypass_model directory. To use the model in the pipeline, change demo.yml, replacing the remote section with the following:

```yaml
local_path: /cache/models/bypass_model/identity-local
```

Then uncomment the volume mount in the module service in the docker-compose.x86.yml or docker-compose.l4t.yml file.
