`handbrakecli-nvidia` is an Ubuntu-based Docker image that ships a self-built `HandBrakeCLI`, designed for NVIDIA GPU hardware transcoding in Linux / NAS / Unraid / Docker environments.
The goal of this image is to:
- Run `HandBrakeCLI` inside a container
- Support NVIDIA `NVENC` hardware encoding
- Optionally support `NVDEC` hardware decoding, depending on build options and transcoding parameters
- Avoid Flatpak / GUI dependencies and make CLI automation easier
- Docker-based deployment
- Uses `HandBrakeCLI`
- Supports NVIDIA GPU acceleration
- Suitable for parameter-based CLI transcoding
- Supports using preset JSON files exported from HandBrake GUI
- Good for batch jobs, automation, and NAS use cases
Before using this image, make sure the host system already has:
- NVIDIA driver installed correctly
- Docker installed correctly
- NVIDIA Container Toolkit installed and configured correctly
- Working GPU access via `docker run --gpus all ...`
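Basic GPU access can be confirmed end to end before pulling this image. This sketch uses the stock `ubuntu` image; `nvidia-smi` becomes available inside the container because the NVIDIA Container Toolkit injects the driver utilities:

```shell
# Driver works on the host:
nvidia-smi

# GPU is visible inside a container (driver utilities are injected
# by the NVIDIA Container Toolkit):
docker run --rm --gpus all ubuntu nvidia-smi
```

If the second command prints the same GPU table as the first, the container GPU path is working.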
It is recommended to verify on the host first:

```shell
nvidia-smi
```

Image name: `sczhengyabin/handbrakecli-nvidia`

Pull the image:

```shell
docker pull sczhengyabin/handbrakecli-nvidia:latest
```

Check that the container starts and HandBrakeCLI runs:

```shell
docker run --rm -it \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  sczhengyabin/handbrakecli-nvidia:latest \
  --version
```

List the available hardware encoders and decoders:

```shell
docker run --rm -it \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  sczhengyabin/handbrakecli-nvidia:latest \
  --help | grep -i -E 'nvenc|nvdec|h264|h265|av1'
```

It is recommended to mount a host directory to `/work` inside the container:

```shell
-v /path/to/videos:/work
```

For example, on the host:
```
/path/to/videos/
  input.mp4
  output.mp4
  presets/
    mypreset.json
```
Inside the container it becomes:
```
/work/input.mp4
/work/output.mp4
/work/presets/mypreset.json
```
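The path mapping is mechanical: strip the host prefix, prepend `/work`. For batch scripts, a small hypothetical helper (assuming the `-v /path/to/videos:/work` mount shown above) can do the translation:

```shell
# Hypothetical helper: translate a host path under /path/to/videos
# into the matching container path under /work.
host_root="/path/to/videos"

to_container() {
  # Remove the host prefix, then prepend the container mount point.
  printf '/work/%s\n' "${1#"$host_root"/}"
}

to_container "/path/to/videos/input.mp4"             # /work/input.mp4
to_container "/path/to/videos/presets/mypreset.json" # /work/presets/mypreset.json
```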
This method is suitable when you want to:
- tune parameters manually
- write batch scripts
- avoid GUI presets
- explicitly control encoder, quality, format, and other settings
```shell
docker run --rm -it \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  -v /path/to/videos:/work \
  sczhengyabin/handbrakecli-nvidia:latest \
  --enable-hw-decoding nvdec \
  -i /work/input.mkv \
  -o /work/output.mp4 \
  -f av_mp4 \
  -e nvenc_h265 \
  -q 24 \
  --vfr \
  -B 160
```

Explanation:

- `-i`: input file
- `-o`: output file
- `-f av_mp4`: MP4 container
- `-e nvenc_h265`: NVIDIA H.265 hardware encoder
- `-q 24`: constant quality mode; a lower value usually means higher quality and a larger file size
- `--vfr`: variable frame rate
- `-B 160`: audio bitrate set to 160 kbps
```shell
docker run --rm -it \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  -v /path/to/videos:/work \
  sczhengyabin/handbrakecli-nvidia:latest \
  --enable-hw-decoding nvdec \
  -i /work/input.mkv \
  -o /work/output.mp4 \
  -f av_mp4 \
  -e nvenc_h264 \
  -q 22 \
  --vfr \
  -B 160
```

This version is usually more compatible with older devices and general playback scenarios.
```shell
docker run --rm -it \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  -v /path/to/videos:/work \
  sczhengyabin/handbrakecli-nvidia:latest \
  --enable-hw-decoding nvdec \
  -i /work/input.mkv \
  -o /work/output.mp4 \
  -f av_mp4 \
  -e nvenc_h265 \
  -q 24 \
  -w 1920 \
  -l 1080 \
  --crop 0:0:0:0 \
  --vfr
```

This method is suitable when you:
- already created a preset in HandBrake GUI on Windows or macOS
- want to reuse that preset in Docker or server environments
- do not want to manually translate every preset setting into CLI options
Export your custom preset from the HandBrake GUI as a `.json` file, for example:

`mypreset.json`

Place it in your host directory, for example:

`/path/to/videos/presets/mypreset.json`
```shell
docker run --rm -it \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  -v /path/to/videos:/work \
  sczhengyabin/handbrakecli-nvidia:latest \
  --preset-import-file /work/presets/mypreset.json -z
```

This will list the preset names found in the JSON file.
Assume your preset name is `NV28MP4`:
```shell
docker run --rm -it \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  -v /path/to/videos:/work \
  sczhengyabin/handbrakecli-nvidia:latest \
  --enable-hw-decoding nvdec \
  --preset-import-file /work/presets/mypreset.json \
  -Z "NV28MP4" \
  -i /work/input.mkv \
  -o /work/output.mp4
```

Notes:

- The safest workflow is to run `-z` first to list the preset names, then call the preset by name with `-Z`
- If the preset includes encoders or hardware features not available in the current build, transcoding may fail
- Presets exported from the HandBrake GUI on Windows usually work with the Linux CLI, but hardware-specific features still depend on the current build and host GPU environment
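The two steps can be wrapped in one small script. This is a sketch; the preset name `NV28MP4`, the JSON file, and the paths are the placeholders from the examples above:

```shell
#!/bin/sh
# Sketch: list the preset names, then transcode with the chosen one.
PRESET_FILE="/work/presets/mypreset.json"
PRESET_NAME="NV28MP4"

RUN="docker run --rm \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  -v /path/to/videos:/work \
  sczhengyabin/handbrakecli-nvidia:latest"

# Step 1: confirm the preset name exists in the JSON file.
$RUN --preset-import-file "$PRESET_FILE" -z

# Step 2: transcode with that preset.
$RUN --enable-hw-decoding nvdec \
  --preset-import-file "$PRESET_FILE" \
  -Z "$PRESET_NAME" \
  -i /work/input.mkv \
  -o /work/output.mp4
```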
`--gpus all` alone is often not enough. To make the NVIDIA video encode/decode driver libraries available inside the container, the `video` capability usually needs to be enabled explicitly.

Recommended:

```shell
-e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
```

If hardware encoding is not available or transcoding fails, please check:
- Whether the NVIDIA driver on the host is working properly
- Whether NVIDIA Container Toolkit is installed
- Whether `NVIDIA_DRIVER_CAPABILITIES=compute,utility,video` is included
- Whether the current build actually contains the required encoders
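The first three items can be checked on the host before looking at the image itself. A sketch, assuming an x86_64 Ubuntu base image for the library paths (`nvidia-ctk` ships with the NVIDIA Container Toolkit):

```shell
# 1. Host driver works:
nvidia-smi

# 2. NVIDIA Container Toolkit is installed:
nvidia-ctk --version

# 3. With the video capability enabled, the encode/decode libraries
#    are injected into even a plain container:
docker run --rm --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  ubuntu \
  sh -c 'ls /usr/lib/x86_64-linux-gnu/libnvidia-encode* /usr/lib/x86_64-linux-gnu/libnvcuvid*'
```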
You can verify with:

```shell
docker run --rm -it \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  sczhengyabin/handbrakecli-nvidia:latest \
  --help | grep -i nvenc
```

Here is a simple example that converts all `.mkv` files in a directory to `.mp4`:
```shell
for f in /path/to/videos/*.mkv; do
  base="$(basename "$f" .mkv)"
  docker run --rm \
    --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
    -v /path/to/videos:/work \
    sczhengyabin/handbrakecli-nvidia:latest \
    --enable-hw-decoding nvdec \
    -i "/work/${base}.mkv" \
    -o "/work/${base}.mp4" \
    -f av_mp4 \
    -e nvenc_h265 \
    -q 24 \
    --vfr
done
```

Please follow the license requirements of HandBrake itself and all related dependencies.
The Docker build scripts and documentation in this repository may be used according to the license declared in the repository.
This image is only intended to make it easier to run HandBrakeCLI in container environments.
Availability of NVIDIA hardware acceleration depends on:
- host drivers
- Docker / NVIDIA Container Toolkit configuration
- GPU model
- current HandBrake build options
- actual transcoding parameters