- Docker Engine with GPU support (NVIDIA Container Toolkit)
- Docker Compose v1.28+ (for GPU support)
- NVIDIA GPU with CUDA support
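Before building, it can help to confirm the required tooling is on the PATH. A minimal sketch (checks command presence only, not versions or GPU health):

```shell
# Report which prerequisite commands are available on this machine.
status=""
for cmd in docker docker-compose nvidia-smi; do
  if command -v "$cmd" >/dev/null 2>&1; then
    status="$status $cmd=found"
  else
    status="$status $cmd=missing"
  fi
done
echo "Prerequisites:$status"
```

A `missing` entry for `nvidia-smi` usually means the NVIDIA driver (and with it the Container Toolkit) is not installed.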
```shell
docker-compose up --build   # build and start the application
docker-compose up -d        # start in the background (detached)
docker-compose logs -f      # follow the application logs
docker-compose down         # stop and remove the containers
```

The application runs on port 5000 by default. To change the port, edit docker-compose.yml:
```yaml
ports:
  - "8080:5000"  # Change 8080 to your desired port
```

The Docker Compose file is configured to use all available NVIDIA GPUs. To limit GPU access:
```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: ['0']   # Use only GPU 0
          capabilities: [gpu]
```

The following directories are mounted for persistence:
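The Compose deploy specification also accepts a `count` field in place of explicit `device_ids`. A sketch reserving any two GPUs (swap in `count: all` for every GPU):

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 2            # reserve any two GPUs
          capabilities: [gpu]
```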
- `./workflows` - Saved workflows
- `./logs` - Application logs
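In docker-compose.yml these appear as bind mounts; a minimal sketch (the container-side paths are assumptions and should match your image's working directory):

```yaml
volumes:
  - ./workflows:/app/workflows   # hypothetical container path
  - ./logs:/app/logs             # hypothetical container path
```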
To use a different CUDA version, edit the Dockerfile (Dockerfile comments must start at the beginning of a line, so the note goes above the instruction):

```dockerfile
# Change the CUDA version here as needed
FROM nvidia/cuda:12.8.0-runtime-ubuntu22.04
```

For CPU-only deployment, use the CPU Dockerfile:
```shell
docker build -f Dockerfile.cpu -t pynode-cpu .
docker run -p 5000:5000 pynode-cpu
```

Once running, access the web interface at `http://localhost:5000` (substitute your mapped host port if you changed it).
Verify NVIDIA Container Toolkit is installed:
```shell
docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu22.04 nvidia-smi
```

Ensure the mounted directories have the proper permissions:

```shell
mkdir -p workflows logs
chmod 777 workflows logs
```

To rebuild the image from scratch:

```shell
docker-compose down
docker-compose build --no-cache
docker-compose up
```
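The permission steps above can be sanity-checked before starting the stack. A sketch (assumes `workflows/` and `logs/` sit in the current working directory):

```shell
# Create the bind-mount directories and verify they are writable.
mkdir -p workflows logs
for d in workflows logs; do
  if [ -w "$d" ]; then
    echo "$d: writable"
  else
    echo "$d: NOT writable"
  fi
done
```

If a directory reports `NOT writable`, the container's mounted volume will fail on its first write.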