Turn any NVIDIA GPU into a local AI platform. Inference + fine-tuning in your browser. One command to start, automatic clustering.
NVML unified memory shim for NVIDIA DGX Spark Grace Blackwell GB10 - enables MAX Engine, PyTorch, and GPU monitoring
Complete setup guide for a 2-node NVIDIA DGX Spark cluster — distributed training, CUDA inference with EXO, NCCL tuning for Grace Blackwell, NVMe-TCP shared storage, and 200 Gb/s direct fabric networking.
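The NCCL tuning that guide covers usually comes down to a handful of environment variables set before launching the distributed job. A minimal sketch for a 2-node direct-fabric setup; the interface name is a placeholder, and which RDMA/P2P settings apply depends on your hardware:

```shell
# Hypothetical NCCL tuning for a 2-node DGX Spark link.
# enp1s0f0 is a placeholder; substitute your 200 Gb/s fabric interface.
export NCCL_SOCKET_IFNAME=enp1s0f0   # restrict NCCL to the direct fabric
export NCCL_IB_DISABLE=0             # allow RDMA transport if the fabric supports it
export NCCL_P2P_LEVEL=NVL            # prefer NVLink/C2C peer paths on Grace Blackwell
export NCCL_DEBUG=INFO               # verbose logs while validating the topology
```

With these exported, a standard `torchrun --nnodes=2 ...` launch picks them up automatically; drop `NCCL_DEBUG` once the cluster is validated.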
Pre-built PyTorch wheels and build scripts for NVIDIA DGX Spark (GB10, sm_121, Blackwell, CUDA 13.0, ARM64)
NXO — Distributed AI inference for NVIDIA/Linux. Fork of EXO focused on CUDA, tinygrad, and DGX Spark clusters.