|
1 | | -# AI pre-decoder for surface-code memory circuits |
| 1 | +# Ising Decoding |
2 | 2 |
|
3 | 3 | [](./LICENSE) |
4 | 4 | [](https://github.com/NVIDIA/Ising-Decoding/tree/releases/v0.1.0) |
5 | 5 | [](https://research.nvidia.com/publication/2026-04_fast-ai-based-pre-decoders-surface-codes) |
6 | 6 | [](https://huggingface.co/nvidia/Ising-Decoder-SurfaceCode-1-Fast) |
7 | 7 | [](https://huggingface.co/nvidia/Ising-Decoder-SurfaceCode-1-Accurate) |
8 | 8 |
|
9 | | -This repo implements a **pre-decoder** for surface-code memory experiments: |
| 9 | +This repo offers AI training frameworks and recipes to build, customize, and deploy scalable quantum error correction **decoders**:
10 | 10 |
|
11 | 11 | - A neural network consumes detector syndromes across space **and** time |
12 | 12 | - It predicts corrections that reduce syndrome density / improve decoding |
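The two bullets above can be illustrated with a toy sketch. To be clear, this is **not** the repository's neural model: the pairing heuristic, function name, and example syndrome below are all illustrative assumptions, meant only to show what "predicting corrections that reduce syndrome density" means on a binary detector-syndrome array.

```python
import numpy as np

def toy_predecode(syndrome):
    """Toy stand-in for a pre-decoder: cancel horizontally adjacent
    defect pairs. The real pre-decoder is a learned neural network,
    not this hand-written heuristic."""
    s = syndrome.copy()
    corrections = []
    for i in range(s.shape[0]):
        j = 0
        while j < s.shape[1] - 1:
            if s[i, j] and s[i, j + 1]:
                # A matched defect pair is removed from the syndrome,
                # and the implied correction is recorded.
                s[i, j] = s[i, j + 1] = 0
                corrections.append((i, j))
                j += 2
            else:
                j += 1
    return s, corrections

# Hypothetical 2-row syndrome slice with four defects.
syn = np.array([[1, 1, 0, 1],
                [0, 0, 1, 1]])
out, corr = toy_predecode(syn)
print(out.sum(), "<", syn.sum())  # → 1 < 4: syndrome density reduced
```

The residual (lower-density) syndrome would then be handed to a conventional global decoder, as described in the benchmarking section below.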
@@ -100,8 +100,8 @@ pip install -r code/requirements_public_inference.txt |
100 | 100 |
|
101 | 101 | 2. **Get the pre-trained models** |
102 | 102 | This repo ships two pre-trained model files (tracked with Git LFS): |
103 | | - - `models/PreDecoderModelMemory_r9_v1.0.77.pt` (receptive field R=9, checkpoint 77) |
104 | | - - `models/PreDecoderModelMemory_r13_v1.0.86.pt` (receptive field R=13, checkpoint 86) |
| 103 | + - `models/Ising-Decoder-SurfaceCode-1-Fast.pt` (receptive field R=9) |
| 104 | + - `models/Ising-Decoder-SurfaceCode-1-Accurate.pt` (receptive field R=13) |
105 | 105 |
|
106 | 106 | After cloning, fetch the model files with `git lfs pull`. Optionally, set `PREDECODER_MODEL_URL` to the LFS/raw URL to fetch files when they are not in the working tree (e.g. in a minimal checkout or CI).
107 | 107 |
|
@@ -146,8 +146,16 @@ The pre-trained public models use `--model-id 1` (R=9) and `--model-id 4` (R=13) |
146 | 146 | After training (or starting from the shipped `.safetensors` files), you can export the model to |
147 | 147 | ONNX and optionally apply INT8 or FP8 post-training quantization for deployment. |
148 | 148 |
|
149 | | -Set the `ONNX_WORKFLOW` and (optionally) `QUANT_FORMAT` environment variables before running |
150 | | -inference with `local_run.sh`: |
| 149 | +You may also change the surface code distance and number of rounds at inference |
| 150 | +time. That is, you are not required to retrain the model when changing either
| 151 | +of these parameters: because the model is a 3D convolutional neural network,
| 152 | +it is simply run over the new decoding volume.
| 153 | + |
| 154 | +- To run with a new distance, simply add `DISTANCE=<your distance>` to the commands below. |
| 155 | +- To run with a new number of rounds, simply add `N_ROUNDS=<your number of rounds>` to the commands below. |
| 156 | + |
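The shape-agnostic behavior described above can be sketched with a plain numpy "valid" 3D convolution. This is illustrative only: the kernel, volume sizes, and the naive loop below are assumptions for the sketch, not the repository's model, but they show why a convolutional network defined once can be run over decoding volumes of different distance and round count.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution; works for any volume at least
    as large as the kernel, with no retraining or reshaping."""
    kd, kh, kw = kernel.shape
    od = volume.shape[0] - kd + 1
    oh = volume.shape[1] - kh + 1
    ow = volume.shape[2] - kw + 1
    out = np.empty((od, oh, ow))
    for i in range(od):
        for j in range(oh):
            for k in range(ow):
                out[i, j, k] = np.sum(volume[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

kernel = np.ones((3, 3, 3))  # stand-in for one learned 3D filter
small = conv3d_valid(np.random.rand(9, 9, 9), kernel)     # one decoding volume
large = conv3d_valid(np.random.rand(13, 13, 25), kernel)  # larger distance, more rounds
print(small.shape, large.shape)  # → (7, 7, 7) (11, 11, 23)
```

The same filter slides over both volumes; only the output shape changes, which is exactly why `DISTANCE` and `N_ROUNDS` can be overridden at inference time.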
| 157 | +Set the `ONNX_WORKFLOW` and, optionally, the `QUANT_FORMAT`, `DISTANCE`, and
| 158 | +`N_ROUNDS` environment variables before running inference with `local_run.sh`:
151 | 159 |
|
152 | 160 | | `ONNX_WORKFLOW` | Behavior | |
153 | 161 | |---|---| |
@@ -177,7 +185,16 @@ ONNX_WORKFLOW=3 WORKFLOW=inference bash code/scripts/local_run.sh |
177 | 185 | | `QUANT_FORMAT` | unset | `int8` or `fp8`. Unset means no quantization (FP32 ONNX). | |
178 | 186 | | `QUANT_CALIB_SAMPLES` | `256` | Calibration samples for INT8/FP8 post-training quantization. | |
179 | 187 |
|
| 188 | +**Circuit variables:** |
| 189 | + |
| 190 | +| Variable | Default | Description | |
| 191 | +|---|---|---| |
| 192 | +| `CONFIG_NAME` | `config_public` | Name of the config file; defaults are read from `conf/$CONFIG_NAME.yaml` |
| 193 | +| `DISTANCE` | Value specified in `conf/$CONFIG_NAME.yaml` | Surface code distance |
| 194 | +| `N_ROUNDS` | Value specified in `conf/$CONFIG_NAME.yaml` | Number of rounds in the memory experiment |
| 195 | + |
180 | 196 | Notes: |
| 197 | + |
181 | 198 | - TensorRT workflows (`ONNX_WORKFLOW=2` or `3`) require `tensorrt` and `modelopt`. |
182 | 199 | - FP8 quantization failure is fatal. INT8 failure falls back to the FP32 ONNX model silently. |
183 | 200 | - ONNX and engine files are written to the current working directory. |
@@ -223,7 +240,7 @@ Results are written to `outputs/<EXPERIMENT_NAME>/plots/`. |
223 | 240 | | Decoder | Source | Notes | |
224 | 241 | |---|---|---| |
225 | 242 | | No-op | — | Pre-decoder output only, no global correction | |
226 | | -| Union-Find | `ldpc` | Fast, sub-optimal | |
| 243 | +| Union-Find | `ldpc` | Fast, sub-optimal logical error rate (LER) |
227 | 244 | | BP-only | `ldpc` | Belief propagation, no OSD | |
228 | 245 | | BP+LSD-0 | `ldpc` | BP with localized statistics decoding | |
229 | 246 | | Uncorr-PM | PyMatching | Uncorrelated minimum-weight perfect matching | |
@@ -573,4 +590,4 @@ Presence of these headers is enforced automatically by the `spdx-header-check` C |
573 | 590 | `.github/workflows/ci.yml`). |
574 | 591 |
|
575 | 592 | Third-party open source components bundled with or required by this project are listed with their |
576 | | -respective copyright notices and license texts in [NOTICE](NOTICE). |
| 593 | +respective copyright notices and license texts in [NOTICE](NOTICE). |