🌐 Languages: English · العربية · Español · Français · 日本語 · 한국어 · Tiếng Việt · 中文 (简体) · 中文(繁體) · Deutsch · Русский
> [!NOTE]
> i18n status in this checkout: all linked translation files are present under `i18n/` (ar, de, es, fr, ja, ko, ru, vi, zh-Hans, zh-Hant), with English as the canonical root README.
A comprehensive pipeline for reconstructing spectra from event cameras under dispersed-light illumination (e.g., a diffraction grating). The system records intensity-change events while the illumination sweeps across wavelengths, so the event stream encodes spectral information over time.
> [!IMPORTANT]
> This README is the canonical technical source for the repository root. Localized files under `i18n/` should mirror section and header evolution while keeping exactly one language-options line at the top.
| Need | Jump to |
|---|---|
| Start in ~5 minutes | Quick Start (5-Min Path) ⚡ |
| Run the full pipeline wrapper | scripts/run_scan_pipeline.sh |
| Understand script flow | Overview 🔭, Core Scripts 🧠 |
| Tune parameters | Configuration 🎛️, Configuration Examples 🧩 |
| Use GUI tools | Additional Tools 🛠️ |
| Hardware docs (BOM, PCB, 3D, firmware) | Repository Map 🗺️ |
| Multilingual maintenance rules | Internationalization 🌍 |
| Sponsor and support links | Support / Sponsor 💖 |
| Item | Details |
|---|---|
| Core idea | Self-calibrated hyperspectral derivative imaging from event streams |
| Main stages | segment_robust_fixed.py -> compensate_multiwindow_train_saved_params.py -> visualization scripts |
| Hardware docs in repo | 3D/, PCB/, firmware/, BOM/ |
| Desktop tools | scan_compensation_gui_cloud.py, ImagingGUI/DualCamera_separate_transform.py |
| Canonical paper | Optica article (DOI: 10.1364/OPTICA.585766) |
| i18n in this checkout | README.ar.md, README.de.md, README.es.md, README.fr.md, README.ja.md, README.ko.md, README.ru.md, README.vi.md, README.zh-Hans.md, README.zh-Hant.md |
| Area | Current repository reality |
|---|---|
| Python baseline | 3.9+ recommended (some ImagingGUI/ utilities note 3.10+) |
| Main pipeline launcher | scripts/run_scan_pipeline.sh |
| Core training script | compensate_multiwindow_train_saved_params.py |
| Hardware collateral | Present under 3D/, PCB/, BOM/, firmware/ |
| Multilingual docs | i18n/ contains all 10 linked language files |
Left: modular transmission microscope with a motorised grating illumination arm and vertical detection stack. Right: data-acquisition GUI used to monitor segmentation, compensation, and reconstructions in real time.
> [!TIP]
> Purchase the core development kit (excluding camera, tube lens, and optical table) for the paper *Self-calibrated neuromorphic hyperspectral derivative imaging*, published in Optica:
>
> - https://lazying.art/openhi-kit.html
> - Promotion code for 30% off: `OPTICA`
- Quick Access ⚡
- At a Glance 📌
- Overview 🔭
- Features ✨
- Repository Map 🗺️
- Project Structure 📁
- Quick Start (5-Min Path) ⚡
- Prerequisites 🧰
- Installation ⚙️
- Usage 🚀
- Internationalization 🌍
- Configuration 🎛️
- Examples 🧪
- Bill of Materials (Core Module) 🧾
- Core Scripts 🧠
- Additional Tools 🛠️
- Turbo Multi-Scan Compensation ⚡
- Parameter Management 💾
- Memory Optimization 🧱
- Output Structure 📦
- Configuration Examples 🧩
- Wavelength Mapping 🌈
- Tips and Best Practices ✅
- Development Notes 🧭
- Troubleshooting 🩺
- Roadmap 🛣️
- Citation 📎
- Acknowledgements 🙏
- License 📄
- Contributing 🤝
- Support / Sponsor 💖
When illumination sweeps across wavelengths over time, the event stream encodes a temporal derivative of the underlying spectrum along the dispersion axis.
```
RAW event recording
  -> scan timing segmentation (F/B passes)
  -> multi-window time-warp compensation
  -> frame/cumulative/wavelength diagnostics
```
This pipeline provides three main stages:
| Stage | Purpose | Primary script(s) |
|---|---|---|
| 1. Segment | Find scan timing and split recordings into forward/backward passes | segment_robust_fixed.py |
| 2. Compensate | Estimate piecewise-linear time-warp to remove scan-induced temporal tilt | compensate_multiwindow_train_saved_params.py |
| 3. Visualize | Overlay learned boundaries and compare original vs. compensated time-binned frames | visualize_boundaries_and_frames.py, visualize_cumulative_compare.py |
The repository also includes hardware assets, acquisition GUI code, and archival experiment branches under versions/.
| Icon | Meaning |
|---|---|
| 🧩 | Segmentation and scan splitting |
| 🧠 | Compensation and parameter learning |
| 🖼️ | Visual diagnostics and output inspection |
| 🌈 | Wavelength mapping and spectral rendering |
- This repository is research-oriented and includes active scripts plus archival experiments and results.
- Commands in this README assume execution from repository root unless otherwise noted.
- Several optional workflows depend on external SDKs and local datasets that are not bundled here.
- If a command mentions a historical path that no longer exists, prefer the updated root scripts and comparisons paths documented in this README.
- End-to-end RAW-to-spectrum event processing workflow.
- Auto/manual scan period detection and forward/backward segmentation.
- Multi-window compensation with trainable/fixed parameter modes.
- Parameter save/load in `NPZ`, `JSON`, and `CSV`.
- Multi-scan merge workflow for faster training iterations (`compensate_multiwindow_turbo.py`).
- Visualization suite for boundaries, binned frames, cumulative curves, and weighted diagnostics.
- Hardware documentation: BOM, PCB, 3D parts, firmware notes.
- Acquisition utilities for synchronized event/frame camera setups.
| Category | Included capabilities |
|---|---|
| Signal processing | Segmentation, period detection, time-warp compensation |
| Optimization | Trainable/fixed parameters, smoothness controls, chunked training |
| Outputs | Visual overlays, cumulative comparisons, wavelength-mapped diagnostics |
| Platform assets | Hardware design files, firmware notes, GUI tooling, historical archives |
Key hardware assets are kept alongside the code for quick access:
| Area | Path |
|---|---|
| 3D-printed parts | 3D/ |
| PCB layouts | PCB/ |
| Microcontroller firmware | firmware/ |
| Acquisition UI (desktop) | ImagingGUI/ |
| Experiment/data references | comparisons/reference_spectrum_2835/, comparisons/reference_spectrum_lumileds/, references/ |
| Alignment analysis | comparisons/align_background_vs_reference_code/, comparisons/alignment_configs/ |
```
OpenHI/
├── README.md
├── QUICKSTART.md
├── LICENSE
├── versions.md
├── 3D/
├── BOM/
├── PCB/
├── firmware/
├── ImagingGUI/
├── scripts/
├── segment_robust_fixed.py
├── compensate_multiwindow_train_saved_params.py
├── compensate_multiwindow_turbo.py
├── compensate_multiwindow*.py
├── visualize_boundaries_and_frames.py
├── visualize_cumulative_compare.py
├── visualize_cumulative_weighted.py
├── scan_compensation_gui_cloud.py
├── show_envi_spectrum_gui.py
├── simple_raw_reader.py
├── comparisons/align_background_vs_reference_code/
├── align_data_vs_filter_code/
├── comparisons/alignment_configs/
├── versions/05_archive_code_variants/
├── comparisons/outputs_root/
├── comparisons/reference_filters/
├── comparisons/reference_spectrum_2835/
├── comparisons/reference_spectrum_lumileds/
├── references/
├── i18n/
└── versions/
```
If your environment is already prepared and your dataset folder contains a `*event*.raw` file:

```bash
scripts/run_scan_pipeline.sh /path/to/dataset_dir
```

To force a specific RAW file:

```bash
scripts/run_scan_pipeline.sh /path/to/dataset_dir /path/to/recording_event.raw
```

This wrapper runs segmentation, compensation training, and visualization using repository-default script paths and CLI flags.
> [!TIP]
> For a first validation, run the wrapper on one dataset directory, then inspect the generated segment NPZ and visualization outputs before tuning the `PIPELINE_*` variables.
- Python 3.9+ (Python 3.10+ for some GUI tooling under `ImagingGUI/`).
- Core Python packages: `numpy`, `torch`, `matplotlib`.
- Optional but common: `opencv-python`, `pillow`, `cellpose`.
- Metavision SDK / Python bindings for RAW event-reading workflows (`simple_raw_reader.py`, segmentation from RAW).
- CUDA-enabled PyTorch is recommended for faster optimization.
- RAW recordings and/or segmented NPZ files available locally.
No locked environment file is currently provided at the repository root. Suggested setup:

```bash
# create and activate a virtual environment or conda env
python -m venv .venv
source .venv/bin/activate

# install core dependencies
pip install numpy matplotlib torch

# optional tools often used in this repository
pip install opencv-python pillow
# pip install cellpose
```

If using Git hooks for large-file hygiene:

```bash
bash scripts/setup_hooks.sh
```

Recommended quick sanity checks:

```bash
python -c "import numpy, torch, matplotlib; print('core deps ok')"
python -c "import torch; print('cuda:', torch.cuda.is_available())"
```

```bash
# 1. Segment RAW into 6 scans (Forward/Backward)
python segment_robust_fixed.py \
    data/recording.raw \
    --segment_events \
    --output_dir data/segments/

# 2. Train multi-window compensation
python compensate_multiwindow_train_saved_params.py \
    data/segments/Scan_1_Forward_events.npz \
    --bin_width 50000 \
    --visualize --plot_params --a_trainable \
    --iterations 1000

# 3. Visualize results with boundaries
python visualize_boundaries_and_frames.py \
    data/segments/Scan_1_Forward_events.npz

# 4. Compare cumulative vs multi-bin means
python visualize_cumulative_compare.py \
    data/segments/Scan_1_Forward_events.npz \
    --sensor_width 1280 --sensor_height 720
```

Or run the full wrapper in one step:

```bash
scripts/run_scan_pipeline.sh /path/to/dataset_dir [raw_file]
```

| Step | Command entrypoint | Primary output |
|---|---|---|
| Segment scans | `segment_robust_fixed.py` | `*_segments/Scan_*_{Forward,Backward}_events.npz` |
| Train compensation | `compensate_multiwindow_train_saved_params.py` | `*learned_params_n*.{npz,json,csv}` plus diagnostics |
| Boundary and frame diagnostics | `visualize_boundaries_and_frames.py` | Timestamped visualization folder with overlays and bins |
| Cumulative diagnostics | `visualize_cumulative_compare.py`, `visualize_cumulative_weighted.py` | Cumulative and statistical plots for scan quality checks |
| Convenience wrapper | `scripts/run_scan_pipeline.sh` | End-to-end segmentation, training, and visualization |
Use this when you want to validate script wiring on an existing segment NPZ before a longer optimization run:

```bash
python visualize_boundaries_and_frames.py /path/to/Scan_1_Forward_events.npz \
    --sample_rate 0.05 --sensor_width 1280 --sensor_height 720

python visualize_cumulative_compare.py /path/to/Scan_1_Forward_events.npz \
    --sensor_width 1280 --sensor_height 720
```

Environment knobs supported by `scripts/run_scan_pipeline.sh`:
| Variable | Default | Purpose |
|---|---|---|
| `PIPELINE_ACTIVITY_FRACTION` | `0.90` | Active event window fraction |
| `PIPELINE_BIN_WIDTH` | `50000` | Training bin width in microseconds |
| `PIPELINE_SENSOR_WIDTH` | `1280` | Sensor width for visualization |
| `PIPELINE_SENSOR_HEIGHT` | `720` | Sensor height for visualization |
| `PIPELINE_SAMPLE_RATE` | `0.10` | Event sampling fraction for plotting |
| `PIPELINE_TIME_BIN_US` | `1000` | Segmentation activity-bin size |
| `PIPELINE_SEGMENT_PATTERN` | `Scan_1_Forward_events.npz` | Segment file pattern for downstream scripts |
The repository uses a single language-options line at the top of each README to avoid duplicated language bars.
Currently available translated files in `i18n/`:

- `README.ar.md`
- `README.es.md`
- `README.fr.md`
- `README.ja.md`
- `README.ko.md`

Planned language links are intentionally preserved in the top navigation for forward compatibility.
Important CLI controls used across scripts:
Segmentation:

- `--time_bin_us`: activity bin size in microseconds.
- `--round_trip_period`: manual period (default `1688` bins).
- `--auto_calculate_period`: period via autocorrelation.
- `--activity_fraction`: active event window fraction.
- `--manual_start_shift_ms`: manual scan start offset.

Compensation:

- `--num_params` (default `13`), `--temperature` (default `5000`).
- `--a_trainable`/`--a_fixed`, `--b_trainable`/`--b_fixed`, `--boundary_trainable`.
- `--a_default`, `--b_default`.
- `--iterations`, `--learning_rate`, `--smoothness_weight`.
- `--chunk_size` for memory control.
- `--load_params` to reuse learned parameters.

Visualization:

- `visualize_boundaries_and_frames.py`: `--sample_rate`, `--wavelength_min`, `--wavelength_max`, sensor size args.
- `visualize_cumulative_compare.py`: sensor size, `--output_dir`, `--sample_label`.
- `visualize_cumulative_weighted.py`: polarity scales, `--step_us`, `--auto_scale`, `--exp`, `--no_comp`.
```bash
python segment_robust_fixed.py \
    led_12v_no_acc_glass/glass/sync_recording_12v_led_no_acc_blank_event_20250804_232556.raw \
    --segment_events \
    --output_dir led_12v_no_acc_glass/glass/

python compensate_multiwindow_train_saved_params.py \
    led_12v_no_acc_glass/glass/sync_recording_12v_led_no_acc_blank_event_20250804_232556_segments/Scan_1_Forward_events.npz \
    --bin_width 50000 \
    --visualize --plot_params --a_trainable \
    --iterations 1000 \
    --b_default 0 \
    --smoothness_weight 0.001

python visualize_boundaries_and_frames.py \
    led_12v_no_acc_glass/glass/sync_recording_12v_led_no_acc_blank_event_20250804_232556_segments/Scan_1_Forward_events.npz

python scanning_alignment_visualization_save.py \
    led_12v_no_acc_glass/glass/sync_recording_12v_led_no_acc_blank_event_20250804_232556_segments/Scan_1_Forward_events.npz \
    --output_dir led_12v_no_acc_glass/glass/sync_recording_12v_led_no_acc_blank_event_20250804_232556_segments/FIXED_visualization

python scanning_alignment_visualization_cumulative_compare.py \
    led_12v_no_acc_glass/glass/sync_recording_12v_led_no_acc_blank_event_20250804_232556_segments/Scan_1_Forward_events.npz \
    --sensor_width 1280 --sensor_height 720 \
    --output_dir led_12v_no_acc_glass/glass/sync_recording_12v_led_no_acc_blank_event_20250804_232556_segments/cumulative_vs_bin2ms \
    --sample_label "led_12v_no_acc_glass Scan_1_Forward"
```

These legacy commands are intentionally preserved for compatibility context; in this checkout, use the current root scripts where possible.
```bash
python compensate_multiwindow_turbo.py \
    --segments-dir path/to/your_segments \
    --include all --sort name \
    --bin-width 5000 \
    -- --a_trainable --iterations 1000 --smoothness_weight 0.001 --chunk_size 250000 --visualize --plot_params

python compensate_multiwindow_train_saved_params.py segment.npz \
    --load_params learned_params.npz
```

See `BOM/core_module.md` for the full table with links and notes.
Table S2. Acquisition Time and Cost Comparison Between the Proposed Event-Driven System and a Reference Hyperspectral Camera
| Parameter | Ours | Reference camera |
|---|---|---|
| Acquisition time | ~585 ms per scan | 300 s per scan |
| Data volume | 18.5 MB | 138 MB |
| Approx. price | ~3,000 USD | 14,000 USD |
(Excluding event camera and optional 4f validation optics)
| Component | Notes | Cost (USD) | Taobao Link |
|---|---|---|---|
| Motion control | NEMA42 + TB6600 + Arduino Uno | 15.00 | https://e.tb.cn/h.7FHgkEvoo6tpKTo?tk=QYRFUPRqazE |
| Optics (grating) | Diffraction grating (education grade) | 3.47 | https://e.tb.cn/h.7Fhj16MkrSDHNnE?tk=3Q8dUPRouNw |
| Illumination | 2835 LED (6 CNY / 10 pcs; 0.6 CNY used) | 0.08 | https://e.tb.cn/h.7uubHIVL5diILHl?tk=tzTAUPRr14K |
| Reflector | Folding mirror | 6.25 | https://e.tb.cn/h.7uu1rNNSbgVdS31?tk=PqsxUPRHb32 |
| Electronics | LED PCB (CNY/board; min order 5 pcs) | 1.67 | |
| Limit switches | Optional, 2 × 8.07 CNY | 2.24 | https://e.tb.cn/h.7FHEKbcgJmc2Ll1?tk=I4FRUP8diRE |
| 3D printing | One-third PLA filament spool (covers all printed parts) | 5.09 | https://e.tb.cn/h.7FhOVWX7SLHvNNf?tk=kOcQUPRJsbo |
| Lens | Plano-convex lens (25.4 mm, 350–700 nm AR) | | https://e.tb.cn/h.7FSePNYhqt7ITbh?tk=tH8ZUP8i3cC |
| Total | Core module | 33.99 | |
Goal: Extract scan timing from raw events and slice into 6 one-way scans (F, B, F, B, F, B).
Mathematical Description:

- Activity signal (events binned with $\Delta t = 1000~\mu\text{s}$):
  $$a[n] = \left|\{\, i \mid t_{\min} + n\Delta t \le t_i < t_{\min} + (n+1)\Delta t \,\}\right|.$$
- Active window detection: find the smallest contiguous window containing $80\%$ of events.
- Period estimation: autocorrelation or manual period (default: $1688$ bins).
- Reverse-correlation (timing structure):
  $$R[k] = \sum_{n} a[n]\, a_{\text{rev}}[n+k], \qquad a_{\text{rev}}[n] = a[N-1-n].$$
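The binning and period-estimation steps above can be sketched in NumPy. This is a simplified illustration with hypothetical function names, not the internals of `segment_robust_fixed.py`:

```python
import numpy as np

def activity_signal(t_us, bin_us=1000):
    """Bin event timestamps (in microseconds) into the activity signal a[n]."""
    t_us = np.asarray(t_us, dtype=np.int64)
    idx = (t_us - t_us.min()) // bin_us
    return np.bincount(idx)

def estimate_period(a):
    """Estimate the round-trip period (in bins) from the autocorrelation peak."""
    a = a - a.mean()
    corr = np.correlate(a, a, mode="full")[len(a) - 1:]  # non-negative lags
    # Skip the central lobe: search only after the autocorrelation first dips
    # below zero, then take the strongest peak as the period estimate.
    neg = np.where(corr < 0)[0]
    start = int(neg[0]) if neg.size else 1
    return int(np.argmax(corr[start:]) + start)

# Synthetic check: an event stream whose rate oscillates with a known period.
rng = np.random.default_rng(0)
period_bins, n_bins = 200, 4000
rate = 1.0 + 0.8 * np.sin(2 * np.pi * np.arange(n_bins) / period_bins)
counts = rng.poisson(50 * rate)
t_us = np.repeat(np.arange(n_bins) * 1000, counts)
print(estimate_period(activity_signal(t_us)))  # close to 200
```

The zero-crossing guard matters: a smooth activity signal has an autocorrelation near its maximum at lag 1, so a naive `argmax` over all non-zero lags would return 1 instead of the true period.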
Usage:
```bash
# Automatic period detection
python segment_robust_fixed.py recording.raw --segment_events --output_dir segments/

# Manual period (fixed 1688 bins)
python segment_robust_fixed.py recording.raw --segment_events --round_trip_period 1688
```

Arguments:

- `--segment_events`: Save individual scan segments as NPZ files.
- `--round_trip_period 1688`: Use manual period (default).
- `--auto_calculate_period`: Override manual period with autocorrelation.
- `--activity_fraction 0.80`: Fraction of events for active region.
- `--max_iterations 2`: Refinement iterations.
Goal: Learn time-warp parameters to remove scan-induced temporal shear using multi-window piecewise-linear compensation.
Mathematical Description:

- Boundary surfaces:
  $$T_i(x, y) = a_i x + b_i y + c_i, \quad i = 0, \ldots, M-1.$$
- Soft window memberships:
  $$m_i = \sigma\!\Big(\frac{t - T_i}{\tau}\Big)\, \sigma\!\Big(\frac{T_{i+1} - t}{\tau}\Big), \qquad w_i = \frac{m_i}{\sum_j m_j + \varepsilon}.$$
- Interpolated slopes (optional):
  $$\alpha_i = \frac{t - T_i}{T_{i+1} - T_i}, \quad a_i' = (1-\alpha_i) a_i + \alpha_i a_{i+1}, \quad b_i' = (1-\alpha_i) b_i + \alpha_i b_{i+1}.$$
- Time warp:
  $$\Delta t(x, y, t) = \sum_i w_i \,(a_i' x + b_i' y), \qquad t' = t - \Delta t(x, y, t).$$
- Loss: variance minimization of time-binned frames, with smoothness regularization on the parameters.
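The soft-window warp can be sketched in NumPy as follows. This is a simplified, fixed-parameter version without the optional slope interpolation; the repository script implements the trainable PyTorch variant:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def warp_events(x, y, t, a, b, c, tau=5000.0, eps=1e-8):
    """Apply t' = t - sum_i w_i (a_i x + b_i y) with soft window weights w_i.

    a, b, c (length M) define boundary surfaces T_i(x, y) = a_i x + b_i y + c_i;
    window i spans [T_i, T_{i+1}), with the last window open-ended.
    """
    T = a[:, None] * x[None, :] + b[:, None] * y[None, :] + c[:, None]   # (M, N)
    upper = np.vstack([T[1:], np.full((1, x.size), np.inf)])
    m = sigmoid((t[None, :] - T) / tau) * sigmoid((upper - t[None, :]) / tau)
    w = m / (m.sum(axis=0, keepdims=True) + eps)
    shift = a[:, None] * x[None, :] + b[:, None] * y[None, :]
    return t - (w * shift).sum(axis=0)

# One window, slope a = 2 µs/px along x: an event at x = 10 shifts by 20 µs.
x = np.array([10.0]); y = np.array([0.0]); t = np.array([1_000_000.0])
print(warp_events(x, y, t, np.array([2.0]), np.array([0.0]), np.array([0.0])))
```

The temperature `tau` plays the same role as `--temperature`: larger values blur the window boundaries, smaller values make the memberships nearly hard assignments.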
Usage:

```bash
# Train with a-parameters trainable, b fixed
python compensate_multiwindow_train_saved_params.py segment.npz \
    --bin_width 50000 --a_trainable --b_default -76.0 \
    --iterations 1000 --smoothness_weight 0.001

# Load pre-trained parameters
python compensate_multiwindow_train_saved_params.py segment.npz \
    --load_params learned_params.npz
```

Key Arguments:

- `--a_trainable`/`--a_fixed`: Control a-parameter training (default: fixed).
- `--b_trainable`/`--b_fixed`: Control b-parameter training (default: trainable).
- `--num_params 13`: Number of boundary parameters.
- `--temperature 5000`: Sigmoid temperature for soft windows.
- `--smoothness_weight 0.001`: Regularization weight.
- `--load_params file.npz`: Load saved parameters.
- `--chunk_size 250000`: Memory-efficient processing chunk size.
Goal: Display learned parameters and show qualitative improvements.
Features:

- Parameter overlays on $x\text{–}t$ and $y\text{–}t$ projections.
- Time-binned frame comparisons (original vs. compensated).
- Sliding-window analysis (50 ms and 2 ms bins).
- Wavelength mapping for spectral visualization.

Usage:

```bash
python visualize_boundaries_and_frames.py segment.npz \
    --sample_rate 0.1 --wavelength_min 380 --wavelength_max 680
```

Goal: Compare cumulative 2 ms-step means with sliding bin means.
Mathematical Description:
-
Cumulative means:
$$F(T) = \frac{1}{HW}\sum_{t < T}\text{events}(t).$$ -
Sliding means: event counts in
$[T-\Delta,,T)$ divided by$H \times W$ . -
Relationship (finite-difference derivative):
$$\Delta F(T) \approx \frac{F(T) - F(T-\Delta)}{\Delta}.$$
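This finite-difference relationship is easy to verify numerically; the sketch below uses synthetic timestamps and is not the script's code:

```python
import numpy as np

H, W, delta_us = 720, 1280, 2000                     # sensor size, 2 ms step
rng = np.random.default_rng(1)
t = np.sort(rng.integers(0, 100_000, size=50_000))   # event timestamps (µs)

edges = np.arange(0, 100_000 + delta_us, delta_us)
sliding = np.histogram(t, bins=edges)[0] / (H * W)   # per-step sliding means
cumulative = np.cumsum(sliding)                      # F(T) sampled at the edges

# The sliding means are exactly the finite differences of the cumulative curve.
assert np.allclose(np.diff(cumulative, prepend=0.0), sliding)
```

This is why the two visualizations should agree up to binning: the sliding-bin means are the discrete derivative of the cumulative curve.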
Usage:

```bash
python visualize_cumulative_compare.py segment.npz \
    --sensor_width 1280 --sensor_height 720 \
    --sample_label "My Dataset"
```

Complete GUI for scan compensation with 3D spectral visualization.
Features:
- Interactive parameter tuning.
- Real-time optimization progress.
- 3D wavelength-mapped visualization.
- Export results and parameters.
Usage:

```bash
python scan_compensation_gui_cloud.py
```

Synchronized recording system for event and frame cameras: `ImagingGUI/DualCamera_separate_transform.py`
Features:
- Simultaneous event and frame recording.
- Real-time preview with transformations.
- Always-on-top window controls.
- Parameter adjustment during recording.
The original README referenced this firmware sketch path: `rotor/step42_with_key_int/step42_with_key_int.ino`

The current repository layout includes firmware notes at `firmware/README.md`.
This path mismatch is preserved here intentionally; if you have the rotor sketch folders in another branch/local checkout, keep using those paths.
Legacy documented capabilities of this sketch include:
- Precise angle control with microstepping.
- Acceleration/deceleration profiles.
- Limit switch integration.
- Auto-centering functionality.
When you have multiple one-way scans (Forward/Backward) of the same sweep, you can merge them and run the proven trainer on a single combined event stream using compensate_multiwindow_turbo.py.
- Accepts one segment, an explicit list, or a whole segments directory.
- For Backward scans, flips polarity and reverses time before merging:
  - If polarity `p ∈ {0,1}`: `p := 1 − p`; then reverse time within the scan.
  - If polarity `p ∈ {−1,1}`: `p := −p`; then reverse time within the scan.
- Concatenates scans on a continuous timeline (with a `1 µs` gap between scans) and calls `compensate_multiwindow_train_saved_params.py` under the hood.
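A sketch of the merge rule described above, assuming each segment NPZ provides timestamp and polarity arrays. Array and function names here are illustrative, not the wrapper's internals:

```python
import numpy as np

GAP_US = 1  # gap inserted between consecutive scans on the merged timeline

def normalize_scan(t, p, backward):
    """Flip polarity and reverse time for Backward scans; Forward passes through."""
    if backward:
        p = (1 - p) if p.min() >= 0 else -p   # handles {0,1} and {-1,1} codes
        t = t.max() - t                       # reverse time within the scan
        order = np.argsort(t, kind="stable")  # restore ascending timestamps
        t, p = t[order], p[order]
    return t - t.min(), p

def merge_scans(scans):
    """Concatenate (t, p, backward) triples onto one continuous timeline."""
    ts, ps, offset = [], [], 0
    for t, p, backward in scans:
        t, p = normalize_scan(np.asarray(t), np.asarray(p), backward)
        ts.append(t + offset)
        ps.append(p)
        offset += int(t.max()) + GAP_US
    return np.concatenate(ts), np.concatenate(ps)

t_m, p_m = merge_scans([
    (np.array([0, 10, 20]), np.array([0, 1, 1]), False),   # Forward
    (np.array([0, 5, 10]), np.array([1, 0, 1]), True),     # Backward
])
print(t_m.tolist())  # [0, 10, 20, 21, 26, 31]
```

Reversing time turns a Backward sweep into an equivalent Forward one, and flipping polarity keeps the sign of intensity change consistent after the reversal.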
```bash
# Merge all scans (Forward+Backward) from a segments folder and train at 5 ms
python compensate_multiwindow_turbo.py \
    --segments-dir path/to/.../_segments \
    --include all --sort name \
    --bin-width 5000 \
    -- --a_trainable --iterations 1000 --smoothness_weight 0.001 --chunk_size 250000 --visualize --plot_params

# Reuse learned params and just render at 10 ms (fast, no training)
python compensate_multiwindow_turbo.py \
    --segments-dir path/to/.../_segments \
    --include all --sort time \
    --bin-width 10000 \
    --load-params path/to/learned_params.npz \
    -- --visualize --plot_params

# Only Forward scans
python compensate_multiwindow_turbo.py \
    --segments-dir path/to/.../_segments \
    --include forward --sort time \
    --bin-width 5000 \
    -- --a_trainable --iterations 1000 --smoothness_weight 0.001 --chunk_size 250000
```

- `--segment`, `--segments`, `--segments-dir`: choose your input set.
- `--include {all|forward|backward}`: filter by scan direction.
- `--sort {name|time}`: natural filename order or NPZ `start_time` order.
- `--bin-width <µs>`: forwarded to the base trainer.
- `--load-params`: reuse saved parameters (skip training and regenerate outputs quickly at new bin widths).
- `--extra ...` after `--`: any additional flags are forwarded to the base trainer.
If your scan is N× faster than baseline, reduce `--bin-width` by the same factor (e.g., baseline 50 ms, 10× faster scan -> 5 ms: `--bin-width 5000`). You can train once (e.g., at 5 ms), then use `--load-params` to quickly regenerate results at 10 ms without retraining.
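The scaling rule amounts to dividing the baseline bin width by the speed-up factor (a trivial, purely illustrative helper):

```python
def scaled_bin_width(baseline_us: int = 50_000, speedup: float = 1.0) -> int:
    """Shrink --bin-width proportionally when the scan runs `speedup`x faster."""
    return int(round(baseline_us / speedup))

print(scaled_bin_width(50_000, 10))  # 5000, as in the 10x-faster example
```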
The system supports comprehensive parameter save/load functionality.
- NPZ: Binary format for fast loading.
- JSON: Human-readable with metadata.
- CSV: Excel-compatible for manual inspection.
```bash
# Load any supported format
python compensate_multiwindow_train_saved_params.py segment.npz \
    --load_params learned_params.npz
# or --load_params learned_params.json
# or --load_params learned_params.csv
```

Files are automatically named with the parameter count, such as `*_learned_params_n13.*`.
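A sketch of round-tripping a parameter set through the three formats. The array names (`a`, `b`) and file layout here are illustrative; the scripts' exact field conventions may differ:

```python
import csv
import json
import tempfile
from pathlib import Path

import numpy as np

params = {"a": np.linspace(-1.0, 1.0, 13), "b": np.full(13, -76.0)}

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)

    # NPZ: compact binary, fastest to reload
    np.savez(root / "learned_params_n13.npz", **params)
    with np.load(root / "learned_params_n13.npz") as npz:
        assert np.allclose(npz["a"], params["a"])

    # JSON: human-readable, stores plain lists
    (root / "learned_params_n13.json").write_text(
        json.dumps({k: v.tolist() for k, v in params.items()}, indent=2))
    back = json.loads((root / "learned_params_n13.json").read_text())
    assert back["b"][0] == -76.0

    # CSV: one row per window, easy to eyeball in a spreadsheet
    with open(root / "learned_params_n13.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["window", "a", "b"])
        for i, (a_i, b_i) in enumerate(zip(params["a"], params["b"])):
            writer.writerow([i, a_i, b_i])
```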
The system uses chunked processing throughout:
| Item | Detail |
|---|---|
| Chunk Size | Default 250000 events (configurable) |
| Memory Efficient | Processes large datasets without GPU overflow |
| Unified Variance | Maintains proper gradient flow for learning |
| Progress Tracking | Real-time processing updates |
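The "unified variance" idea can be sketched with running sums: the variance accumulated chunk by chunk matches a single full-array pass regardless of chunk size. This is an illustrative NumPy version; the trainer does the equivalent on GPU tensors with gradients attached:

```python
import numpy as np

def chunked_variance(values, chunk_size=250_000):
    """Population variance accumulated over chunks via running sums."""
    n = s = s2 = 0.0
    for start in range(0, len(values), chunk_size):
        chunk = values[start:start + chunk_size]
        n += chunk.size
        s += chunk.sum()
        s2 += np.square(chunk).sum()
    mean = s / n
    return s2 / n - mean ** 2

rng = np.random.default_rng(2)
v = rng.normal(size=1_000_000)
# Identical result whether processed in one pass or in 250k-event chunks.
assert np.isclose(chunked_variance(v), v.var(), rtol=1e-6)
```

Because only scalar sums cross chunk boundaries, peak memory is bounded by the chunk size while the loss remains a single well-defined variance.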
```
project/
├── data/
│   ├── recording.raw                    # Original RAW file
│   ├── recording_segments/              # Segmented scans
│   │   ├── Scan_1_Forward_events.npz
│   │   ├── Scan_2_Backward_events.npz
│   │   └── ...
│   ├── learned_params_n13.npz           # Trained parameters
│   ├── learned_params_n13.json
│   ├── learned_params_n13.csv
│   └── visualization_20240115_143022/   # Results
│       ├── events_with_params.png
│       ├── sliding_frames_*.npz
│       ├── frame_means_wavelength.png
│       └── time_binned_frames/          # Individual frames
```
High-precision training:

```bash
python compensate_multiwindow_train_saved_params.py segment.npz \
    --num_params 21 --temperature 3000 --iterations 2000 \
    --a_trainable --b_trainable --boundary_trainable \
    --smoothness_weight 0.0001 --chunk_size 100000
```

Fast processing with fixed parameters:

```bash
python compensate_multiwindow_train_saved_params.py segment.npz \
    --num_params 7 --iterations 500 --chunk_size 500000 \
    --a_fixed --b_default -76.0
```

Low-memory settings:

```bash
python compensate_multiwindow_train_saved_params.py segment.npz \
    --chunk_size 50000 --bin_width 100000
```

The system supports spectral visualization by mapping temporal evolution to wavelength:

```python
# Linear mapping: time -> wavelength
wavelength = wavelength_min + (t_normalized / t_max) * (wavelength_max - wavelength_min)
```

A typical range is 380–680 nm, matching the `--wavelength_min 380 --wavelength_max 680` example values used elsewhere in this README.
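Applied to an array of timestamps, the linear mapping looks like this (a sketch; the 380–680 nm defaults here follow the example range above, and the function name is illustrative):

```python
import numpy as np

def time_to_wavelength(t_normalized, t_max, wl_min=380.0, wl_max=680.0):
    """Linearly map normalized event time in [0, t_max] to wavelength (nm)."""
    t = np.asarray(t_normalized, dtype=float)
    return wl_min + (t / t_max) * (wl_max - wl_min)

wl = time_to_wavelength([0, 250_000, 500_000], t_max=500_000)
print(wl)  # [380. 530. 680.]
```

The endpoints of the scan map to the endpoints of the wavelength range; the midpoint of the sweep lands at 530 nm.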
- Microstepping: Use `32×` for smooth motion (Arduino).
- Bin Width: Start with `50 ms` for optimization, `2 ms` for analysis.
- Temperature: Higher values (around `5000`) give smoother boundaries.
- Smoothness: `0.001` provides good regularization.
- GPU Memory: Use chunked processing with an appropriate chunk size.
- Event Count: `> 10^6` events recommended for stable learning.
- Iterations: `1000` iterations are usually sufficient.
- Keep RAW files and segments in the same directory.
- Parameter files are auto-detected by naming convention.
- Use descriptive filename prefixes for organized output.
- `versions.md` describes historical project eras and migration rationale.
- `.githooks/pre-commit` blocks oversized/binary commits and non-code/doc file types.
- `scripts/setup_hooks.sh` sets `core.hooksPath` to `.githooks`.
- `versions/05_archive_code_variants/` stores older script variants to keep root-level tooling focused.
Known documentation drift (preserved intentionally for backward compatibility context):
- Some older docs mention `sync_image_system/` or `dual_camera_gui.py`; the current checkout contains `ImagingGUI/DualCamera_separate_transform.py` and SDK directories.
- `ImagingGUI/README.md` still references `pip install -r requirements.txt`, but no root `requirements.txt` is present in this checkout.
- `firmware/README.md` references several Arduino sketch subfolders that are not present in this checkout.
- `versions.md` mentions legacy script names that differ from current root-level script names.
- `i18n/` exists and currently includes `README.ar.md`, `README.es.md`, `README.fr.md`, `README.ja.md`, and `README.ko.md`; links for additional languages are retained as planned targets.
| Symptom | Likely cause | Action |
|---|---|---|
| Parameter loading errors | Parameter count mismatch | Ensure `--num_params` matches the saved file |
| OOM / memory pressure | Chunk too large or bins too fine | Reduce `--chunk_size` and/or increase `--bin_width` |
| Weak compensation quality | Under-trained or poor segmentation | Increase `--iterations`, enable trainable params, verify segmentation |
| No segment files produced | RAW/SDK/flag issue | Confirm RAW path, Metavision setup, and `--segment_events` |
| Turbo wrapper args ignored | Incorrect forwarding syntax | Pass trainer args after `--` (or use `--extra`) |
| GUI issues | Tkinter/backend or SDK mismatch | Verify GUI backend and camera SDK availability |
- Improve dependency/bootstrap reproducibility (`requirements.txt` or an environment lockfile).
- Consolidate legacy script names and path references across docs.
- Expand documented dataset schemas and expected NPZ field conventions.
- Add regression-style tests for segmentation/compensation on small fixture data.
- Continue integrating publication-quality analysis outputs from `align_*` pipelines.
- Add/refresh the remaining multilingual README files under `i18n/` to fully match the language navigation links at top.
If this repository is useful in your research, please cite the Optica article:
```bibtex
@article{chen2026self,
  title     = {Self-calibrated neuromorphic hyperspectral derivative imaging},
  author    = {Chen, Rongzhou and Wang, Chutian and Li, Yuxing and Cao, Yuqing and Zhu, Shuo and Lam, Edmund Y},
  journal   = {Optica},
  volume    = {13},
  number    = {4},
  pages     = {587--590},
  year      = {2026},
  publisher = {Optica Publishing Group},
  doi       = {10.1364/OPTICA.585766},
  url       = {https://doi.org/10.1364/OPTICA.585766}
}
```

- Published Optica article and associated project dissemination materials.
- Hardware and software contributors across repository evolution, captured in `versions/` and archived tooling.
- Community support through GitHub Sponsors and associated project channels.
This project is released under the MIT License. See LICENSE for details.
Contributions are welcome.
- Start with existing scripts and documentation style.
- Keep command-line examples reproducible with repository paths where possible.
- If you add large datasets/outputs, ensure `.githooks/pre-commit` policies are respected.
Note: a dedicated CONTRIBUTING.md is not present in this checkout. If needed, open an issue or submit a PR with the contribution workflow you propose.
If this project is useful to you, these links directly support ongoing maintenance and hardware iteration.
| Donate | PayPal | Stripe |
|---|---|---|
- 📌 This README keeps legacy-path notes where repository evolution introduced naming/layout drift.
- 🔒 If uncertain about older references, the text is preserved intentionally rather than removed.

