Japanese: Readme_jp.md | Chinese: Readme_zh.md
YOLOZU is an Apache-2.0 vision evaluation toolkit for teams that do not want workflow lock-in.
Bring your own inference. Export once. Evaluate fairly.
YOLOZU uses one stable predictions interface contract: a wrapped `predictions.json` with protocol-pinned `meta.export_settings`.
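As a rough illustration of the wrapped shape, the sketch below builds a minimal file of that kind. All field names other than `meta.export_settings` are assumptions for illustration only; the authoritative contract is `docs/predictions_schema.md`.

```python
import json

# Illustrative sketch of a wrapped predictions file. Every key here except
# meta.export_settings is a hypothetical placeholder; consult
# docs/predictions_schema.md for the real contract.
wrapped = {
    "meta": {
        # Protocol-pinned export settings (keys are hypothetical examples).
        "export_settings": {
            "protocol": "1",
            "score_threshold": 0.001,
        },
    },
    # Hypothetical per-image prediction entries.
    "predictions": [
        {
            "image": "0001.jpg",
            "boxes": [[10.0, 20.0, 110.0, 220.0]],  # assumed xyxy layout
            "scores": [0.93],
            "labels": [0],
        },
    ],
}

with open("predictions.json", "w") as f:
    json.dump(wrapped, f)
```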
```bash
python3 -m pip install -U yolozu
yolozu demo overview
```

`yolozu demo overview` writes `demo_output/overview/<utc>/demo_overview_report.json`.
```mermaid
flowchart LR
    A["Ultralytics"] --> D["wrapped predictions.json"]
    B["RT-DETR"] --> D
    C["Detectron2 / MMDetection / custom"] --> D
    D --> E["validate"]
    E --> F["evaluate"]
    F --> G["comparable report"]
```
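The fan-in above amounts to one thin adapter per framework: each inference stack maps its native detections into the same wrapped structure. A minimal sketch, assuming hypothetical field names (the real contract is `docs/predictions_schema.md`):

```python
# Adapter sketch: fold framework-native detections into one shared wrapped
# structure. Field names are illustrative assumptions, not the real schema.
def wrap(native_results, export_settings):
    """native_results: iterable of (image_id, boxes, scores, labels) tuples."""
    return {
        "meta": {"export_settings": dict(export_settings)},
        "predictions": [
            {"image": img, "boxes": list(b), "scores": list(s), "labels": list(l)}
            for img, b, s, l in native_results
        ],
    }

# One adapter per framework, one output shape for all of them.
report = wrap(
    [("0001.jpg", [[1.0, 2.0, 3.0, 4.0]], [0.9], [0])],
    {"protocol": "1"},
)
```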
- docs/README.md: top-level docs map and shortest working paths
- docs/predictions_schema.md: the predictions interface contract
- docs/install.md: install, `doctor`, and environment setup
- Main lane: evaluate precomputed predictions fairly across frameworks and runtimes
- Secondary lane: export and reference training lanes that feed the same predictions interface contract
- Secondary external lane: Apache-2.0-friendly YOLOX-style training bridge, with optional external copyleft-sensitive bridges kept separate
- Advanced lane: continual learning, TTT, SynthGen, and backend parity research paths
- Stable: prediction validation/evaluation, wrapped `predictions.json`, repo smoke/demo path, install/doctor flow
- Experimental: backend parity, benchmark orchestration, SynthGen intake and handoff, macOS/MPS evaluation paths
- Research: continual learning, self-distillation, TTT, Hessian refinement
- Production-ready today: prediction validation/evaluation and the predictions interface contract
- Needs qualification in your environment: backend parity, benchmark orchestration, SynthGen handoff, macOS/MPS paths
- Research-oriented: continual learning, self-distillation, TTT, Hessian refinement
- Full details: docs/production_readiness.md
- You already have predictions and want fair cross-framework evaluation.
- You want an Apache-2.0 evaluation layer without rewriting your training stack.
- You do not want framework-native evaluation differences to become silent metric drift.
- You want one end-to-end training framework with one-click defaults.
- You do not need cross-framework comparison or a stable predictions interface contract.
Framework-native evaluation is convenient inside one stack, but it is harder to compare fairly across stacks. YOLOZU keeps the evaluation boundary at one predictions interface contract so the comparison path stays pinned even when the inference stack changes.
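As a toy illustration of how evaluation-side defaults alone shift numbers (generic IoU arithmetic, not YOLOZU code): the same prediction can flip between true positive and miss depending on which IoU matching threshold a stack defaults to.

```python
def iou(a, b):
    # Axis-aligned IoU for [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, gt = [0, 0, 10, 10], [2, 2, 12, 12]
v = iou(pred, gt)  # 64 / 136, roughly 0.47: a match at IoU>=0.45, a miss at IoU>=0.5
```

Pinning such protocol knobs in `meta.export_settings` is what keeps the comparison path from drifting silently.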
- Evaluate precomputed predictions: docs/external_inference.md
- Train, export, then evaluate: docs/training_inference_export.md
- YOLO-style and Detectron2 external training lanes (`yolozu train --external-backend yolox|detectron2|ultralytics|hf-detr ...`): docs/training_inference_export.md
- Current training support matrix and scope boundary: docs/training_inference_export.md#current-training-support
- Training backend interface / capability matrix / orchestration: docs/training_backend_interface.md, docs/training_capability_matrix.md, docs/training_orchestration.md
- Qualify backend-parity and benchmark paths after the main eval lane is working: docs/backend_parity_matrix.md, docs/benchmark_mode.md
- Prepare YOLOZU-synthgen handoff: docs/synthgen_repo_integration.md
- Tool and manifest references: docs/tools_index.md, tools/manifest.json
- Advanced docs map: docs/README.md
- Real-image showcase: docs/assets/readme_multitask_showcase.png
- Learning and research workflows: docs/learning_features.md
```bash
python3 -m pip install -e .
bash scripts/smoke.sh
```

More repo-first guidance:
- Docs index: docs/README.md
- Install details: docs/install.md
- Manual sources: manual/README.md
- Support: docs/support.md
- License policy: docs/license_policy.md
- External training boundary: YOLOX first, optional Ultralytics and HF DETR bridges second
- Apache-2.0 license: LICENSE
- Latest release: GitHub Releases
- Zenodo software DOI: 10.5281/zenodo.18744756
- Zenodo manual DOI: 10.5281/zenodo.18744926