YOLOZU (萬)

Japanese: Readme_jp.md | Chinese: Readme_zh.md

YOLOZU is an Apache-2.0 vision evaluation toolkit for teams that do not want workflow lock-in.

Bring your own inference. Export once. Evaluate fairly.

YOLOZU uses one stable predictions interface contract: wrapped predictions.json with protocol-pinned meta.export_settings.
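A minimal sketch of what a wrapped `predictions.json` could look like, written from Python. Only `meta.export_settings` is named by this README; every other field name (`protocol`, `conf_threshold`, `nms_iou`, `predictions`, `boxes`, `scores`, `classes`) is an illustrative assumption, not the actual schema:

```python
import json

# Hypothetical wrapped-predictions file. meta.export_settings comes from the
# README; all other keys below are assumed field names for illustration only.
wrapped = {
    "meta": {
        "export_settings": {       # protocol-pinned export parameters
            "protocol": "v1",      # assumed: pin identifier
            "conf_threshold": 0.25,  # assumed: confidence cutoff used at export
            "nms_iou": 0.45,       # assumed: NMS IoU used at export
        },
    },
    "predictions": [
        {
            "image": "000001.jpg",
            "boxes": [[10.0, 20.0, 110.0, 220.0]],  # assumed xyxy pixels
            "scores": [0.91],
            "classes": [0],
        },
    ],
}

# Write the wrapped file that downstream validate/evaluate steps would consume.
with open("predictions.json", "w") as f:
    json.dump(wrapped, f, indent=2)
```

Because the export settings travel inside the file, two teams exporting from different stacks can see at a glance whether their thresholds were pinned the same way before comparing metrics.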

1-Minute Demo

```shell
python3 -m pip install -U yolozu
yolozu demo overview
```

Writes demo_output/overview/<utc>/demo_overview_report.json, where <utc> is a UTC timestamp directory.

```mermaid
flowchart LR
    A["Ultralytics"] --> D["wrapped predictions.json"]
    B["RT-DETR"] --> D
    C["Detectron2 / MMDetection / custom"] --> D
    D --> E["validate"]
    E --> F["evaluate"]
    F --> G["comparable report"]
```
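The "validate" step in the flow above can be pictured as a schema check that runs before any metric is computed. The sketch below is not yolozu's actual validator or API; it only illustrates the idea of gating evaluation on the wrapped-predictions contract:

```python
import json

def validate_wrapped(path):
    """Illustrative pre-evaluation validation pass (NOT yolozu's real code).

    Checks two contract properties suggested by the README: the protocol pin
    must be present in meta.export_settings, and per-image prediction arrays
    must be internally consistent.
    """
    with open(path) as f:
        data = json.load(f)
    errors = []
    if "export_settings" not in data.get("meta", {}):
        errors.append("missing meta.export_settings (protocol pin)")
    for i, pred in enumerate(data.get("predictions", [])):
        if len(pred.get("boxes", [])) != len(pred.get("scores", [])):
            errors.append(f"prediction {i}: boxes/scores length mismatch")
    return errors
```

Failing fast here is what keeps the later "evaluate" stage comparable: a file from any framework either satisfies the contract or is rejected before it can skew a report.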

PyPI · Python >=3.10 · License · CI

Read These First

Primary Focus

  • Main lane: evaluate precomputed predictions fairly across frameworks and runtimes
  • Secondary lane: export and reference training lanes that feed the same predictions interface contract
  • Secondary external lane: Apache-2.0-friendly YOLOX-style training bridge, with optional external copyleft-sensitive bridges kept separate
  • Advanced lane: continual learning, TTT, SynthGen, and backend parity research paths

Capability Maturity

  • Stable: prediction validation/evaluation, wrapped predictions.json, repo smoke/demo path, install/doctor flow
  • Experimental: backend parity, benchmark orchestration, SynthGen intake and handoff, macOS/MPS evaluation paths
  • Research: continual learning, self-distillation, TTT, Hessian refinement

Production Readiness

  • Production-ready today: prediction validation/evaluation and the predictions interface contract
  • Needs qualification in your environment: backend parity, benchmark orchestration, SynthGen handoff, macOS/MPS paths
  • Research-oriented: continual learning, self-distillation, TTT, Hessian refinement
  • Full details: docs/production_readiness.md

Who This Is For

  • You already have predictions and want fair cross-framework evaluation.
  • You want an Apache-2.0 evaluation layer without rewriting your training stack.
  • You do not want framework-native evaluation differences to become silent metric drift.

Not The Best Fit

  • You want one end-to-end training framework with one-click defaults.
  • You do not need cross-framework comparison or a stable predictions interface contract.

Why Not Just Use Framework-Native Evaluation?

Framework-native evaluation is convenient inside one stack, but it is harder to compare fairly across stacks. YOLOZU keeps the evaluation boundary at one predictions interface contract so the comparison path stays pinned even when the inference stack changes.
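The drift described above can come from very small conventions, for example whether a box edge is counted as spanning an extra pixel (the inclusive-pixel "+1" convention used by some detection benchmarks). The toy function below is not yolozu's metric code; it only shows how that single convention shifts IoU, and therefore any threshold-based metric built on it:

```python
def iou(a, b, edge=0.0):
    """Axis-aligned IoU for xyxy boxes.

    `edge` models the pixel-edge convention: 0.0 for pure-coordinate widths,
    1.0 for the inclusive-pixel ("+1") convention. Illustrative sketch only,
    not yolozu's actual metric implementation.
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1 + edge) * max(0.0, iy2 - iy1 + edge)
    area_a = (a[2] - a[0] + edge) * (a[3] - a[1] + edge)
    area_b = (b[2] - b[0] + edge) * (b[3] - b[1] + edge)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

a, b = [0, 0, 10, 10], [5, 5, 15, 15]
print(iou(a, b))            # ~0.143 under the pure-coordinate convention
print(iou(a, b, edge=1.0))  # ~0.175 under the inclusive-pixel convention
```

Two stacks that each evaluate "correctly" under their own convention report different numbers for identical boxes; pinning one evaluation boundary removes that disagreement from the comparison.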

Where To Go Next

More Than The Demo

Repo Users

```shell
python3 -m pip install -e .
bash scripts/smoke.sh
```

More repo-first guidance:

Support And Legal