Automatic person anonymization in surveillance videos using YOLO v8 multi-scale detection, ByteTrack tracking, and temporal smoothing.
Created by Andrea Bonacci
CLI and web tool for automatic person anonymization in surveillance videos. Designed for fixed cameras with wide-angle lenses where people may appear small (30–100 px).
In addition to the default YOLO pipeline, Person Anonymizer supports SAM3 (Segment Anything Model 3 by Meta) as an optional detection/segmentation backend, providing pixel-precise person masks instead of bounding boxes.
Three detection modes:
| Mode | Description | GPU Requirement |
|---|---|---|
| `yolo` | Default – YOLO v8 multi-scale (fast, battle-tested) | CUDA recommended, CPU supported |
| `yolo+sam3` | Hybrid – YOLO detects, SAM3 refines masks | CUDA required |
| `sam3` | Full SAM3 – detection and segmentation by SAM3 | CUDA required, ≥8 GB VRAM |
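The mode string decides which model plays each role. As an illustration only (this is not the project's actual `backend_factory.py`), the mapping can be sketched as:

```python
# Illustrative sketch: resolve a --backend mode string into the
# detector/segmenter roles it implies. Names are assumptions, not
# the project's real API.

def resolve_backend(mode: str) -> dict:
    """Map a backend mode to its detection and segmentation components."""
    table = {
        "yolo":      {"detector": "yolo", "segmenter": None},
        "yolo+sam3": {"detector": "yolo", "segmenter": "sam3"},
        "sam3":      {"detector": "sam3", "segmenter": "sam3"},
    }
    if mode not in table:
        raise ValueError(f"unknown backend: {mode!r}")
    return table[mode]
```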
Requirements for SAM3:
- Python 3.12+
- CUDA-capable GPU (≥8 GB VRAM recommended for `sam3` mode)
Install SAM3 dependencies:
```bash
# Option A – dedicated requirements file
pip install -r requirements-sam3.txt

# Option B – package extra
pip install 'person-anonymizer[sam3]'
```

CLI usage:
```bash
# Hybrid mode: YOLO detection + SAM3 mask refinement
python -m person_anonymizer.cli video.mp4 --backend yolo+sam3

# Full SAM3 mode: detection and segmentation entirely by SAM3
python -m person_anonymizer.cli video.mp4 --backend sam3
```

Web interface: a "Detection backend" dropdown is available in the configuration panel to select the mode without using the CLI.
- Multi-scale YOLO v8 detection – inference at 4 scales (1.0x–2.5x) + 3x3 sliding window + Test-Time Augmentation
- ByteTrack tracking – persistent person IDs across consecutive frames
- Temporal smoothing (EMA) – stabilizes bounding boxes with a moving average; ghost boxes handle temporary occlusions
- Auto-refinement – re-analyzes the rendered video and adds missed detections (up to 3 passes)
- Manual review – interactive OpenCV (CLI) or browser (web) interface to add/edit/delete polygons
- Two anonymization methods – pixelation (default) or Gaussian blur
- Adaptive intensity – obscuring strength proportional to person size
- Post-render verification – second YOLO pass on the anonymized video to flag residual detections
- Optional fish-eye correction – optical undistortion via camera calibration
- Complete output set – anonymized H.264 video, debug video, CSV report, reusable JSON annotations
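The EMA smoothing and ghost-box behavior above can be sketched in a few lines. This is a minimal illustration assuming `(x1, y1, x2, y2)` boxes keyed by track ID; the class name and shape are assumptions, not the project's actual `TemporalSmoother` API:

```python
# Minimal sketch of EMA bounding-box smoothing with ghost boxes.
class EmaSmoother:
    def __init__(self, alpha: float = 0.35, ghost_frames: int = 10):
        self.alpha = alpha              # weight of the new observation
        self.ghost_frames = ghost_frames
        self.state = {}                 # track_id -> (box, frames_since_seen)

    def update(self, detections: dict) -> dict:
        """detections: track_id -> (x1, y1, x2, y2) for the current frame."""
        out = {}
        for tid, box in detections.items():
            prev = self.state.get(tid, (box, 0))[0]
            smoothed = tuple(
                self.alpha * n + (1 - self.alpha) * p
                for n, p in zip(box, prev)
            )
            self.state[tid] = (smoothed, 0)
            out[tid] = smoothed
        # keep "ghost" boxes alive for briefly occluded tracks
        for tid, (box, age) in list(self.state.items()):
            if tid in detections:
                continue
            if age < self.ghost_frames:
                self.state[tid] = (box, age + 1)
                out[tid] = box
            else:
                del self.state[tid]
        return out
```

With `alpha = 1` the smoothed box equals the raw detection (no smoothing), matching the `smoothing_alpha` description in the configuration table.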
- Python 3.11+ (3.12+ required for SAM3 backend)
- ffmpeg (for H.264 encoding and audio preservation)
- ~150 MB disk space for YOLO models (downloaded automatically on first run)
- CUDA GPU recommended; CPU also supported (CUDA required for SAM3 modes)
Install ffmpeg:
```bash
# Ubuntu/Debian
sudo apt install ffmpeg

# macOS
brew install ffmpeg

# Windows (Chocolatey)
choco install ffmpeg
```

Clone and set up the project:

```bash
git clone https://github.com/AndreaBonn/PRIVATE__video-anonimyzer.git
cd PRIVATE__video-anonimyzer
python -m venv person_anonymizer/.venv
source person_anonymizer/.venv/bin/activate
# Windows: person_anonymizer\.venv\Scripts\activate
pip install -r requirements.txt
```

```bash
# Standard – automatic detection + manual review (recommended)
python -m person_anonymizer.cli video.mp4

# Fully automatic (no manual review)
python -m person_anonymizer.cli video.mp4 -M auto

# Specify output and method
python -m person_anonymizer.cli video.mp4 -o output.mp4 -m blur

# Disable debug video and CSV report
python -m person_anonymizer.cli video.mp4 --no-debug --no-report

# Reload JSON annotations and reopen review
python -m person_anonymizer.cli video.mp4 --review annotations.json

# Normalize annotations (merge overlapping polygons)
python -m person_anonymizer.cli video.mp4 --review annotations.json --normalize
```

CLI options:
| Option | Description | Default |
|---|---|---|
| `input` | Path to input video | (required) |
| `-M, --mode` | `manual` (with review) or `auto` | `manual` |
| `-o, --output` | Output file path | `<input>_anonymized.mp4` |
| `-m, --method` | `pixelation` or `blur` | `pixelation` |
| `--backend` | Detection backend: `yolo`, `yolo+sam3`, `sam3` | `yolo` |
| `--no-debug` | Disable debug video | `False` |
| `--no-report` | Disable CSV report | `False` |
| `--review` | Reload annotations from JSON | `None` |
| `--normalize` | Normalize polygons (requires `--review`) | `False` |
```bash
python -m person_anonymizer.web.app
# Open http://127.0.0.1:5000
```

The web GUI allows you to:
- Upload videos via drag & drop
- Configure all pipeline parameters
- Monitor progress in real time (SSE)
- Review annotations frame by frame in the browser
- Download all outputs (video, debug, report, JSON)
Environment variables:
| Variable | Description | Default |
|---|---|---|
| `FLASK_SECRET_KEY` | Secret key for Flask sessions | Random (generated at startup) |
| `FLASK_HOST` | Web server host | `127.0.0.1` |
| `FLASK_PORT` | Web server port | `5000` |
- Detection – YOLO v8 multi-scale + sliding window + TTA, with optional motion detection
- Auto-refinement – re-render + second YOLO pass, up to 3 iterations
- Manual review – interactive interface (OpenCV or web) for corrections
- Rendering – apply anonymization to the original video (FFV1 lossless intermediate)
- Post-processing – H.264 encoding with ffmpeg, audio preservation, report saving
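The pixelation applied during rendering amounts to averaging fixed-size tiles inside each detected region. A minimal sketch (assuming NumPy arrays for frames and a rectangular region rather than the pipeline's actual polygon geometry):

```python
import numpy as np

def pixelate(frame: np.ndarray, box: tuple, block: int = 16) -> np.ndarray:
    """Pixelate a rectangular region by replacing each block x block
    tile with its mean color. Illustrative only; the real pipeline
    works on polygons and picks block sizes adaptively."""
    x1, y1, x2, y2 = box
    out = frame.copy()
    roi = out[y1:y2, x1:x2]            # view into the copy
    h, w = roi.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = roi[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(frame.dtype)
    return out
```

Gaussian blur (the `-m blur` method) would replace the tile averaging with a smoothing kernel over the same region.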
| File | Description |
|---|---|
| `*_anonymized.mp4` | Video with persons obscured (H.264) |
| `*_debug.mp4` | Video with colored detection overlays |
| `*_report.csv` | Per-frame report (confidence, detections, motion) |
| `*_annotations.json` | Full annotations (reusable with `--review`) |
`.mp4`, `.m4v`, `.mov`, `.avi`, `.mkv`, `.webm`
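Input validation reduces to a case-insensitive extension check against that list. A trivial sketch (function name is illustrative, not the project's API):

```python
from pathlib import Path

# Supported container formats, as listed above.
SUPPORTED = {".mp4", ".m4v", ".mov", ".avi", ".mkv", ".webm"}

def is_supported(path: str) -> bool:
    """True if the file extension matches a supported container."""
    return Path(path).suffix.lower() in SUPPORTED
```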
All 40+ parameters are configurable via PipelineConfig or the web GUI. Key parameters:
| Parameter | Description | Default | Range |
|---|---|---|---|
| `detection_confidence` | YOLO confidence threshold | 0.20 | 0.01–0.99 |
| `anonymization_intensity` | Obscuring strength | 10 | 1–100 |
| `person_padding` | Padding around person (px) | 15 | 0–200 |
| `yolo_model` | YOLO model | `yolov8x.pt` | `yolov8x.pt`, `yolov8n.pt` |
| `enable_sliding_window` | 3x3 sliding window grid | `True` | |
| `enable_tracking` | ByteTrack tracking | `True` | |
| `enable_temporal_smoothing` | EMA + ghost boxes | `True` | |
| `smoothing_alpha` | EMA weight (1 = no smoothing) | 0.35 | 0.01–1.0 |
| `ghost_frames` | Ghost frames for occlusions | 10 | 0–120 |
| `enable_adaptive_intensity` | Intensity proportional to size | `True` | |
| `max_refinement_passes` | Auto-refinement iterations | 3 | 1–10 |
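To make `enable_adaptive_intensity` concrete: the idea is that the obscuring strength scales with the person's on-screen size, so distant 30 px figures are not pixelated into oblivion while close subjects still get coarse blocks. A hedged sketch (the function name, reference height, and exact formula are assumptions, not the project's implementation):

```python
def adaptive_block_size(box: tuple, base_intensity: int = 10,
                        ref_height: int = 100) -> int:
    """Scale the pixelation block size with the person's bbox height.
    base_intensity mirrors anonymization_intensity's default of 10."""
    x1, y1, x2, y2 = box
    height = max(1, y2 - y1)
    scale = height / ref_height        # 1.0 at the reference height
    return max(2, round(base_intensity * scale))
```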
```
person_anonymizer/
├── config.py             # PipelineConfig with validation
├── models.py             # Dataclasses (PipelineContext, OutputPaths, etc.)
├── pipeline.py           # Pipeline orchestrator
├── pipeline_stages.py    # Stages: detection, refinement, review
├── output.py             # Output saving and JSON loading
├── cli.py                # CLI entry point
├── detection.py          # YOLO multi-scale + NMS
├── tracking.py           # ByteTrack + TemporalSmoother
├── anonymization.py      # Obscuring + polygon geometry
├── preprocessing.py      # CLAHE, fisheye, motion detection
├── postprocessing.py     # H.264 encoding, post-render check
├── rendering.py          # Video rendering + review stats
├── manual_reviewer.py    # OpenCV manual review GUI
├── camera_calibration.py # Camera calibration utility
├── sam3_backend.py       # SAM3 segmentation backend
├── backend_factory.py    # Backend selection and instantiation
└── web/                  # Flask web interface
    ├── app.py            # Flask routes + SSE + security
    ├── pipeline_runner.py
    ├── sse_manager.py
    └── review_state.py
tests/                    # 293 tests (pytest)
reports/                  # Audit reports
requirements-sam3.txt     # SAM3 optional dependencies
```
```bash
source person_anonymizer/.venv/bin/activate
pytest tests/ -v
ruff check person_anonymizer/
```

See SECURITY.md for full details on implemented protections.
- Ultralytics YOLOv8 – object detection
- Meta SAM3 – pixel-precise segmentation (optional)
- ByteTrack – multi-object tracking
- OpenCV – video processing
- Flask – web interface
- ffmpeg – video encoding
This project is licensed under the Apache License 2.0.
Note: This project depends on Ultralytics YOLOv8 which is licensed under AGPL-3.0. If you use this software as a network service, the AGPL requires that the complete source code be made available. Since this project is already open source, there is no practical conflict. For commercial/proprietary use of YOLO, see Ultralytics Licensing.