This is a proof-of-concept of a distracted driving detector (DDD) that uses video from a camera pointed at a road to detect when a passing driver is distracted. The idea has been on my mind for a long time.
- 🚶‍♂️💨 They Zip Past My Front Door – Cars speed through my quiet neighborhood far too fast.
- 📵 I Keep Catching Them on Their Phones – It worries me how often I see a lit-up screen instead of eyes on the road.
- 😱 Two Near-Misses Were Two Too Many – I literally had to jump back at the crosswalk because the driver never saw me.
- I Want Hard Proof, Not Anecdotes – Data is usually a good way to make a difference.
- 🤖💪 Let the Bots Do the Boring Work – My goal is for AI to handle 95% of the labeling so I can review just the tricky clips over a ☕.

If we can measure it, we can fix it. ✨
This project implements a multi-phase approach to distracted driving detection. The first phase is the infrastructure: record clips, do as much automatic labelling of relevant clips as possible (is there a 🚗?), and provide a web interface to review the clips and classify the drivers as distracted or not. Without any assistance this wouldn't be practical on a busy road, because "luckily" most drivers are not distracted, so you'd sit through a lot of boring footage. My hope is that over time more and more of the true negatives can be labelled by AI, and the human review can shrink to a few minutes a week.
Thanks to my employer Clio, I was able to spend almost three days of our "build it with AI" 2025 Hackathon building the first couple of milestones! Thank you, Clio!
- Motion Detection & Recording ✅ - Automatically records video clips when motion is detected (Watcher)
- Car Detection ✅ - Filters clips to identify those containing cars (Inspector)
- Manual Classification ✅ - Web interface for human review and distraction classification (Classifier)
- Timelapse Photography ✅ - High-resolution photo capture for creating timelapse videos of traffic patterns
- Driver Detection 🔜 - Identifies clips with visible drivers
- Distraction Detection 🔜 - Analyzes driver behavior for signs of distraction
Complete hardware setup including Raspberry Pi, camera, and power supply
Helper web app for camera calibration and testing. This was from before I realized that images are sometimes RGB and sometimes BGR 🤷
Camera mounted and positioned to capture traffic on the road. Early days: the Pi didn't have its shell yet.
And here's the final version; the Pi has its shell now.
The web-based classification interface for manually reviewing and labeling distracted driving clips
What I learned most from this is that photography is hard 📸, WiFi is a pain 📶, and people just drive their cars way too fast 🏎️💨.
To make this a success, we need to:
- 🎥 Solve the issues around quality/lighting of the video clips. There might be low-hanging fruit here with the hardware we've got.
- 🤖 Once the quality is good enough, we can start thinking about the AI part. Driver detection is a good start, and "conventional" models like YOLOv8 might be good enough for it.
- Once we know there's a driver, we can start looking into distraction detection. Maybe there are specialized models out there, maybe LLMs could help here? 🤷‍♂️
The system is split into three specialized components:
- Software: Python console app
- Hardware: Raspberry Pi 4 + Global Shutter Camera
- Role: Motion detection, video recording, storage, timelapse capture
- Dependencies: Lightweight (OpenCV, Picamera2)
- Location: `watcher/` directory
- Config: `watcher/config.py` - Recording and motion detection settings
- Software: Python console app
- Hardware: Standard Ubuntu server with more CPU/memory
- Role: ML processing, car detection, analysis
- Dependencies: Heavy ML libraries (YOLOv8, PyTorch)
- Location: `inspector/` directory
- Config: `inspector/config.py` - ML and analysis settings
- Software: Python web app
- Hardware: Any device with a web browser
- Role: Human review, distraction classification, data management
- Dependencies: Lightweight (Flask, SQLite)
- Location: `classifier/` directory
- Config: `classifier/config.py` - Web app and classification settings
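As an illustration of how thin the Classifier can be, a single Flask route backed by the shared SQLite database is enough to record a Yes/No/Don't Know verdict. The `clips` table and `distracted` column names below are hypothetical, not the real schema:

```python
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
# Shared with the Inspector (see classifier/config.py)
app.config["DATABASE_PATH"] = "../inspector/car_detection.db"

@app.route("/classify/<int:clip_id>", methods=["POST"])
def classify(clip_id):
    # "yes" = distracted, "no" = not distracted, "unknown" = ask again later
    label = request.form["label"]
    if label not in ("yes", "no", "unknown"):
        return jsonify(error="invalid label"), 400
    # Table/column names here are illustrative, not the actual schema
    with sqlite3.connect(app.config["DATABASE_PATH"]) as db:
        db.execute("UPDATE clips SET distracted = ? WHERE id = ?",
                   (label, clip_id))
    return jsonify(ok=True)
```

Because the verdict is just a column update in the Inspector's database, no separate data store or sync step is needed.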
```bash
cd watcher/

# Run the automated setup script
./setup.sh

# Start motion recording
python3 main.py

# (Optional) Start the timelapse capture server
python3 timelapse_capture.py
```

In the watcher output you'll see motion being detected and the clips stored locally.
```bash
cd inspector/

# Install pipenv
pip3 install --user pipenv

# Install ML dependencies
pipenv install

# Test installation
pipenv run test-yolo

# Continuously transfer clips from Watcher and analyze
./run_server_loop.sh
```

In the inspector output you'll see the clips being analyzed and the results stored in the database.
```bash
cd classifier/

# Run the setup script
./setup.sh

# Start the web application
pipenv run start
```

Then open your browser at http://localhost:5001 and you'll see the web interface.
`watcher/config.py`:

```python
# Recording schedule
RECORDING_START_TIME = "08:00"  # 8 AM
RECORDING_END_TIME = "18:00"    # 6 PM

# Motion detection sensitivity
MOTION_THRESHOLD = 80   # Higher = less sensitive
MIN_MOTION_AREA = 1500  # Minimum area to trigger

# Storage settings
STORAGE_DIR = "recorded_clips"
CLIP_RETENTION_DAYS = 7
```

`inspector/config.py`:

```python
# YOLO model settings
MODEL_SIZE = 'n'            # n=nano, s=small, m=medium, l=large
CONFIDENCE_THRESHOLD = 0.5  # Minimum confidence for detection

# Processing settings
SAMPLE_FRAMES = 10       # Frames to sample per video
INPUT_SIZE = (640, 640)  # YOLO input size
```

`classifier/config.py`:

```python
# Flask settings
FLASK_HOST = "0.0.0.0"  # Listen on all interfaces
FLASK_PORT = 5001       # Port number

# Database settings
DATABASE_PATH = "../inspector/car_detection.db"  # Inspector database
VIDEO_DIR = "../inspector/downloaded_clips"      # Video files

# UI settings
MAX_VIDEOS_PER_PAGE = 20  # Clips per page
AUTO_PLAY_VIDEOS = False  # Auto-play option
```

```bash
# Transfer video clips
cd inspector/
./transfer_clips.sh               # Uses default folder
./transfer_clips.sh custom_clips  # Custom folder

# Transfer timelapse photos
cd watcher/
./transfer_timelapse_photos.sh            # Uses default folder
./transfer_timelapse_photos.sh my_photos  # Custom folder
```

```bash
# Copy to NAS from Watcher
cp watcher/recorded_clips/*.mp4 /mnt/nas/ddd_clips/

# Copy from NAS to Inspector
cp /mnt/nas/ddd_clips/*.mp4 ./inspector/
```

Note: The watcher uses atomic file operations to prevent partial file transfers. Files are written to a temporary location first, then moved to the final storage location only when complete.
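The atomic-write pattern mentioned in the note is simple in Python: write to a temporary file on the same filesystem, then rename it into place with `os.replace()`, which is atomic on POSIX. A generic sketch of the pattern, not the Watcher's actual code:

```python
import os
import tempfile

def atomic_write(final_path, data: bytes):
    """Write data so that final_path only ever appears fully written."""
    directory = os.path.dirname(final_path) or "."
    # Temp file must be in the same filesystem for the rename to be atomic
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".part")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure bytes hit the disk
        os.replace(tmp_path, final_path)  # atomic rename into place
    except BaseException:
        os.unlink(tmp_path)
        raise
```

A transfer script polling the folder will therefore only ever see `.part` files (which it can ignore) or complete clips.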
```bash
cd watcher/
python3 main.py
```

```bash
# Start the timelapse capture server
cd watcher/
python3 timelapse_capture.py

# Capture photos from another machine
curl http://raspberrypi-ddd.local:8081/capture

# Set up automated capture (e.g., every 5 minutes)
*/5 * * * * curl http://raspberrypi-ddd.local:8081/capture
```

```bash
cd inspector/
pipenv run analyze-clips
```

```bash
cd classifier/
pipenv run start
```

Then open your browser to http://localhost:5001 and:
- 📺 Review unclassified clips that contain cars
- Watch each video and classify it as:
  - Yes - Driver is distracted
  - No - Driver is not distracted
  - Don't Know - Unable to determine (the clip will appear again later)
- 📊 Track progress with real-time statistics
- Review classification history
- Watcher records motion-triggered video clips
- Timelapse (optional) captures high-resolution photos for traffic pattern analysis
- Inspector analyzes clips for car detection and stores results in database
- Classifier provides web interface for manual distraction classification
- All components share the same database for seamless data flow
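The shared schema isn't documented in this README; a hypothetical minimal version that would support this flow might look like the following (table and column names are my guesses, not the project's real schema):

```python
import sqlite3

# Hypothetical minimal schema tying the three components together
SCHEMA = """
CREATE TABLE IF NOT EXISTS clips (
    id         INTEGER PRIMARY KEY,
    filename   TEXT UNIQUE NOT NULL,  -- written when a clip is transferred
    has_car    INTEGER,               -- set by the Inspector (YOLO result)
    distracted TEXT                   -- set by the Classifier: yes/no/unknown
);
"""

def init_db(path):
    with sqlite3.connect(path) as db:
        db.executescript(SCHEMA)

def clips_awaiting_review(path):
    # The Classifier only shows clips the Inspector flagged as containing a car
    with sqlite3.connect(path) as db:
        return db.execute(
            "SELECT id, filename FROM clips "
            "WHERE has_car = 1 AND distracted IS NULL"
        ).fetchall()
```

Each component only ever fills in its own column, which is what makes the shared-database handoff conflict-free.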
- Lightweight dependencies only (installed via apt)
- Efficient motion detection with minimal CPU usage
- Automatic cleanup prevents storage overflow
- Configurable recording windows reduce unnecessary processing
- YOLOv8n model for speed
- Configurable confidence thresholds
- Frame sampling for efficiency
- Batch processing capabilities
- Efficient database queries with indexing
- Pagination for large datasets
- Responsive web design
- Keyboard shortcuts for quick classification
Watcher:

```bash
cd watcher/
python3 test_motion_detection.py
python3 test_picamera2.py
```

Inspector:

```bash
cd inspector/
pipenv run test-yolo
pipenv run python test_car_detection.py
```

Classifier:

```bash
cd classifier/
pipenv run start
# Then test in browser at http://localhost:5001
```

- Keep the Watcher side lightweight
- Add ML features on the Inspector side
- Use file-based messaging between components
- Test on actual hardware
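"File-based messaging" here just means that each component drops finished files into a folder that the next component polls. A minimal sketch of the consuming side (function and folder names are illustrative, not the actual code):

```python
import os

def poll_for_new_clips(inbox, seen):
    """Return paths of completed clips that appeared since the last poll."""
    new = []
    for name in sorted(os.listdir(inbox)):
        # Only finished .mp4 files count; in-progress transfers use other
        # extensions and are skipped until they are renamed on completion
        if name.endswith(".mp4") and name not in seen:
            seen.add(name)
            new.append(os.path.join(inbox, name))
    return new
```

An Inspector-side loop could call this once a minute and feed any new paths into the analysis step; combined with the Watcher's atomic writes, no locking or queueing infrastructure is needed.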
- Camera not detected: Check connections and permissions
- Motion detection issues: Adjust thresholds in `watcher/config.py`
- Storage full: Enable cleanup in `watcher/config.py`
- YOLOv8 installation: See `inspector/README.md` for setup instructions
- Memory issues: Use a smaller model or increase sampling
- Transfer issues: Check network connectivity
- Database not found: Make sure the Inspector has been run first
- Videos not loading: Check the video directory path in `classifier/config.py`
- Port already in use: Change the port in `classifier/config.py`
```
road-ranger/
├── watcher/                          # Recording & Motion Detection
│   ├── main.py                       # Motion recording (RPi)
│   ├── motion_detector.py            # Motion detection (RPi)
│   ├── video_recorder.py             # Video recording (RPi)
│   ├── timelapse_capture.py          # High-res photo capture (RPi)
│   ├── create_timelapse_video.sh     # Create timelapse videos
│   ├── transfer_timelapse_photos.sh  # Transfer photos from RPi
│   ├── config.py                     # Watcher configuration
│   ├── setup.sh                      # Installation script
│   ├── recorded_clips/               # Video storage
│   ├── timelapse_photos/             # Timelapse photo storage
│   └── README.md                     # Watcher documentation
├── inspector/                        # ML Analysis & Car Detection
│   ├── yolo_car_detector.py          # YOLOv8 detection (server)
│   ├── yolo_car_table.py             # Analysis script (server)
│   ├── database.py                   # Database management
│   ├── transfer_clips.sh             # Transfer clips from RPi
│   ├── config.py                     # Inspector configuration
│   ├── Pipfile                       # Server dependencies
│   ├── server_setup.md               # Server setup guide
│   ├── downloaded_clips/             # Video storage
│   └── README.md                     # Inspector documentation
├── classifier/                       # Manual Classification Interface
│   ├── app.py                        # Flask web application
│   ├── database.py                   # Database wrapper
│   ├── config.py                     # Classifier configuration
│   ├── requirements.txt              # Python dependencies
│   ├── setup.sh                      # Installation script
│   ├── templates/                    # HTML templates
│   └── README.md                     # Classifier documentation
└── README.md                         # This file
```
```bash
# Watcher (Raspberry Pi)
cd watcher/
python3 main.py

# Inspector (Server)
cd inspector/
./run_server_loop.sh

# Classifier (Web Interface)
cd classifier/
pipenv run start
```