
Road Ranger 🏘️🚗🎥🤠🔍🤖📈🛠📉🫂

This is a proof-of-concept of a distracted driving detector (DDD) that uses video from a camera pointed at a road to detect when a passing driver is distracted. It has been on my mind for a long time.

🌟 Motivation: Why Are We Doing This?

πŸšΆβ€β™‚οΈπŸ’¨ They Zip Past My Front Door β€” Cars speed through my quiet neighborhood far too fast.

📵👀 I Keep Catching Them on Their Phones — It worries me how often I see a lit-up screen instead of eyes on the road.

😱✋ Two Near-Misses Were Two Too Many — I literally had to jump back at the crosswalk because the driver never saw me.

πŸ“ŠπŸ—‚οΈ I Want Hard Proof, Not Anecdotes β€” Data is usually a good way to make a difference.

🤖🪄 Let the Bots Do the Boring Work — My goal is for AI to handle 95% of the labeling so I can just review the tricky clips over a ☕.

If we can measure it, we can fix it. ✨

🎯 Project Overview

This project implements a multi-phase approach to distracted driving detection. First comes the infrastructure: recording clips, automatically labeling as many of them as possible (is there a 🚗?), and a web interface for reviewing clips and classifying drivers as distracted or not. Without that assistance this wouldn't be practical on a busy road, because "luckily" most drivers are not distracted, so you'd sit through a lot of boring footage. My hope is that over time more and more of the true negatives can be labeled by AI, reducing human review to a few minutes a week.

Thanks to my employer Clio, I was able to spend almost 3 days of our "build it with AI" 2025 Hackathon building the first couple of milestones! 🎉 Thank you, Clio!

  1. Motion Detection & Recording ✅ - Automatically records video clips when motion is detected (Watcher)
  2. Car Detection ✅ - Filters clips to identify those containing cars (Inspector)
  3. Manual Classification ✅ - Web interface for human review and distraction classification (Classifier)
  4. Timelapse Photography ✅ - High-resolution photo capture for creating timelapse videos of traffic patterns
  5. Driver Detection 🔄 - Identifies clips with visible drivers
  6. Distraction Detection 🔄 - Analyzes driver behavior for signs of distraction

📸 Screenshots

🔧 Hardware Setup

System Components — Complete hardware setup including Raspberry Pi, camera, and power supply

Stream test app — Helper web app for camera calibration and testing. This was from before I realized that images are sometimes RGB and sometimes BGR 🤷

Camera Mounting — Camera mounted and positioned to capture traffic on the road. Early days: the Pi didn't have its shell yet

Raspberry Pi Setup — And here's the final version; the Pi has its shell now

πŸ’» Software Interface

Classifier Web app — The web-based classification interface for manually reviewing and labeling distracted driving clips

🚀 Next Steps

What I learned most from this is that photography is hard 📸, WiFi is a pain 📶, and people just drive their cars way too fast 🏎️💨.

To make this a success, we need to:

  • 🎥 Solve the quality/lighting issues in the video clips. There may be low-hanging fruit here with the hardware we already have.
  • 🤖 Once the quality is good enough, we can start on the AI part. Driver detection is a good first target, and "conventional" models like YOLOv8 might be good enough for it.
  • 👀 Once we know there's a driver, we can look into distraction detection. Maybe there are specialized models out there, maybe LLMs could help here? 🤷‍♀️

πŸ—οΈ Architecture

The system is split into three specialized components:

πŸ•΅οΈ Watcher (Recording Side)

  • Software: Python console app
  • Hardware: Raspberry Pi 4 + Global Shutter Camera
  • Role: Motion detection, video recording, storage, timelapse capture
  • Dependencies: Lightweight (OpenCV, Picamera2)
  • Location: watcher/ directory
  • Config: watcher/config.py - Recording and motion detection settings

πŸ” Inspector (Analysis Side)

  • Software: Python console app
  • Hardware: Standard Ubuntu server with more CPU/memory
  • Role: ML processing, car detection, analysis
  • Dependencies: Heavy ML libraries (YOLOv8, PyTorch)
  • Location: inspector/ directory
  • Config: inspector/config.py - ML and analysis settings

🏷️ Classifier (Manual Review Side)

  • Software: Python web app
  • Hardware: Any device with web browser
  • Role: Human review, distraction classification, data management
  • Dependencies: Lightweight (Flask, SQLite)
  • Location: classifier/ directory
  • Config: classifier/config.py - Web app and classification settings

🚀 Quick Start

1. Watcher Setup (Raspberry Pi)

cd watcher/

# Run the automated setup script
./setup.sh

# Start motion recording
python3 main.py

# (Optional) Start timelapse capture server
python3 timelapse_capture.py

In the watcher output you'll see motion being detected and the clips stored locally.

2. Inspector Setup (Server)

cd inspector/

# Install pipenv
pip3 install --user pipenv

# Install ML dependencies
pipenv install

# Test installation
pipenv run test-yolo

# Continuously transfer clips from Watcher and analyze
./run_server_loop.sh

In the inspector output you'll see the clips being analyzed and the results stored in the database.

3. Classifier Setup (Web Interface)

cd classifier/

# Run the setup script
./setup.sh

# Start the web application
pipenv run start

Then open your browser to: http://localhost:5001 and you'll see the web interface.

βš™οΈ Configuration

📹 Watcher Configuration (watcher/config.py)

# Recording schedule
RECORDING_START_TIME = "08:00"  # 8 AM
RECORDING_END_TIME = "18:00"    # 6 PM

# Motion detection sensitivity
MOTION_THRESHOLD = 80           # Higher = less sensitive
MIN_MOTION_AREA = 1500          # Minimum area to trigger

# Storage settings
STORAGE_DIR = "recorded_clips"
CLIP_RETENTION_DAYS = 7
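For intuition, the two motion settings above can be read as "how different a pixel must be" and "how many pixels must differ". Here is a minimal sketch of frame-differencing motion detection under that reading — illustrative only; the actual logic lives in watcher/motion_detector.py and runs on live camera frames:

```python
import numpy as np

MOTION_THRESHOLD = 80    # per-pixel difference needed to count as "changed"
MIN_MOTION_AREA = 1500   # number of changed pixels needed to trigger recording

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Return True when enough pixels differ between two grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = int(np.count_nonzero(diff > MOTION_THRESHOLD))
    return changed >= MIN_MOTION_AREA

# A static scene triggers nothing; a large bright patch (a passing car) does.
still = np.zeros((480, 640), dtype=np.uint8)
moved = still.copy()
moved[100:200, 100:200] = 255   # 100x100 = 10,000 changed pixels
```

Raising MOTION_THRESHOLD ignores sensor noise and small lighting shifts; raising MIN_MOTION_AREA ignores small objects like birds or leaves.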

🤖 Inspector Configuration (inspector/config.py)

# YOLO model settings
MODEL_SIZE = 'n'                # n=nano, s=small, m=medium, l=large
CONFIDENCE_THRESHOLD = 0.5      # Minimum confidence for detection

# Processing settings
SAMPLE_FRAMES = 10              # Frames to sample per video
INPUT_SIZE = (640, 640)         # YOLO input size
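SAMPLE_FRAMES trades accuracy for speed: rather than running YOLO on every frame, the Inspector only needs a handful of evenly spaced frames per clip. A sketch of how such indices could be chosen (the helper name is mine, not from the repo):

```python
def sample_frame_indices(total_frames: int, sample_frames: int = 10) -> list[int]:
    """Pick `sample_frames` evenly spaced frame indices across a clip."""
    if total_frames <= sample_frames:
        return list(range(total_frames))  # short clip: just use every frame
    step = total_frames / sample_frames
    return [int(i * step) for i in range(sample_frames)]

# A 300-frame clip (10 s at 30 fps) sampled at 10 frames -> every 30th frame
print(sample_frame_indices(300))
# -> [0, 30, 60, 90, 120, 150, 180, 210, 240, 270]
```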

🏷️ Classifier Configuration (classifier/config.py)

# Flask settings
FLASK_HOST = "0.0.0.0"          # Listen on all interfaces
FLASK_PORT = 5001               # Port number

# Database settings
DATABASE_PATH = "../inspector/car_detection.db"  # Inspector database
VIDEO_DIR = "../inspector/downloaded_clips"      # Video files

# UI settings
MAX_VIDEOS_PER_PAGE = 20        # Clips per page
AUTO_PLAY_VIDEOS = False        # Auto-play option
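MAX_VIDEOS_PER_PAGE maps directly onto a LIMIT/OFFSET query against the shared SQLite database. A sketch — the table and column names here are illustrative, not the repo's actual schema:

```python
import sqlite3

MAX_VIDEOS_PER_PAGE = 20

def fetch_page(conn: sqlite3.Connection, page: int) -> list:
    """Fetch one page of clips that still need human review."""
    offset = (page - 1) * MAX_VIDEOS_PER_PAGE
    return conn.execute(
        "SELECT filename FROM clips WHERE classification IS NULL "
        "ORDER BY filename LIMIT ? OFFSET ?",
        (MAX_VIDEOS_PER_PAGE, offset),
    ).fetchall()

# Demo with an in-memory database holding 45 unclassified clips
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clips (filename TEXT, classification TEXT)")
conn.executemany("INSERT INTO clips VALUES (?, NULL)",
                 [(f"clip_{i:03d}.mp4",) for i in range(45)])
print(len(fetch_page(conn, 1)))  # 20
print(len(fetch_page(conn, 3)))  # 5
```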

📂 File Transfer Options

🔄 Option 1: Use Transfer Scripts

# Transfer video clips
cd inspector/
./transfer_clips.sh                    # Uses default folder
./transfer_clips.sh custom_clips       # Custom folder

# Transfer timelapse photos
cd watcher/
./transfer_timelapse_photos.sh             # Uses default folder
./transfer_timelapse_photos.sh my_photos   # Custom folder

πŸ“ Option 2: Network Share

# Copy to NAS from Watcher
cp watcher/recorded_clips/*.mp4 /mnt/nas/ddd_clips/

# Copy from NAS to Inspector
cp /mnt/nas/ddd_clips/*.mp4 ./inspector/

Note: The watcher uses atomic file operations to prevent partial file transfers. Files are written to a temporary location first, then moved to the final storage location only when complete.
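That atomic-write pattern is worth spelling out, because a plain write would let a transfer job pick up a half-written file. A stdlib-Python sketch of the idea (the function name is mine, not the watcher's actual API):

```python
import os
import tempfile

def save_clip_atomically(data: bytes, final_path: str) -> None:
    """Write to a temp file in the same directory, then rename into place.

    os.replace() is atomic on a single filesystem, so anything listing the
    directory sees either no clip or a complete one -- never a partial file.
    """
    dirname = os.path.dirname(final_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure bytes hit disk before the rename
        os.replace(tmp_path, final_path)
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on failure
        raise
```

The temp file must live in the same directory (or at least the same filesystem) as the destination; a cross-filesystem rename degrades to copy-then-delete and is no longer atomic.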

🎬 Usage

📹 Phase 1: Motion Recording (Watcher)

cd watcher/
python3 main.py

📸 Timelapse Photography (Optional)

# Start the timelapse capture server
cd watcher/
python3 timelapse_capture.py

# Capture photos from another machine
curl http://raspberrypi-ddd.local:8081/capture

# Set up automated capture (e.g., every 5 minutes)
*/5 * * * * curl http://raspberrypi-ddd.local:8081/capture

πŸ” Phase 2: Car Detection (Inspector)

cd inspector/
pipenv run analyze-clips

🏷️ Phase 3: Manual Classification (Classifier)

cd classifier/
pipenv run start

Then open your browser to http://localhost:5001 and:

  1. 📺 Review unclassified clips that contain cars
  2. 👀 Watch each video and classify as:
    • Yes - Driver is distracted
    • No - Driver is not distracted
    • Don't Know - Unable to determine (will appear again later)
  3. 📊 Track progress with real-time statistics
  4. 📋 Review classification history

🔄 Complete Workflow

  1. Watcher records motion-triggered video clips
  2. Timelapse (optional) captures high-resolution photos for traffic pattern analysis
  3. Inspector analyzes clips for car detection and stores results in database
  4. Classifier provides web interface for manual distraction classification
  5. All components share the same database for seamless data flow
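Concretely, "share the same database" means each component reads and writes its own columns of the same clips table. An illustrative end-to-end flow with a made-up schema (the real schema is defined in inspector/database.py):

```python
import sqlite3

# Illustrative schema -- not the repo's actual one.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE clips (
    filename TEXT PRIMARY KEY,
    has_car INTEGER,          -- written by the Inspector (YOLO result)
    classification TEXT       -- written by the Classifier (human review)
)""")

# Inspector: record that YOLO found a car in a transferred clip
conn.execute("INSERT INTO clips VALUES (?, ?, NULL)", ("clip_001.mp4", 1))

# Classifier: a human marks the driver as not distracted
conn.execute("UPDATE clips SET classification = 'no' WHERE filename = ?",
             ("clip_001.mp4",))

row = conn.execute("SELECT has_car, classification FROM clips").fetchone()
print(row)  # (1, 'no')
```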

⚡ Performance Notes

🚀 Watcher Optimization

  • Lightweight dependencies only (installed via apt)
  • Efficient motion detection with minimal CPU usage
  • Automatic cleanup prevents storage overflow
  • Configurable recording windows reduce unnecessary processing

🤖 Inspector Optimization

  • YOLOv8n model for speed
  • Configurable confidence thresholds
  • Frame sampling for efficiency
  • Batch processing capabilities

💻 Classifier Optimization

  • Efficient database queries with indexing
  • Pagination for large datasets
  • Responsive web design
  • Keyboard shortcuts for quick classification

💻 Development

🧪 Testing Components

Watcher:

cd watcher/
python3 test_motion_detection.py
python3 test_picamera2.py

Inspector:

cd inspector/
pipenv run test-yolo
pipenv run python test_car_detection.py

Classifier:

cd classifier/
pipenv run start
# Then test in browser at http://localhost:5001

πŸ› οΈ Adding Features

  1. Keep Watcher side lightweight
  2. Add ML features to Inspector side
  3. Use file-based messaging between components
  4. Test on actual hardware

πŸ› οΈ Troubleshooting

📹 Watcher Issues

  • Camera not detected: Check connections and permissions
  • Motion detection issues: Adjust thresholds in watcher/config.py
  • Storage full: Enable cleanup in watcher/config.py

🤖 Inspector Issues

  • YOLOv8 installation: See inspector/README.md for setup instructions
  • Memory issues: Use a smaller model or sample fewer frames per clip
  • Transfer issues: Check network connectivity

💻 Classifier Issues

  • Database not found: Make sure Inspector has been run first
  • Videos not loading: Check video directory path in classifier/config.py
  • Port already in use: Change port in classifier/config.py

πŸ—‚οΈ Project Structure

road-ranger/
├── watcher/                    # Recording & Motion Detection
│   ├── main.py                # Motion recording (RPi)
│   ├── motion_detector.py     # Motion detection (RPi)
│   ├── video_recorder.py      # Video recording (RPi)
│   ├── timelapse_capture.py   # High-res photo capture (RPi)
│   ├── create_timelapse_video.sh  # Create timelapse videos
│   ├── transfer_timelapse_photos.sh   # Transfer photos from RPi
│   ├── config.py              # Watcher configuration
│   ├── setup.sh               # Installation script
│   ├── recorded_clips/        # Video storage
│   ├── timelapse_photos/      # Timelapse photo storage
│   └── README.md              # Watcher documentation
├── inspector/                  # ML Analysis & Car Detection
│   ├── yolo_car_detector.py   # YOLOv8 detection (server)
│   ├── yolo_car_table.py      # Analysis script (server)
│   ├── database.py            # Database management
│   ├── transfer_clips.sh      # Transfer clips from RPi
│   ├── config.py              # Inspector configuration
│   ├── Pipfile               # Server dependencies
│   ├── server_setup.md        # Server setup guide
│   ├── downloaded_clips/      # Video storage
│   └── README.md              # Inspector documentation
├── classifier/                 # Manual Classification Interface
│   ├── app.py                # Flask web application
│   ├── database.py           # Database wrapper
│   ├── config.py             # Classifier configuration
│   ├── requirements.txt      # Python dependencies
│   ├── setup.sh              # Installation script
│   ├── templates/            # HTML templates
│   └── README.md             # Classifier documentation
└── README.md                  # This file

📦 Quick Commands

# Watcher (Raspberry Pi)
cd watcher/
python3 main.py

# Inspector (Server)
cd inspector/
./run_server_loop.sh

# Classifier (Web Interface)
cd classifier/
pipenv run start
