ICEalert

Inspiration

In 2025, 68,000+ unlawful immigration detentions have taken place across major U.S. states like Minnesota, California, and Texas, often unfolding quickly and without centralized, real-time public visibility. Families frequently rely on scattered social media posts, group chats, or word of mouth to stay informed. By the time information spreads, it can already be outdated.

We built ICEalert to change that. 🛡️

ICEalert is about protection through awareness: giving communities timely, spatial, and verified insights so they can make informed, safety-first decisions when it matters most.


What it does

ICEalert is a real-time civic awareness platform that:

  • πŸ—ΊοΈ Displays nearby ICE-related activity on a live map
  • πŸ“‘ Uses a proximity radar to visualize how close incidents are
  • πŸ”” Sends push notifications when activity appears near the user
  • πŸ“Š Assigns dynamic danger levels based on live AI detections
  • πŸ•’ Shows time, distance, and severity of each incident
  • πŸŽ₯ Monitors multiple livestreams with real-time AI overlays (admin dashboard)

The experience blends:

  • A Luma-style immersive map interface
  • An Uber-style sliding bottom panel
  • A radar visualization with layered danger zones
  • A real-time list of nearby incidents

The tone is calm, civic-first, and protective, not alarmist.


How we built it

ICEalert combines computer vision, real-time streaming, and spatial intelligence.

🧠 Training the Model (Bright Data)

We trained a YOLO-based object detection model on 500+ curated images of ICE enforcement activity.

Training data was sourced and curated using Bright Data web agent searches, allowing us to collect diverse and realistic examples from public online sources.

This gave our model:

  • Varied lighting conditions
  • Different geographic contexts
  • Real-world enforcement visuals
  • Multiple vehicle and officer configurations
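Each curated image is paired with a bounding-box annotation; YOLO-family models use a plain-text label format with one normalized box per line. A minimal parser sketch of that format (the class mapping mentioned in the comment is an assumption, not our actual label schema):

```python
from typing import List, NamedTuple

class Box(NamedTuple):
    cls: int    # class index (e.g. vehicle vs. officer; mapping is assumed)
    cx: float   # box center x, normalized to [0, 1]
    cy: float   # box center y, normalized to [0, 1]
    w: float    # box width, normalized
    h: float    # box height, normalized

def parse_yolo_labels(text: str) -> List[Box]:
    """Parse YOLO-format labels: one 'cls cx cy w h' line per object."""
    boxes: List[Box] = []
    for line in text.strip().splitlines():
        if not line.strip():
            continue
        cls, cx, cy, w, h = line.split()
        boxes.append(Box(int(cls), float(cx), float(cy), float(w), float(h)))
    return boxes
```

Keeping coordinates normalized to the image size is what lets the same labels work across the varied resolutions collected by the web agent searches.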

⚡ Live Inference Layer (Modal + WebRTC)

For real-time detection, we deployed our model to Modal using GPU-backed inference.

  • Public livestreams are ingested via WebRTC.
  • Frames are sampled and batched efficiently.
  • Modal processes frames in real time.
  • Detection results are returned with timestamps and stream identifiers.
  • Overlays appear live in the admin dashboard.

This architecture allows us to monitor multiple streams concurrently while maintaining low latency.
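The sampling-and-batching step above can be sketched in pure Python. This is an illustrative sketch, not our production pipeline; the sampling rate and batch size shown are arbitrary example values:

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def sample_frames(frames: Iterable[T], every_n: int) -> Iterator[T]:
    """Keep every Nth frame to reduce GPU inference load."""
    for i, frame in enumerate(frames):
        if i % every_n == 0:
            yield frame

def batch(items: Iterable[T], size: int) -> Iterator[List[T]]:
    """Group sampled frames into fixed-size batches for one GPU call."""
    buf: List[T] = []
    for item in items:
        buf.append(item)
        if len(buf) == size:
            yield buf
            buf = []
    if buf:  # flush the final partial batch
        yield buf

# Example: sample every 3rd frame, batch pairs for inference
# for frames_batch in batch(sample_frames(stream, 3), 2): detect(frames_batch)
```

Batching amortizes GPU overhead across frames from several streams at once, which is where most of the throughput gain comes from.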


🎥 Admin Dashboard

The admin interface:

  • Monitors several livestreams at once
  • Displays live YOLO bounding box overlays
  • Shows per-stream danger scoring
  • Tracks detection history in real time

This dashboard allows centralized monitoring across regions such as:

  • Minnesota
  • Los Angeles (California)
  • Texas metropolitan regions

📱 Mobile App

Built with React Native (Expo + TypeScript), the mobile app includes:

  • Full-screen live map
  • Heatmap overlay
  • Proximity radar (3 concentric danger layers)
  • Draggable Uber-style bottom panel
  • Real-time alert feed
  • Push notifications for nearby high-risk activity
  • Incident detail screen with summary and context

Updates are delivered via WebSocket; no refresh required.
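The radar's three concentric danger layers map distance to severity. A minimal sketch of proximity scoring with great-circle distance; the 1/5/15 km thresholds are illustrative assumptions, not the app's actual values:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates, in kilometers."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

def danger_level(distance_km: float) -> str:
    """Map distance to one of three radar rings (thresholds are examples)."""
    if distance_km <= 1.0:
        return "high"
    if distance_km <= 5.0:
        return "medium"
    if distance_km <= 15.0:
        return "low"
    return "none"
```

In practice the ring a detection falls into would also be weighted by recency and detection confidence before triggering a push notification.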


Challenges we ran into

  • βš™οΈ Efficiently batching multiple livestream frames for GPU inference
  • πŸ” Maintaining low-latency updates across concurrent streams
  • πŸ“ Converting raw detections into meaningful proximity-based danger scores
  • 🎯 Designing a radar visualization that feels intuitive instead of overwhelming
  • βš–οΈ Framing a sensitive product responsibly while keeping the UX calm and civic

The biggest challenge wasn't the model; it was orchestration.


Accomplishments that we're proud of

  • 🚀 Training a YOLO model on 500+ curated enforcement images
  • ⚡ Deploying live object detection on Modal with WebRTC ingestion
  • 📡 Successfully monitoring multiple concurrent livestreams
  • 🎥 Working admin dashboard with live detection overlays
  • 📱 Fully functional mobile app with:

    • Heatmap
    • Radar visualization
    • Sliding panel UI
    • Push notifications
  • 🧠 Real-time danger scoring and geo-indexed alerting

Most importantly, we built something that feels empowering rather than reactive.
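Geo-indexed alerting can be sketched with a simple grid index: bucket incidents into latitude/longitude cells, then query a cell plus its neighbors. This is an illustrative toy, not the app's actual indexing scheme; production systems typically use geohashes or a spatial database:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Cell = Tuple[int, int]

def cell_for(lat: float, lon: float, cell_deg: float = 0.1) -> Cell:
    """Bucket a coordinate into a grid cell (~11 km per 0.1 deg of latitude)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

class GeoIndex:
    """Toy geo index: store incidents per cell, query cell + 8 neighbors."""

    def __init__(self) -> None:
        self.cells: Dict[Cell, List[str]] = defaultdict(list)

    def add(self, incident_id: str, lat: float, lon: float) -> None:
        self.cells[cell_for(lat, lon)].append(incident_id)

    def nearby(self, lat: float, lon: float) -> List[str]:
        cx, cy = cell_for(lat, lon)
        found: List[str] = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                found.extend(self.cells.get((cx + dx, cy + dy), []))
        return found
```

Looking up only a handful of cells per user keeps alert fan-out cheap even as the incident count grows.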


What we learned

  • Real-time AI systems are orchestration challenges more than model challenges.
  • GPU batching dramatically improves throughput and stability.
  • WebRTC ingestion requires careful latency management.
  • Spatial UX (maps + radar) communicates urgency better than raw numbers.
  • Responsible framing matters when building in sensitive domains.

What's next for ICEalert

  • πŸ” Expand training data for improved model robustness
  • πŸ“Š Add historical trend analysis and pattern clustering
  • 🀝 Introduce moderated community reporting
  • 🌎 Expand livestream coverage across additional states
  • 🧠 Integrate contextual AI-generated summaries for incidents
  • πŸ›  Scale inference for higher concurrent stream loads

Our long-term vision:

A responsible, real-time civic awareness system that protects families through transparency, clarity, and informed decision-making. 🛡️
