Inspiration

Every day, our entire team commutes on the TTC. And almost every day, we witness something unsettling — a person in mental distress shouting on the platform, someone slumped unconscious against a pillar, or an aggressive confrontation that leaves other riders frozen and scared. Transit staff are spread thin across massive stations, and by the time someone reports an incident, it's often too late to respond effectively. We built WatchLine because we live this problem. We wanted to give transit systems the ability to see what their staff can't — and act before things escalate.

What it does

WatchLine is a real-time AI-powered transit safety system that monitors subway station camera feeds and automatically detects dangerous situations. Using computer vision and pose estimation, it identifies events like people falling, lying on the ground, aggressive behaviour such as punching, erratic movement, and crouching. The moment something is detected, an AI-generated emergency summary is created and pushed instantly to a live operations dashboard, where staff see a red alert screen, hear an alarm, and get a plain-English description of what's happening and where. Every incident is also permanently recorded on the Solana blockchain as an immutable audit log, and all camera footage travels exclusively over a Tailscale private network — never touching the public internet.

How we built it

We built WatchLine across three layers. The AI detection layer runs YOLOv8-pose locally on the camera device, tracking 17 body keypoints per person per frame. On top of that we wrote custom classifiers in Python that analyze joint angles, wrist velocity, body aspect ratios and motion history to identify five distinct threat events with tuned confidence thresholds. When a detection fires, it POSTs to our FastAPI backend hosted on Vultr, which calls the Google Gemini 2.0 Flash API to generate a 2-sentence dispatch summary, writes the incident to Supabase, logs it to Solana Devnet, and broadcasts the alert via native WebSocket to the dashboard. The frontend is built in React with TanStack React Query and Zustand, showing a live camera grid, real-time alert overlays, and an incident log — all updating the moment an event is detected.
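The geometry-based classifiers can be sketched roughly like this. This is an illustrative, simplified version of one of the five event checks (a "person lying down / fallen" test via body aspect ratio), not the tuned production code; the keypoint indices follow the standard COCO 17-keypoint layout that YOLOv8-pose emits, and the threshold value is an assumption for demonstration.

```python
# Hypothetical sketch: detecting a "fall / lying down" event from
# YOLOv8-pose keypoints via the body's bounding-box aspect ratio.
# Keypoints are (x, y) pixel coordinates in the COCO 17-point layout
# (0 = nose, 5/6 = shoulders, 11/12 = hips, 15/16 = ankles).
# The threshold below is illustrative, not our tuned value.

def aspect_ratio(keypoints):
    """Width / height of the tight box around all detected keypoints."""
    xs = [x for x, y in keypoints]
    ys = [y for x, y in keypoints]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return width / max(height, 1e-6)

def is_lying_down(keypoints, ratio_threshold=1.3):
    """A standing person is taller than wide (ratio < 1); a person on
    the ground is wider than tall (ratio > 1). The margin above 1
    guards against crouching poses."""
    return aspect_ratio(keypoints) > ratio_threshold

# Toy skeletons: a roughly vertical one vs. a horizontal one.
standing = [(100 + (i % 3) * 10, 50 + i * 20) for i in range(17)]
lying = [(50 + i * 20, 100 + (i % 3) * 10) for i in range(17)]

print(is_lying_down(standing))  # False
print(is_lying_down(lying))     # True
```

The real classifiers layer motion history on top of static geometry like this, so a single odd frame never fires an alert on its own.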

Challenges we ran into

The hardest problem was false positives. Early versions of our aggression detector fired on normal walking and arm waving. We solved this by requiring two simultaneous conditions — wrist speed exceeding 1.5 body-heights per second AND elbow extension above 140 degrees — which eliminated false triggers while preserving real punch detection. Synchronizing the async FastAPI WebSocket broadcaster with the synchronous detection pipeline required careful use of asyncio.create_task. We also had to redesign our Gemini integration mid-build when we realized video upload latency was too slow for real-time alerts, switching to a text-only structured prompt approach that responds instantly.
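The two-condition aggression rule described above can be sketched as follows. Function and variable names here are illustrative; the thresholds (1.5 body-heights per second of wrist speed, 140 degrees of elbow extension) come from the writeup, and normalizing speed by body height is what keeps the rule invariant to how far a person is from the camera.

```python
import math

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow joint in degrees (180 = fully extended arm)."""
    ax, ay = shoulder[0] - elbow[0], shoulder[1] - elbow[1]
    bx, by = wrist[0] - elbow[0], wrist[1] - elbow[1]
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_punch(wrist_prev, wrist_now, dt, body_height, shoulder, elbow,
             speed_thresh=1.5, angle_thresh=140.0):
    """Fire only when BOTH conditions hold in the same frame:
    high normalized wrist speed AND a nearly straight elbow."""
    speed = math.dist(wrist_prev, wrist_now) / dt / body_height  # body-heights/s
    angle = elbow_angle(shoulder, elbow, wrist_now)
    return speed > speed_thresh and angle > angle_thresh

# Fast extended-arm strike: 0.4 body-heights in 0.1 s (4 bh/s), straight elbow.
print(is_punch((0, 0), (80, 0), dt=0.1, body_height=200,
               shoulder=(-60, 0), elbow=(10, 0)))   # True

# Fast but bent-elbow arm wave (~120 degrees): rejected.
print(is_punch((0, 0), (80, 0), dt=0.1, body_height=200,
               shoulder=(-60, 0), elbow=(0, 40)))   # False
```

Requiring the conjunction is what killed the false positives: walking swings the wrist slowly, and waving keeps the elbow bent, so neither satisfies both conditions at once.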

Accomplishments that we're proud of

We're proud that the entire detection-to-dashboard pipeline works end to end in real time. From a camera frame being processed to a red alert appearing on the dashboard with a Gemini-generated summary takes under two seconds. We're also proud of the classifier quality — five distinct threat types detected purely through geometry and motion math, no training data required. Building a system that could realistically be deployed in a real transit environment, not just a demo, felt meaningful given why we built it.

What we learned

We learned how to apply specialized AI tools to specific jobs — running YOLOv8 pose estimation locally for detection and integrating the Google Gemini 2.0 Flash API for dispatch summaries. The biggest takeaway, though, was the experience of working together as a team. Few of us had worked on a project of this scale before, and despite the miscommunications and errors that came with that, we pulled through to build something we're proud of.

What's next for WatchLine

We want to expand detection to include platform edge proximity alerts — identifying when someone is dangerously close to the tracks. We plan to add voice distress detection using microphone input to catch screaming or calls for help that cameras can't see. On the infrastructure side, we want to build a multi-station dashboard where a single operator can monitor an entire transit network, with heatmaps showing which stations have the highest incident rates over time. Ultimately we want WatchLine running on existing TTC camera infrastructure — no new hardware required.
