Inspiration

We were inspired by the app Citizen, which allows users to receive alerts based on police radio and crowd-sourced media. While Citizen excels at community awareness, we saw untapped potential to directly assist first responders. What if real-time visual evidence could prepare responders before they arrive? That vision led us to create FirstSight.

What it does

FirstSight lets citizens capture incidents through photos or videos and instantly share them with nearby first responders. Our AI processes the media in real time, analyzing and summarizing key details from the scene, then sends an alert with geolocation and a concise report. Responders receive visual and contextual information before arriving, and nearby citizens can contribute additional media for continuous updates.

How we built it

We built the frontend using React Native, TypeScript, Expo, and Gluestack UI. The backend is built with Python, Quart, MongoDB, and the Gemini Python library, with Google Gemini providing the computer-vision analysis of uploaded media.
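The core backend flow (media upload → Gemini analysis → responder alert) can be sketched roughly as below. This is a minimal, framework-agnostic sketch: `summarize_media` is a hypothetical stand-in for the real Gemini vision call, and the actual app would wrap `process_report` in a Quart route and persist the alert to MongoDB.

```python
from dataclasses import dataclass, asdict

@dataclass
class IncidentAlert:
    """Payload pushed to nearby first responders."""
    summary: str  # concise AI-generated scene report
    lat: float    # geolocation of the capture
    lon: float

def summarize_media(media: bytes, mime_type: str) -> str:
    # Hypothetical stand-in for the Gemini vision request; the real backend
    # sends the photo/video to Google Gemini and returns its summary text.
    return "Two-vehicle collision at intersection; one person appears injured."

def process_report(media: bytes, mime_type: str, lat: float, lon: float) -> dict:
    """Analyze uploaded media and build the alert sent to responders."""
    summary = summarize_media(media, mime_type)
    return asdict(IncidentAlert(summary=summary, lat=lat, lon=lon))
```

In the real app this runs inside an async Quart endpoint so uploads and the Gemini call don't block other reporters.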

Challenges we ran into

- One of our core team members leaving early on Saturday for prom.
- Ensuring fast, reliable media uploads under poor connectivity.
- Compiling the application and running into a sea of red errors.
- Starting overly ambitious and needing to downsize due to time constraints.
- React Native itself.

Accomplishments that we're proud of

- Striving to accomplish an ambitious project.
- Prompt-engineering Google Gemini.
- Building a working prototype that captures, analyzes, and would have notified responders in seconds.
- Successfully integrating AI summarization with live media.
- Designing a clean, user-friendly UI for citizen reporters and responders.

What we learned

- Real-time systems need seamless coordination across network, backend, and AI layers.
- Empowering citizens with the right tools makes communities more resilient.

What's next for FirstSight

We would like to add text-to-speech functionality and, of course, actually finish the app.

Built With

react-native, typescript, expo, gluestack, python, quart, mongodb, google-gemini