- The map displays real-time fire data from NASA FIRMS.
- For high- and critical-risk areas, FireProof automatically routes the user to the recommended evacuation location via OpenRouteService.
- The user can verbally ask for help through ElevenLabs; the RAG response is read aloud and also logged for future reference.
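The FIRMS layer above boils down to one HTTP GET. A minimal sketch of how such a request URL can be built, assuming FIRMS's CSV area endpoint and the VIIRS_SNPP_NRT source; the map key and bounding box are placeholders, not the app's actual values:

```python
# Sketch: build a NASA FIRMS area-API URL for active-fire detections.
# Endpoint shape per FIRMS docs; MAP_KEY is a placeholder for a real key.
FIRMS_BASE = "https://firms.modaps.eosdis.nasa.gov/api/area/csv"

def firms_url(map_key, source, bbox, day_range=1):
    """Return a CSV query URL for fires inside a (west, south, east, north) box."""
    west, south, east, north = bbox
    return f"{FIRMS_BASE}/{map_key}/{source}/{west},{south},{east},{north}/{day_range}"

# Hypothetical example: fires near the SF Bay Area in the last 24 hours.
url = firms_url("MAP_KEY", "VIIRS_SNPP_NRT", (-123.0, 36.9, -121.2, 38.9))
```

Each CSV row carries a detection's latitude, longitude, and acquisition time, which is enough to plot markers on the map and compute the distance to the user.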
System Architecture Visualization
Team member info
- Akansha Bansiya (May '27): Master of Software Engineering @ SJSU
- Devonny Grajeda (March '27): Bachelor's in Computer Science @ SCU
- Tian Herng Tan (May '26): Master of Electrical Engineering and Computer Science @ UC Berkeley
- Jay Timbol (December '27): Bachelor's in Software Engineering @ SJSU
- Yiying (Irene) Xie (May '27): Master of Computer Science @ Northeastern University
Sub-Prize Submissions
- Best Graduate Hack
- Best Hack by Womxn in Tech
Sponsor Prize Submissions
- Best Project Built with ElevenLabs
- Best Use of AMD Tech
- Best Use of Responsible AI
- Future Unicorn Award
Inspiration
Wildfires start and move quickly, and during these times people are often stressed, driving, or have limited internet access. So, we wanted to create an accessible tool that tells you when a fire is nearby, recommends a safe route, and explains what to do in the moment.
What it does
1. Evacuation routing: provides a safe escape route with turn-by-turn style guidance.
2. Voice hotline interface: allows users to speak naturally and receive real-time spoken guidance during wildfire emergencies.
3. Wildfire safety guidance: gives step-by-step advice grounded in vetted documents (RAG).
4. Fast, hands-free experience: the application is designed for high-stress conditions where reading a screen isn't realistic.
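The evacuation-routing item above is, at its core, a single directions request. A hedged sketch of assembling one against OpenRouteService's v2 directions endpoint; the API key and coordinates are placeholders, and note that ORS expects (lon, lat) order:

```python
import json

ORS_DIRECTIONS = "https://api.openrouteservice.org/v2/directions/driving-car"

def ors_request(api_key, start, end):
    """Assemble an ORS directions call; start/end are (lon, lat) tuples."""
    return {
        "url": ORS_DIRECTIONS,
        "headers": {"Authorization": api_key, "Content-Type": "application/json"},
        "body": json.dumps({"coordinates": [list(start), list(end)]}),
    }

# Hypothetical example: Berkeley to a shelter in San Francisco.
req = ors_request("ORS_KEY", (-122.27, 37.87), (-122.41, 37.77))
# POSTing req["body"] to req["url"] returns a route whose steps can be
# read out as turn-by-turn guidance.
```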
How we built it
The system determines the user's location and wildfire risk level using location-based data. If the user is in a high- or critical-risk area, FireProof immediately prioritizes evacuation directions to help them leave the area safely.
If the user is in a medium or low risk area, they are given the option to either ask a question for wildfire guidance or request evacuation directions.
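The branching described above can be captured in a couple of small functions. The distance thresholds here are purely illustrative placeholders, not the app's real cutoffs:

```python
def risk_tier(nearest_fire_km):
    """Map distance to the nearest active fire onto a risk tier.
    Thresholds are hypothetical, for illustration only."""
    if nearest_fire_km < 5:
        return "critical"
    if nearest_fire_km < 15:
        return "high"
    if nearest_fire_km < 50:
        return "medium"
    return "low"

def next_action(tier):
    """High/critical users go straight to evacuation routing;
    everyone else chooses between Q&A guidance and directions."""
    return "evacuate" if tier in ("critical", "high") else "ask_or_evacuate"
```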
If the user chooses to ask for advice, their audio is transcribed using ElevenLabs Speech-to-Text. The system then runs a Retrieval-Augmented Generation (RAG) pipeline that retrieves relevant wildfire safety guidance from a vector database. This information is passed to an instruction-tuned LLM which generates a grounded response.
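The retrieval half of that pipeline is a nearest-neighbor query against pgvector. A sketch of the core SQL, assuming an illustrative `safety_docs(content, embedding)` table; the table name, columns, and `k` are our stand-ins for the project's actual schema, and `<=>` is pgvector's cosine-distance operator:

```python
# Top-k retrieval over a pgvector column (schema names are illustrative).
TOP_K = 4

RETRIEVAL_SQL = """
SELECT content
FROM safety_docs
ORDER BY embedding <=> %(query_embedding)s::vector
LIMIT %(k)s;
"""

# With a psycopg cursor, the question's embedding (from the same model
# used at indexing time) would be passed as a parameter:
#   cur.execute(RETRIEVAL_SQL, {"query_embedding": question_emb, "k": TOP_K})
# The returned rows are concatenated into the LLM prompt as grounding context.
```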
The final response is converted back into audio using ElevenLabs Text-to-Speech and delivered back to the user.
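Closing the loop, the grounded answer goes back through Text-to-Speech. A hedged sketch of assembling that call against ElevenLabs' v1 TTS endpoint as we used it; the voice ID, model ID, and key are placeholders, so check the current ElevenLabs docs before relying on the exact fields:

```python
import json

def tts_request(api_key, voice_id, text):
    """Assemble an ElevenLabs text-to-speech call
    (field names per our usage of the v1 API; values are placeholders)."""
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": api_key, "Content-Type": "application/json"},
        "body": json.dumps({"text": text, "model_id": "eleven_multilingual_v2"}),
    }

req = tts_request("XI_KEY", "VOICE_ID", "Evacuate north on Route 1.")
# POSTing req["body"] to req["url"] returns audio bytes to play to the user.
```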
Challenges we ran into
1. Hosting the backend consisting of FastAPI, vLLM, and pgvector on AMD cloud.
2. Integrating ElevenLabs API for speech to text and text to speech.
3. Making an Android APK for the application.
Accomplishments that we're proud of
1. Designing a safety-focused decision system.
Instead of treating every situation the same (which is never appropriate), the system prioritizes immediate evacuation directions for critical areas, ensuring that users in urgent situations receive the most critical information first.
2. Building a complete end-to-end voice system.
We successfully created a pipeline that takes spoken input, transcribes it with speech-to-text, grounds an LLM answer with RAG retrieval, and returns a spoken response back to the user.
3. Grounding AI responses with RAG.
By implementing Retrieval-Augmented Generation, we ensured that responses are based on vetted wildfire safety guidance rather than relying solely on the language model's internal knowledge.
4. Combining multiple technologies into one application.
We were able to integrate speech processing, vector search, LLM reasoning, and routing services into a single system designed for real-time emergency support.
What we learned
1. Integrating multiple AI systems into a single pipeline.
We learned how to connect speech processing, retrieval systems, and large language models into a single working system. Coordinating Speech-to-Text, vector retrieval, LLM reasoning, and Text-to-Speech required careful planning/execution so the experience felt responsive.
2. Designing for high-stress user situations.
We realized that in emergency situations simplicity matters a lot more than features. For example, designing the system to immediately prioritize evacuation directions in critical/high-risk areas helped us think about how AI systems should behave when users are under pressure.
3. Hosting a production-level backend.
We leveraged Docker containers and Docker Compose to orchestrate the FastAPI, vLLM, and pgvector services on the AMD cloud. Setting up the containers so they could talk to each other and to the broader internet was a real learning experience in networking, routing, and permissions/security.
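The orchestration described above can be sketched as a compose file along these lines; the service names, ports, model, and image tags are illustrative, not our exact configuration:

```yaml
# Illustrative compose layout for the FastAPI + vLLM + pgvector stack.
services:
  api:
    build: ./backend              # FastAPI app served by uvicorn
    ports: ["8000:8000"]
    depends_on: [db, llm]
    environment:
      DATABASE_URL: postgresql://app:app@db:5432/fireproof
      LLM_URL: http://llm:8000/v1 # vLLM's OpenAI-compatible endpoint
  llm:
    image: vllm/vllm-openai:latest
    command: ["--model", "Qwen/Qwen2.5-7B-Instruct"]
  db:
    image: pgvector/pgvector:pg16 # Postgres with the pgvector extension
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: fireproof
```

On a shared compose network, services reach each other by service name (e.g. `db`, `llm`), which was exactly the networking piece we had to learn.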
What's next for FireProof
1. Feed location specific documents to the RAG
Currently, the RAG is filled with general guidance sourced from multiple government websites like ready.gov, the CDC, and the EPA. We want to add location-specific guidance, such as California state-specific or Yellowstone National Park-specific guidance, depending on where the user is.
2. Commercialization
We want to help rural communities stay safe, and the best way to do that is to expand our services with more features and location-specific support. Securing funding, whether through ads, a subscription fee, or government grants, would support our mission.
3. Expand voice functions
The voice feature currently supports interacting only with our own hosted LLMs. We want to expand support to help call 911 or message other contacts on your phone. Instead of being Siri-like, we would give it more functionality, making it a multi-purpose AI agent capable of running on your phone.
Built With
- amd-cloud
- docker
- docker-compose
- elevenlabapi
- expo.io
- fastapi
- nasa-firms-public-api
- openrouteservice
- openstreetmap
- pgvector
- postgresql
- react-native
- retrieval-augmented-generation
- uvicorn
- vllm