Inspiration
We come from a country where earthquakes are not distant headlines — they’re lived realities. We've seen firsthand how chaotic and overwhelming the rescue process can be when buildings collapse and time is running out.
That’s why we asked ourselves: How can we use the tools we have — AI, XR, and edge computing — to support the people risking their lives to save others?
Project Mithril was born from that urgency: a vision to give rescuers clearer insight, faster decisions, and a better chance at finding those who are still alive — but hidden from view.
What it does
Project Mithril is an AI-powered augmented reality application designed to support search-and-rescue operations in the chaotic aftermath of earthquakes. Rescuers often face dangerous, unstable environments with little to no visibility into where survivors might be trapped. Mithril transforms this experience by equipping teams with AR headsets that scan the interior of collapsed buildings in real time. The system identifies structural elements like slabs, columns, and beams, then analyzes their angles and positions to detect possible survivable voids. Using this information, it highlights safe paths and warns about hazardous areas—all within the rescuer’s field of view. The result is faster, safer, and more focused rescue efforts where every second counts.
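The void-detection idea above can be sketched in a few lines: once a SLAM plane-fitting step yields surface normals, the tilt of each detected slab hints at what lies beneath it. The thresholds and category names below are illustrative assumptions, not the project's calibrated values.

```python
import math

# Hypothetical thresholds (not the project's tuned values): a slab tilted
# between ~10° and ~45° from horizontal often leans against debris,
# leaving a triangular survivable void beneath it.
VOID_MIN_DEG, VOID_MAX_DEG = 10.0, 45.0

def inclination_deg(normal):
    """Tilt of a detected planar surface from horizontal.

    `normal` is the plane's unit normal (nx, ny, nz) with +z pointing up,
    as produced by a SLAM plane-fitting step.
    """
    nx, ny, nz = normal
    # A horizontal slab has a vertical normal, so the slab's tilt equals
    # the angle between its normal and the z-axis.
    return math.degrees(math.acos(abs(nz)))

def classify_plane(normal):
    tilt = inclination_deg(normal)
    if tilt < VOID_MIN_DEG:
        return "intact-slab"      # near-horizontal: floor or ceiling
    if tilt <= VOID_MAX_DEG:
        return "possible-void"    # leaning slab: flag for rescuers
    return "wall-or-column"       # near-vertical element

print(classify_plane((0.0, 0.0, 1.0)))    # flat slab
print(classify_plane((0.0, 0.5, 0.866)))  # slab leaning ~30°
```

In the headset, a "possible-void" classification would drive the highlight overlay in the rescuer's field of view, while "wall-or-column" surfaces feed the hazard warnings.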
How we built it
Mithril was built by combining accessible consumer hardware with optimized artificial intelligence. We use Meta Quest 3 headsets for their passthrough AR capabilities and pair them with the NVIDIA Jetson Orin for real-time, offline computing. Our custom computer vision model is trained to detect patterns common in building collapses, such as inclined slabs forming voids, overhangs that may create shelter pockets, and pancake collapses indicating tight survival spaces. The system also incorporates a behavioral heatmap engine that factors in the time of the earthquake, building type, and typical human behavior to prioritize search zones. Everything is designed to be modular and run offline, so teams can deploy the system instantly, even in areas with zero connectivity.
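A minimal sketch of the behavioral heatmap idea: each zone type carries a prior probability of occupancy given the hour of the quake, and zones are searched in order of that prior. The weights, zone names, and day/night cutoffs here are illustrative assumptions, not the engine's real calibration.

```python
# Illustrative occupancy priors: probability a person is in each zone
# type, by building type and time of day. Values are made up for the
# sketch, not the project's trained weights.
OCCUPANCY_PRIORS = {
    "residential": {
        "bedroom": {"night": 0.70, "day": 0.10},
        "kitchen": {"night": 0.10, "day": 0.35},
        "living":  {"night": 0.15, "day": 0.40},
        "hallway": {"night": 0.05, "day": 0.15},
    },
    "office": {
        "open-plan": {"night": 0.05, "day": 0.60},
        "meeting":   {"night": 0.02, "day": 0.20},
        "hallway":   {"night": 0.03, "day": 0.20},
    },
}

def search_priority(building_type, hour):
    """Rank zones to search first, given building type and quake hour."""
    period = "night" if hour < 7 or hour >= 22 else "day"
    zones = OCCUPANCY_PRIORS[building_type]
    return sorted(zones, key=lambda z: zones[z][period], reverse=True)

print(search_priority("residential", 3))   # night quake: bedrooms first
print(search_priority("office", 14))       # daytime: open-plan first
```

In practice this ranking would be blended with the void-detection output, so a high-prior zone that also contains a likely survivable void rises to the top of the search order.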
Challenges we ran into
One of the biggest challenges was running complex computer vision models on-device without sacrificing speed or accuracy. To address this, we optimized our models through pruning and quantization, tailoring them specifically for the Jetson Orin platform. AR tracking in rubble-heavy environments also presented unique difficulties, requiring us to refine our spatial mapping techniques for accuracy under visually confusing conditions. Additionally, access to annotated datasets of real earthquake collapse scenes was limited, so we created a mix of simulations and physics-based renderings to train and validate our models. Perhaps most importantly, working directly with USAR teams helped us recognize how critical usability was; we had to redesign the interface to be effective even under stress, dust, and noise.
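The pruning-plus-quantization step can be sketched with standard PyTorch utilities. The tiny `nn.Sequential` below is a toy stand-in for the detection head, and real Jetson deployments would typically compile through TensorRT rather than use dynamic quantization; this only illustrates the two-stage optimization idea.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for the detection head; the real model is a custom
# computer vision network, not reproduced here.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))

# 1. Pruning: zero out the 30% smallest-magnitude weights per layer,
#    shrinking compute with little accuracy loss after fine-tuning.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the sparsity permanent

# 2. Quantization: convert Linear weights to int8 for faster inference
#    on embedded targets (shown with dynamic quantization for brevity).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

sparsity = (model[0].weight == 0).float().mean().item()
print(f"layer-0 sparsity after pruning: {sparsity:.0%}")
```

The order matters: pruning first, then fine-tuning, then quantization lets the network recover accuracy before its weights are locked into low precision.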
Accomplishments that we're proud of
We’re proud to have created a fully offline, real-time AR solution that runs on consumer-grade hardware costing under $2,000—well within the budget range of most emergency teams. Our simulations successfully identified likely survivor zones and provided valuable situational awareness within seconds of deployment. More than just a tech demo, Mithril is a practical tool ready for field use. It’s designed to fit within the current workflows and training standards of rescue teams, which means it can be adopted quickly without extensive re-training. Most importantly, it offers a new layer of vision and insight to those who put themselves at risk to save others.
What we learned
Throughout the development of Project Mithril, we learned that speed and trust are everything in emergency response. Rescuers don’t just want data—they need information that is clear, actionable, and arrives exactly when they need it. We also discovered that transparency matters: users want to understand why a zone was flagged as dangerous or safe. Working with limited hardware proved that innovation doesn’t have to be expensive—clever software can turn consumer devices into life-saving tools. Finally, we realized that designing for field conditions means thinking beyond the screen and building for real-world constraints like gloves, dust, noise, and urgency.
What's next for Project Mithril
Looking ahead, we’re expanding our training datasets through collaborations with academic institutions and disaster-response organizations to improve model accuracy across diverse building types and collapse patterns. We plan to integrate audio sensors and thermal imaging to increase detection accuracy, especially in low-visibility conditions. Haptic feedback is also in development, so rescuers can receive subtle physical cues alongside visual ones. We’re preparing for broader deployment trials with FEMA and international partners to test and refine Mithril in real-world rescue missions. Ultimately, we aim to open the platform to outside developers and NGOs through an API, enabling customization and collaboration that can bring life-saving vision to more people around the world.
Built With
- mcp
- meta-quest-3
- nvidia-jetson-orin
- slam
- unity