Inspiration
Sarah is from California, where wildfires are a reality she lives with every year. We were also inspired by the challenge of finding injured people in rubble after disasters — whether earthquakes, fires, or conflict zones. We asked ourselves: what if robots could go into the most dangerous areas first, so medics know exactly where survivors are before they risk their own lives?
That's how RuffTerrain was born — a real-time search & rescue operations platform that deploys autonomous robots into active wildfire zones to locate survivors.
What it does
RuffTerrain sends a robot into a mapped wildfire zone in Pacific Palisades, Los Angeles. The robot navigates around active fire fronts using real-time fire simulation data, scanning for people using computer vision. It classifies whether a person is likely injured (lying down) or ambulatory (standing), streams chemical sensor readings (CO, methane, thermal risk), and relays everything to a live operator dashboard. The fire simulation spreads realistically over time, and the robot autonomously patrols the fire's edge — the most likely place to find trapped survivors.
How we built it
- Charlie focused on the frontend UI, the computer vision pipeline, the injury classification model, and robot navigation logic.
- Sarah handled the robot implementation, fire spread simulation, web scraping for live fire data, and Cyberwave digital twin integration.
The dashboard is built in Next.js with Leaflet for real-time mapping. The fire simulation runs client-side with tuned spread probabilities calibrated to real chaparral wildfire speeds. Robot telemetry flows through Cyberwave's MQTT platform, and we built a ScrapeGraphAI-powered scraper to pull live fire incident data from CAL FIRE and InciWeb.
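The client-side fire simulation is essentially a probabilistic cellular automaton: each burning cell has some chance per tick of igniting its unburned neighbours, with a bonus for uphill spread. A minimal Python sketch of one simulation step is below — the `SPREAD_P` and `UPHILL_BONUS` values are illustrative placeholders, not our calibrated chaparral rates, and the real version also models ember spotting.

```python
import random

# Sketch of one tick of a probabilistic fire-spread simulation.
# grid: 2D list, 1 = burning, 0 = unburned; elevation: 2D list of heights.
# The probabilities here are illustrative, not the calibrated values.
SPREAD_P = 0.15       # base chance a burning cell ignites a neighbour per tick
UPHILL_BONUS = 0.10   # extra chance when the neighbour is uphill

def step(grid, elevation, rng=random.random):
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]  # next state; never mutate mid-tick
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 1:
                continue
            # Try to ignite each of the four orthogonal neighbours
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    p = SPREAD_P
                    if elevation[nr][nc] > elevation[r][c]:
                        p += UPHILL_BONUS  # fire accelerates uphill
                    if rng() < p:
                        nxt[nr][nc] = 1
    return nxt
```

Double-buffering the grid (`nxt` vs `grid`) matters: igniting cells in place would let fire leap across the map within a single tick.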
Challenges we ran into
- Integrating the physical robot into the software platform — bridging the gap between hardware telemetry and a real-time web dashboard required careful coordination between MQTT messaging, the Cyberwave digital twin, and our frontend.
- Injury detection accuracy — our current ML approach uses posture as a proxy: standing → not injured, lying down → potentially injured. It works as a proof of concept, but real-world triage classification is a much harder problem.
- Fire simulation performance — rendering thousands of grid cells with animated fire spread while keeping robot movement smooth at 60fps required decoupling the rendering pipeline from React's state updates.
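The telemetry side of that hardware/software bridge boils down to subscribing to the robot's MQTT topic and decoding each message into dashboard-ready fields. Here is a hedged Python sketch using paho-mqtt — the broker host, topic name, and payload fields are illustrative assumptions, not Cyberwave's actual API.

```python
import json

def parse_telemetry(payload: bytes) -> dict:
    """Decode one robot telemetry message into dashboard-ready fields.
    The payload shape (lat/lon/co_ppm/thermal_risk) is an assumed example."""
    data = json.loads(payload)
    return {
        "position": (data["lat"], data["lon"]),
        "co_ppm": data.get("co_ppm", 0.0),
        "thermal_risk": data.get("thermal_risk", "unknown"),
    }

def run_bridge(host="broker.example.invalid",
               topic="ruffterrain/robot/+/telemetry"):
    """Subscribe to robot telemetry and relay it onward (host/topic are
    hypothetical placeholders)."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    def on_message(client, userdata, msg):
        # paho-mqtt calls this for every publish on the subscribed topic
        reading = parse_telemetry(msg.payload)
        print(f"{msg.topic}: {reading}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(host, 1883)
    client.subscribe(topic)
    client.loop_forever()  # blocks; run in its own thread/process
```

Keeping the payload parser separate from the MQTT plumbing let us test the data path without a live broker or robot.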
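The posture-as-proxy rule itself can be as simple as looking at the shape of the person detector's bounding box: wide boxes suggest someone lying down, tall boxes someone standing. A minimal sketch — the 1.0 aspect-ratio threshold is an illustrative choice, not our tuned value:

```python
# Posture-as-proxy triage sketch: classify a detected person from the
# aspect ratio of their bounding box. Threshold is illustrative only.

def classify_posture(box):
    """box = (x, y, width, height) from a person detector."""
    _, _, w, h = box
    aspect = w / h
    # Wider than tall -> likely lying down -> potentially injured
    return "potentially_injured" if aspect > 1.0 else "ambulatory"
```

This is exactly why we call it a proof of concept: a crouching firefighter or a person lying on a slope breaks the heuristic, and real triage needs far richer signals.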
Accomplishments that we're proud of
- A fully working real-time dashboard that visualizes fire spread, robot position, and survivor detections on a live map — all updating simultaneously.
- Fire simulation calibrated to real-world chaparral wildfire spread rates, with organic growth patterns including ember spotting and uphill acceleration.
Built With
- cal-fire
- computer-vision (injury/posture detection)
- css
- cyberwave (MQTT digital twin platform)
- inciweb
- leaflet / react-leaflet (mapping)
- next.js
- paho-mqtt (robot telemetry)
- pydantic (data validation)
- python
- react
- scrapegraphai (web scraping)
- tailwind-css
- typescript