Inspiration

Over 2 billion people worldwide live with vision impairments, yet core assistive tools like white canes have seen little innovation. Meanwhile, modern AI vision systems are cloud-dependent and unusable offline. WayFinder was inspired by the need for a privacy-first, real-time AI system that meaningfully improves mobility and safety for visually impaired individuals while modernizing public accessibility infrastructure.

What it does

WayFinder is an on-device AI guide that uses Visual Language Models (VLMs) to provide real-time audio feedback for visually impaired users: describing surroundings, detecting obstacles, reading text, and guiding navigation. In parallel, it anonymously aggregates hazard data (e.g., potholes, blocked walkways) into a centralized platform for universities and governments to improve public safety and infrastructure planning.
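The anonymous hazard aggregation can be sketched as follows. This is an illustrative assumption of what such a pipeline might look like, not WayFinder's actual schema: the field names, rounding precision, and `anonymize` helper are all hypothetical.

```python
from dataclasses import dataclass, asdict
import time

# Hypothetical sketch: a hazard report stripped of identifying detail
# before it leaves the device. Field names and coarsening choices are
# illustrative assumptions, not WayFinder's actual data model.

@dataclass(frozen=True)
class HazardReport:
    hazard_type: str   # e.g. "pothole", "blocked_walkway"
    lat: float         # coarsened to ~100 m so users can't be pinpointed
    lon: float
    day: str           # date bucket only; no precise timestamp

def anonymize(hazard_type: str, lat: float, lon: float,
              ts: float) -> HazardReport:
    """Coarsen location and time so individual trips can't be traced."""
    coarse = lambda x: round(x, 3)  # 3 decimals ~ 111 m at the equator
    day = time.strftime("%Y-%m-%d", time.gmtime(ts))
    return HazardReport(hazard_type, coarse(lat), coarse(lon), day)

report = anonymize("pothole", 40.712776, -74.005974, 1700000000.0)
print(asdict(report))
```

The key design point is that coarsening happens on-device, so the centralized platform only ever receives data that is already aggregate-friendly.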

How I built it

I built WayFinder with a local-first edge AI architecture. A quantized multi-billion-parameter VLM runs directly on a mobile device for offline inference. Google Gemini supports higher-level reasoning when available, ElevenLabs provides natural audio narration, and DigitalOcean hosts lightweight services for anonymized hazard aggregation and dashboards. The system integrates mobile OS internals, ML frameworks, and backend infrastructure.
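The local-first control flow can be sketched as below. All three functions are stand-ins I invented for illustration: `local_vlm_describe` for the on-device quantized VLM, `cloud_reason` for Gemini, and `speak` for ElevenLabs narration; none of these are real API calls.

```python
# Minimal sketch of a local-first pipeline with cloud fallback.
# Every function here is a placeholder stub, not a real SDK call.

def local_vlm_describe(frame: bytes) -> str:
    # Stand-in for on-device quantized VLM inference.
    return "crosswalk ahead, curb on the right"

def cloud_reason(description: str) -> str:
    # Stand-in for higher-level cloud reasoning; here we simulate
    # being offline to exercise the fallback path.
    raise ConnectionError("offline")

def speak(text: str) -> str:
    # Stand-in for text-to-speech narration.
    return text

def guide(frame: bytes) -> str:
    description = local_vlm_describe(frame)      # always runs on-device
    try:
        description = cloud_reason(description)  # richer reasoning when online
    except ConnectionError:
        pass                                     # offline: local result stands
    return speak(description)

print(guide(b"\x00"))
```

The point of the structure is that the local result is computed first and the cloud path only ever refines it, so losing connectivity degrades quality rather than availability.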

Challenges I ran into

The biggest challenge was system integration on mobile edge devices. Running large VLMs required overcoming memory limits, framework incompatibilities, and latency constraints. Balancing real-time performance, battery usage, and privacy, while keeping the system usable, pushed me to explore unconventional deployment strategies and deep optimization.
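Back-of-envelope math shows why quantization is unavoidable here. The numbers below are illustrative (a hypothetical 3B-parameter model), not WayFinder's actual configuration:

```python
# Rough weight-memory estimate: params * bits / 8 bytes.
# A 3B-parameter model is an assumed example, not the deployed model.

def weight_bytes(params: float, bits_per_weight: int) -> float:
    return params * bits_per_weight / 8

params = 3e9
fp16_gb = weight_bytes(params, 16) / 1e9  # full-precision weights
int4_gb = weight_bytes(params, 4) / 1e9   # 4-bit quantized weights

print(fp16_gb, int4_gb)  # → 6.0 1.5
```

At fp16 the weights alone would exceed the memory a mobile OS grants a single app, while 4-bit quantization brings them into a feasible range, before even counting activations and the KV cache.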

Accomplishments that I'm proud of

Running a multi-billion-parameter VLM locally on a mobile device, enabling real-time, offline navigation assistance. Designing a privacy-preserving civic data pipeline. Bridging AI, accessibility, and public infrastructure into one system.

What I learned

I learned that impactful AI is not just about model size, but where intelligence lives. Edge AI enables privacy, reliability, and accessibility that cloud-only systems can’t. I also learned the importance of interdisciplinary thinking when building AI for real-world human needs.

What’s next for WayFinder

Next, I plan to further reduce latency and improve real-time feedback through better model optimization. I also want to integrate WayFinder into existing assistive tools, such as smart walking sticks with embedded cameras and microphones, to make AI-powered navigation even more seamless and accessible.
