SenseRoute - AI-Powered Virtual Assistant
Inspiration
Millions of visually impaired individuals around the world face daily hurdles navigating unfamiliar environments. While tools like white canes and guide dogs provide basic assistance, they lack situational awareness and intelligent feedback. Our team was inspired to bridge this critical gap using technology—enabling not just navigation, but independence, safety, and dignity.
We wanted to create something affordable, intelligent, and impactful, especially for underserved communities. That vision gave birth to SenseRoute, an AI-powered assistant that understands the environment and communicates it in real time to visually impaired users.
What it does
SenseRoute is a smart mobility assistant that provides real-time voice-based guidance to the visually impaired using computer vision and AI.
Key features include:
- **Object Detection (YOLOv8):** Identifies obstacles such as vehicles, fire, poles, and people.
- **Text-to-Speech Feedback:** Describes the surroundings to the user instantly.
- **Emergency Alerts:** Sends emails to caregivers when dangerous objects are detected.
- **OCR (Optical Character Recognition):** Reads printed text such as signs, boards, and books.
- **Voice Commands:** Enables hands-free interaction for navigation and assistance.
- **News Reader:** Reads daily headlines to keep users informed.

All features run through a mobile or wearable camera, making the system portable and affordable.
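The description and alert features above can be sketched as a small routing helper. This is an illustrative sketch, not SenseRoute's actual code: the function name and the hazard set are assumptions.

```python
# Hypothetical sketch: turn one frame's detected labels into a spoken sentence
# plus the subset of labels that should trigger a caregiver alert.

HAZARD_LABELS = {"fire", "car", "truck", "knife"}  # assumed hazard set, for illustration

def route_detections(labels):
    """Return (spoken sentence, hazards needing a caregiver alert) for one frame."""
    unique = sorted(set(labels))  # de-duplicate repeated detections
    spoken = "I see " + ", ".join(unique) if unique else "The path looks clear"
    hazards = [label for label in unique if label in HAZARD_LABELS]
    return spoken, hazards
```

For example, `route_detections(["car", "person", "car"])` yields the sentence `"I see car, person"` and the hazard list `["car"]`, which the alert module could then act on.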
How we built it
Our tech stack includes:

- **Python** for backend logic and integrations.
- **YOLOv8** for real-time object detection and scene understanding.
- **OpenCV** for image preprocessing and OCR.
- **gTTS / pyttsx3** for converting detected data to speech.
- **Flask** (for local API testing) and **SMTP** for email alerts.
- **Android interface** using Kivy or React Native to access camera and voice features.
The system architecture connects the camera feed → runs it through the detection model → extracts object labels → delivers output to the user via voice and alerts.
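That camera → model → label → voice chain can be sketched as a frame-processing loop. Here `detect_objects` is a stub standing in for YOLOv8 inference and `speak` stands in for pyttsx3/gTTS output; both names and the stubbed detections are assumptions for illustration only.

```python
# Skeletal frame-processing step: camera frame -> detection -> labels -> voice output.

def detect_objects(frame):
    # Real system: run the YOLOv8 model on the frame and collect class names.
    return ["person", "pole"]  # stubbed result for illustration

def speak(text, spoken_log):
    # Real system: a TTS call such as pyttsx3's engine.say(text);
    # here we just record what would be spoken.
    spoken_log.append(text)

def process_frame(frame, spoken_log):
    labels = detect_objects(frame)
    if labels:
        speak("Ahead: " + ", ".join(labels), spoken_log)
    return labels
```

In the real system this function would be called once per captured frame (or on a throttled interval to keep latency down), with the camera feed supplying `frame`.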
Challenges we ran into
- **Real-time Processing:** Ensuring low-latency object detection on mobile devices was tough.
- **Model Optimization:** YOLOv8's size required pruning and conversion (ONNX, TensorRT) for smoother performance.
- **Accurate Voice Feedback:** Ensuring clarity and language support in the speech output took fine-tuning.
- **Privacy and Alerting:** Designing the email alert system without compromising user privacy was a challenge.
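One way to keep the alert system privacy-conscious is to send only the hazard label and a timestamp, with no camera imagery attached. The sketch below shows that idea using Python's standard `email` module; the addresses, subject line, and function name are placeholders, not SenseRoute's actual values.

```python
# Hypothetical privacy-minded alert email: only the hazard label and a timestamp,
# never a camera frame, leave the device.

from email.message import EmailMessage
from datetime import datetime, timezone

def build_alert(hazard, caregiver="caregiver@example.com"):
    msg = EmailMessage()
    msg["From"] = "senseroute.alerts@example.com"  # placeholder sender address
    msg["To"] = caregiver
    msg["Subject"] = f"SenseRoute alert: {hazard} detected"
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    # Body carries only the minimum needed: what was detected and when.
    msg.set_content(f"A {hazard} was detected near the user at {stamp} UTC.")
    return msg
```

The resulting message would then be handed to `smtplib` for delivery; because no image frames are attached, the caregiver learns about the hazard without the camera feed ever leaving the device.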
Accomplishments that we're proud of
- Developed a working prototype that recognizes objects and speaks the results in real time.
- Built an automated emergency alert system for critical hazards.
- Integrated multi-feature functionality: object detection, OCR, news reading, and voice commands.
- Ensured accessibility-first design at every stage.
What we learned
- Mastered real-time object detection models (YOLOv8) and optimized them for edge use.
- Understood the importance of UX for accessibility: timing, feedback, and audio clarity.
- Gained experience building assistive tech that blends AI with empathy.
What's next for SenseRoute - AI-Powered Virtual Assistant
- **Navigation Integration:** Add GPS-based, voice-guided routing for outdoor mobility.
- **Multilingual Support:** Support local Indian languages and dialects in speech output.
- **AI Fine-tuning:** Incorporate personalized learning based on the user's environment and behavior.
- **Native Android App:** Deploy as a native Android app and launch on the Play Store.
- **Community Beta Testing:** Gather feedback from real users and organizations serving the visually impaired.
Built With
- firebase
- flask
- gmail-api
- google-colab
- google-tts
- gtts
- html/css
- javascript
- news
- numpy
- opencv
- pandas
- pyaudio
- python
- pyttsx3
- react-native
- smtp
- speechrecognition
- tesseract-ocr
- yolov8