Inspiration
The inspiration behind AbleAssist comes from the desire to create a tool that removes communication barriers for individuals with hearing, speech, or visual impairments. We wanted to develop an app that empowers these individuals to interact confidently with the world around them, whether through spoken or written language or even sign language. By combining innovative technologies like speech recognition, object detection, and AI, we aim to create a more inclusive environment where everyone can communicate and engage without limitations.
What it does
AbleAssist is a platform with three key features:
- Text to Speech (TTS): converts written text into spoken language.
- Speech to Text (STT): converts spoken language into written text.
- ASL to English: translates American Sign Language (ASL) gestures into English.
How we built it
We used a combination of AI, machine learning models, and APIs:
- For TTS and STT: we integrated speech-processing libraries, namely Google’s Text-to-Speech (gTTS) and SpeechRecognition.
- For ASL to English: we used a hand-gesture recognition model trained on ASL data to detect signs and translate them into English.
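The TTS and STT paths can be sketched roughly as below, assuming the gTTS and SpeechRecognition packages are installed; the exact wiring in our app differs, and `recognize_google` requires network access:

```python
def text_to_speech(text, out_path="speech.mp3", lang="en"):
    """Synthesize text to an MP3 file with Google's TTS service via gTTS."""
    from gtts import gTTS  # imported lazily so STT can run without gTTS
    gTTS(text=text, lang=lang).save(out_path)
    return out_path

def speech_to_text(wav_path):
    """Transcribe a WAV file using the SpeechRecognition library."""
    import speech_recognition as sr
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the whole file
    return recognizer.recognize_google(audio)
```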
Challenges we ran into
- Accuracy: ensuring accurate translation for both the ASL and speech-to-text features, especially in noisy environments.
- Real-time processing: keeping communication seamless, without delays, while processing speech or gestures.
- User experience: designing an intuitive interface that is accessible to everyone, including users with disabilities.
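One mitigation for the noise problem is SpeechRecognition’s built-in ambient-noise calibration, sketched here with an assumed calibration window rather than our final tuning:

```python
def transcribe_with_noise_calibration(wav_path, calibration_seconds=0.5):
    """Transcribe a WAV file after calibrating the energy threshold
    against the first fraction of a second of ambient noise."""
    import speech_recognition as sr
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        # Listen briefly to estimate background noise before recording
        recognizer.adjust_for_ambient_noise(source, duration=calibration_seconds)
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)
```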
We also had trouble merging the ASL converter with the UI. We have attached a demo of the ASL converter and will continue working on this integration.
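The core of the ASL converter looks roughly like the loop below: MediaPipe Hands extracts 21 landmarks per frame from the webcam, the landmarks are normalized, and a classifier maps them to a sign. Here `classify_sign` is a hypothetical stand-in for our trained model, and `normalize_landmarks` is one plausible preprocessing step, not necessarily the exact one we use:

```python
def normalize_landmarks(landmarks):
    """Translate (x, y) landmarks so the wrist (index 0) is the origin,
    making the features invariant to where the hand sits in the frame."""
    wx, wy = landmarks[0]
    return [(x - wx, y - wy) for x, y in landmarks]

def run_asl_loop(classify_sign):
    """Capture webcam frames, detect one hand, and print predicted signs."""
    import cv2
    import mediapipe as mp
    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            pts = [(lm.x, lm.y)
                   for lm in result.multi_hand_landmarks[0].landmark]
            print(classify_sign(normalize_landmarks(pts)))
    cap.release()
```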
Accomplishments that we're proud of
- Successfully integrating all three features into one platform.
- Enabling real-time text-to-speech and speech-to-text conversion.
- Training an ASL recognition model that provides accurate translations of gestures.
What we learned
- The importance of building inclusive tools that address real-world challenges faced by people with disabilities.
- How AI and machine learning can be leveraged to create accessible solutions.
- The need for continuous improvement in accuracy and user experience.
What's next for AbleAssist
Object Detection: We want to integrate object detection to help visually impaired users identify objects in their environment. Using an object-detection model such as YOLO, built with a framework like TensorFlow, the app will let users scan their surroundings and get spoken feedback about the objects around them—things like “This is a chair,” or “There’s a table in front of you.” This will help users navigate spaces with ease.
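One piece of this feature can already be sketched: turning a detector’s class labels into the spoken-feedback sentences described above. The detector itself (a hypothetical YOLO-style wrapper) is out of scope here; this only shows the sentence-building step that would feed the existing TTS path:

```python
def describe_detections(labels):
    """Turn a list of detected class labels into one spoken-feedback
    sentence suitable for text-to-speech output."""
    if not labels:
        return "No objects detected in front of you."
    if len(labels) == 1:
        return f"There is a {labels[0]} in front of you."
    return "In front of you: " + ", ".join(labels[:-1]) + f" and {labels[-1]}."
```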
AI Voice Assistant: We’re looking to add an AI voice assistant to make AbleAssist even more interactive. The voice assistant will allow users to control the app entirely by voice. They could ask it to “Translate my ASL,” “Convert this text to speech,” or “What’s in front of me?” This hands-free interaction will make it easier for users to get help while on the go, without having to touch the app.
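A first cut of the voice assistant could route recognized utterances to features with simple keyword matching, as sketched below; the command phrases and feature names are assumptions, and a production version might instead use an LLM such as Gemini for intent parsing:

```python
def route_command(utterance):
    """Map a spoken request to one of AbleAssist's features
    using naive keyword matching (a placeholder for real intent parsing)."""
    text = utterance.lower()
    if "asl" in text or "sign" in text:
        return "asl_to_english"
    if "text to speech" in text or "read" in text:
        return "text_to_speech"
    if "in front of me" in text or "what's around" in text:
        return "object_detection"
    return "speech_to_text"  # default: transcribe whatever was said
```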
Multi-Device and Platform Support: We want to expand AbleAssist’s reach by ensuring it works across multiple devices and platforms. Whether users are on their phone, tablet, or even a smart speaker, we want them to be able to use the app anywhere. This will give users more flexibility and accessibility.
Integration with Assistive Devices: We want AbleAssist to integrate with existing assistive technologies—like hearing aids, Braille devices, and prosthetics—so that users can have a more connected experience. Whether it’s reading text through a hearing aid or displaying ASL translations on a Braille device, we aim to make AbleAssist a seamless part of users' daily lives.
Real-Time Language Translation: We also want to improve language support. Expanding the app to handle real-time translations between ASL and other languages (e.g., Spanish or French) would help users connect with a larger global audience, making the app more versatile and inclusive for people worldwide.
Built With
- gemini
- github
- javascript
- mediapipe
- opencv
- python
- reactnative
- tensorflow
- vscode