Inspiration
In today’s world, communication and learning are becoming increasingly digital. However, most communication tools and learning platforms are still designed around sound, unintentionally excluding Deaf and hard-of-hearing communities. This reflects a broader systemic bias where silence is treated as a limitation rather than a valid and powerful form of communication.
American Sign Language (ASL) is not very prominent in mainstream education, and many people never learn it, not because they don’t want to, but because existing tools are inaccessible, passive, or unengaging. We were inspired to create a modern, digital, and sustainable way for people to learn ASL that aligns with today’s technology-driven world while addressing accessibility and inclusion.
What it does
Silent Speak is an AI-powered ASL learning application that teaches users ASL through short, fun, interactive lessons.
The app uses computer vision to analyze hand gestures through a webcam and provide real-time feedback on whether a sign is performed correctly. Lessons are broken into short, manageable segments with progress bars, visual cues, and quizzes at the end of each level. This approach makes ASL more understandable, more fun to learn, and easier to stay committed to over time.
By visualizing progress and learning outcomes, Silent Speak reduces the bias that exists around silence by making visual communication approachable and engaging.
How we built it
We built Silent Speak as a web-based, accessible, and efficient application designed to scale sustainably.
- React + TypeScript power the frontend, with MySQL on the backend to store user data securely
- Computer vision detects and interprets hand gestures in real time
- A level-based learning system encourages gradual improvement through small daily progress, which is stored in MySQL alongside other user data
- Visual feedback tools such as progress bars and quizzes help users track their learning
- The platform is fully digital, reducing reliance on physical resources
Our design prioritizes accessibility and efficiency, ensuring the app can continue to operate and improve over time with minimal environmental and resource cost.
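The real-time gesture check described above can be sketched in Python. This is an illustrative assumption about the approach, not our exact model: it assumes 21 (x, y) hand landmarks per frame, as produced by a hand-tracking library such as MediaPipe, and compares them against a stored reference pose for the sign.

```python
import math

# A hand pose is 21 (x, y) landmarks, following the 21-point hand model
# used by libraries like MediaPipe (landmark 0 is the wrist).
# The template-matching logic and the tolerance value are illustrative.

def normalize(landmarks):
    """Translate to the wrist and scale by the farthest landmark,
    so comparison ignores hand size and position in the frame."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def matches_template(landmarks, template, tolerance=0.15):
    """Return True if the mean landmark distance to the reference
    sign template falls within the tolerance."""
    a, b = normalize(landmarks), normalize(template)
    mean_dist = sum(math.hypot(ax - bx, ay - by)
                    for (ax, ay), (bx, by) in zip(a, b)) / len(a)
    return mean_dist <= tolerance
```

Normalizing before comparing is what lets the same template match a hand anywhere in the camera frame, at any distance.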
Challenges we ran into
One of the main challenges we faced was connecting the frontend and backend in a way that was both reliable and efficient. Ensuring smooth communication between the user interface, the database, and the AI components required careful handling of requests, data flow, and error states. Debugging issues across the full stack, especially under hackathon time constraints, was a significant learning experience.
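One habit that helped with the error states mentioned above was validating every request body before it touched the database. A minimal sketch of that idea (field names here are hypothetical, not the actual Silent Speak API):

```python
import json

# Hypothetical validator for a lesson-progress endpoint. Returning an
# HTTP-style (status, body) pair keeps malformed input from ever
# reaching the MySQL layer.

REQUIRED_FIELDS = {"user_id": int, "lesson_id": int, "score": float}

def handle_progress_update(raw_body):
    """Validate a JSON request body; return (status, response dict)."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400, {"error": "invalid JSON"}
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in payload:
            return 400, {"error": f"missing field: {field}"}
        if not isinstance(payload[field], ftype):
            return 400, {"error": f"wrong type for: {field}"}
    return 200, {"status": "saved"}
```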
Another challenge was keeping AI-based gesture recognition accurate while staying easy to use. Computer vision can be affected by lighting, camera quality, and hand positioning, so we focused on finding a model that could reliably detect ASL signs despite these variations.
A final challenge was designing lessons that balanced novelty with simplicity. We wanted the experience to feel fun and engaging without overwhelming beginners who had never interacted with ASL before.
Accomplishments that we're proud of
- Implementing AI and computer vision to actively teach ASL rather than relying on passive content
- Creating a novel learning experience that breaks ASL into small, sustainable daily progress
- Designing an application that clearly visualizes learning progress and improvement
- Building a solution that addresses accessibility bias while remaining efficient and scalable
- Securing authentication and the database with password hashing and email verification
We are especially proud of reframing ASL learning as something fun, modern, and achievable.
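The hashing piece of the authentication system follows a standard pattern: never store the plain password, store a random salt plus a slow hash. Silent Speak uses bcrypt; the sketch below swaps in the standard library's PBKDF2 as a stand-in so it runs with no dependencies, but the idea is the same.

```python
import hashlib
import hmac
import os

# Salted password hashing sketch. bcrypt handles salting internally;
# with PBKDF2 the salt is generated and stored explicitly.

def hash_password(password, salt=None):
    """Return (salt, digest); generates a fresh random salt if none given."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-hash with the stored salt and compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)
```

The constant-time comparison matters: a naive `==` on digests can leak timing information to an attacker.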
What we learned
We learned how to address accessibility bias by integrating AI models into a full-stack application. Along the way we gained valuable skills in MySQL, React, TypeScript, Python (for backend development), and version control.
What's next for Silent Speak
Next, we plan to:
- Expand lessons beyond the alphabet into full ASL words and phrases
- Improve AI accuracy and adaptability across different users
- Integrate a remote database solution (e.g., Microsoft Azure) for scalability
- Support long-term user accounts and learning history
- Extend support to additional sign languages
Built With
- bcrypt
- css
- express.js
- flask
- mediapipe
- mysql
- node.js
- python
- pytorch
- react
- typescript