Bridegly
Inspiration
Communication barriers still affect millions of people every day. People who are Deaf, Hard of Hearing, or Blind often face challenges when trying to communicate through traditional phone calls or messaging platforms. Most communication tools assume that users can both hear and see, which excludes a large group of people from seamless real-time interaction.
Our inspiration was to create a system that bridges communication gaps between different accessibility needs. We wanted to build a platform where a Deaf user, a Blind user, and a hearing user could communicate naturally without requiring interpreters or specialized devices.
Bridegly was born from the idea that technology should adapt to people, not the other way around. By combining speech recognition, text conversion, and sign language interpretation, we aimed to create a single platform capable of translating between different forms of communication in real time.
What it does
Bridegly is a real-time accessibility communication platform that allows users with different abilities to communicate seamlessly during a call.
The system translates between three communication formats:
- Speech → Text
- Text → Speech
- Sign Language → Text/Voice
For example:
- A Blind user speaks, and the app converts their speech into live captions for the Deaf user.
- A Deaf user responds using sign language or quick phrases, and the app converts it into spoken audio.
- A hearing user can interact naturally with either user without needing to know sign language.
This creates a live communication bridge that removes barriers and enables inclusive conversations.
How we built it
Bridegly consists of three main components: a mobile client, a backend API, and a real-time communication infrastructure.
Mobile Application
We built the mobile app using Flutter, allowing us to support both Android and iOS from a single codebase. The mobile app manages:
- Camera input for sign language recognition
- Microphone input for speech recognition
- Text-to-speech playback
- Real-time captions
- Call interface and accessibility-focused UI
Backend
The backend was built with Node.js and Express, responsible for:
- User authentication with JWT
- Call session management
- Generating real-time communication tokens
- Handling user status and call states
We used SQLite for lightweight data storage, which allowed us to quickly set up a database without complex configuration.
Real-time Calling
For the video and audio layer, we integrated Twilio Programmable Video, which builds on WebRTC to create secure, low-latency calls between users.
Sign Language Recognition
We implemented basic sign recognition using hand-landmark detection with MediaPipe-style models. Hand positions and finger angles are analyzed to classify a small set of gestures. To improve reliability, we added a frame-stability mechanism: a gesture must remain consistent across multiple consecutive frames before it is accepted.
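The frame-stability idea can be sketched in a few lines. This is a simplified illustration, not the production classifier: the class name and the threshold of 8 frames are assumptions for the example.

```javascript
// Sketch of the frame-stability mechanism: a gesture label is only
// "accepted" after it has been seen for N consecutive frames, filtering
// out the single-frame flickers that raw per-frame classification produces.
class StableGestureFilter {
  constructor(requiredFrames = 8) {
    this.requiredFrames = requiredFrames;
    this.current = null; // label seen on the last frame
    this.count = 0;      // consecutive frames with that label
  }

  // Feed one per-frame classification; returns the label once stable,
  // or null while the gesture is still settling.
  push(label) {
    if (label === this.current) {
      this.count += 1;
    } else {
      this.current = label;
      this.count = 1;
    }
    return this.count >= this.requiredFrames ? this.current : null;
  }
}
```

Raising `requiredFrames` trades responsiveness for accuracy, which is a useful knob when camera framerates differ between devices.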
Challenges we ran into
One of the biggest challenges was real-time performance. Speech recognition, sign detection, and video calling all have to run simultaneously, and any added latency disrupts the natural flow of conversation.
Another challenge was sign language recognition accuracy. Sign languages are complex and involve subtle hand movements, orientations, and facial expressions. Building a reliable recognition system within a short development timeframe required simplifying the model and focusing on a limited set of gestures.
We also faced accessibility design challenges. Creating an interface that works well for Deaf, Blind, and hearing users required careful UI decisions. For example:
- Blind users rely heavily on audio feedback.
- Deaf users require clear visual captions.
- Camera usage must remain simple and intuitive.
Balancing these needs was one of the most interesting parts of the project.
Accomplishments that we're proud of
One of our biggest accomplishments is successfully building a working real-time communication bridge that connects different accessibility needs within a single platform.
We are proud that Bridegly can:
- Perform real-time speech-to-text captioning
- Convert responses into text-to-speech audio
- Detect basic sign language gestures
- Enable live calls between users with different abilities
Most importantly, we built a system that demonstrates how accessibility can be integrated directly into communication technology rather than added as an afterthought.
What we learned
This project taught us valuable lessons across several areas:
Accessibility Design:
Designing for accessibility requires understanding how different users interact with technology and ensuring interfaces adapt to those needs.
Real-time Systems:
We learned how to integrate real-time communication systems and manage latency-sensitive features like speech recognition and video streaming.
Computer Vision:
Implementing sign detection gave us hands-on experience with gesture recognition and the challenges of translating visual signals into structured data.
System Architecture:
We learned how to combine mobile applications, backend APIs, and cloud communication services into a cohesive system.
What's next for Bridegly
Bridegly is currently a prototype, but we see significant potential for expanding the platform.
Future improvements include:
- Supporting the full sign language alphabet and words
- Training machine learning models for more accurate sign recognition
- Adding AI-powered translation between multiple spoken languages
- Introducing 3D sign avatars for text-to-sign communication
- Supporting group calls and messaging
- Integrating offline sign recognition
Our long-term vision is to transform Bridegly into a universal accessibility communication platform that ensures no one is excluded from everyday conversations.
Built With
- dart
- express.js
- flutter
- mlkit
- node.js
- python