Inspiration
The inspiration behind InSignia stemmed from a profound desire to bridge the communication gap faced by the deaf and mute community in India. With over 63 million individuals experiencing hearing impairments, there is a pressing need for innovative solutions that promote inclusivity. The rapid advancements in AI, machine learning, and wearable technology presented a unique opportunity to create a transformative tool that not only translates Indian Sign Language (ISL) into text and speech but also fosters learning and broader societal awareness.
What it does
InSignia is a comprehensive AI-powered platform that provides real-time translation of Indian Sign Language (ISL) into text, speech, and gestures, making communication seamless for the deaf and mute community. Key features include:
- Real-Time ISL Translation: Instantly converts ISL gestures into text and speech, and converts spoken or written input back into sign representations.
- Multi-Language Support: Offers translations in multiple Indian languages for broader accessibility.
- On-Call Translation: Enables real-time translation during phone and video calls, breaking barriers in remote communication. This feature utilizes a combination of voice-over-IP (VoIP) technology, AI-driven gesture recognition, and natural language processing. The system captures live ISL gestures via camera or connected gloves, converts them into text or synthesized speech, and seamlessly integrates with the call interface to ensure continuous translation without disrupting the communication flow.
- ISL Learning Tool: An interactive platform that makes learning ISL engaging and effective for non-signers and educational institutions.
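The on-call flow described above (capture gestures, classify them, emit caption text for the call overlay and speech synthesis) can be sketched end to end. Everything here is an illustrative stand-in, not InSignia's actual code: `GESTURE_LABELS`, `classify_gesture` (a trivial rule in place of the real AI recognizer), and the 0.5 confidence threshold are all assumptions.

```python
# Minimal sketch of the on-call translation pipeline.
# All names and thresholds are illustrative placeholders.

from dataclasses import dataclass

# Hypothetical mapping from a recognized gesture ID to an ISL gloss/word.
GESTURE_LABELS = {0: "hello", 1: "thank you", 2: "help"}

@dataclass
class FrameResult:
    gesture_id: int
    confidence: float

def classify_gesture(sensor_frame: list) -> FrameResult:
    """Stand-in for the AI gesture recognizer: here, a trivial rule
    that picks the index of the strongest flex reading."""
    idx = max(range(len(sensor_frame)), key=lambda i: sensor_frame[i])
    return FrameResult(gesture_id=idx % len(GESTURE_LABELS), confidence=0.9)

def frames_to_text(frames: list) -> str:
    """Convert a sequence of sensor frames into caption text that the
    call interface would overlay and hand to speech synthesis."""
    words = []
    for frame in frames:
        result = classify_gesture(frame)
        if result.confidence >= 0.5:  # drop low-confidence frames
            words.append(GESTURE_LABELS[result.gesture_id])
    return " ".join(words)
```

In the real system the classifier runs continuously on the live camera or glove stream, so the caption text grows incrementally rather than per batch as in this sketch.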
How we built it
InSignia integrates both software and hardware solutions for a robust, real-time translation experience:
- Hardware Solution:
  - Wearable Gloves with Flex Sensors: Specialized gloves embedded with flex sensors capture hand and finger movements for ISL gestures. The flex sensors, made from resistive materials, detect changes in resistance when bent, providing accurate readings of hand posture. They offer high sensitivity across a bending range of 0° to 180°, ensuring precise gesture recognition, and their rapid response time and minimal power consumption make them ideal for continuous use in wearable devices.
  - Microcontroller Circuit: Processes sensor data to detect bending and positional changes.
  - Bluetooth Module: Ensures seamless wireless transmission of gesture data to the software.
  - Power Management System: Supports long-lasting, energy-efficient usage.
- Software Solution:
  - Real-Time Gesture Recognition Engine: AI models built on Convolutional Neural Networks (CNNs) accurately recognize hand signs.
  - Machine Learning Training: Gesture datasets were labeled and used to train the models to distinguish intricate ISL variations.
  - Speech Synthesis and Text Output: Converts recognized signs into spoken words and text using natural language processing. The system leverages speech synthesis libraries such as Google Text-to-Speech (gTTS) and pyttsx3, providing clear, customizable vocal output with multiple voices, languages, and adjustable speech rates.
  - Multi-Language Translation APIs: Enable translation into regional Indian languages, enhancing accessibility.
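The hardware path can be made concrete with a small sketch: a flex sensor in a voltage divider is read as a voltage, mapped to a bend angle, and the five finger angles are framed into a compact Bluetooth packet. The specific values are assumptions for illustration (roughly 25 kΩ flat to 100 kΩ fully bent is typical of common 2.2" flex sensors; the 47 kΩ divider resistor, 3.3 V supply, and packet layout are invented here), not InSignia's actual parameters.

```python
import struct

# Illustrative constants; real values vary per sensor and circuit design.
R_FLAT, R_BENT = 25_000.0, 100_000.0   # assumed flex-sensor resistance range
R_FIXED, VCC = 47_000.0, 3.3           # assumed divider resistor and supply

def adc_voltage_to_angle(v_out: float) -> float:
    """Map the divider output voltage to a 0°-180° bend estimate,
    assuming resistance varies linearly with bend angle."""
    # Divider: v_out = VCC * R_FIXED / (R_FIXED + r_flex)
    r_flex = R_FIXED * (VCC / v_out - 1.0)
    frac = (r_flex - R_FLAT) / (R_BENT - R_FLAT)
    return max(0.0, min(1.0, frac)) * 180.0

def pack_frame(angles: list) -> bytes:
    """Frame five finger angles for Bluetooth transmission:
    1-byte header, five little-endian 16-bit values in tenths of a
    degree, and a 1-byte additive checksum (12 bytes total)."""
    payload = struct.pack("<5H", *(int(a * 10) for a in angles))
    checksum = sum(payload) & 0xFF
    return b"\xAA" + payload + bytes([checksum])
```

Sending fixed-size binary frames rather than text keeps the Bluetooth link's bandwidth and latency low, which matters for the real-time recognition loop.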
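On the software side, the step from recognized signs to speech can be sketched as follows. The sentence assembly is an illustrative simplification (real NLP post-processing such as word ordering is omitted), and pyttsx3, one of the TTS engines named above, is only invoked if it is installed, so the sketch degrades gracefully on machines without audio.

```python
def signs_to_sentence(glosses: list) -> str:
    """Join recognized ISL glosses into a speakable sentence.
    Real NLP post-processing (word order, inflection) is omitted."""
    if not glosses:
        return ""
    text = " ".join(glosses)
    return text[0].upper() + text[1:] + "."

def speak(text: str, rate: int = 150) -> bool:
    """Speak the text with pyttsx3 if available; return whether audio ran."""
    try:
        import pyttsx3
    except ImportError:
        return False
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)  # adjustable speech rate
    engine.say(text)
    engine.runAndWait()
    return True
```

For online multi-voice output, `speak` could be swapped for gTTS, which renders the same text to an MP3 via Google's service instead of a local engine.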
Challenges we ran into
- Complex Gesture Distinctions: Differentiating subtle hand movements required refined sensor calibration and robust data preprocessing.
- Hardware Integration: Synchronizing sensor data with AI-based gesture recognition in real time without latency.
- Scalability: Ensuring the system can handle additional languages and more gestures as it evolves.
- On-Call Translation: Developing a seamless interface for real-time ISL conversion during live audio and video calls.
Accomplishments that we're proud of
- Delivering a fully functional prototype that demonstrates real-time ISL translation to text and speech with high accuracy.
- Creating specialized wearable gloves that enhance gesture detection while being comfortable and lightweight.
- Implementing real-time on-call translation capabilities to support dynamic communication scenarios.
- Building a user-friendly ISL learning platform that promotes greater adoption of sign language among non-signers.
What we learned
- Interdisciplinary Integration: Combining hardware engineering with AI software for a cohesive product was a rewarding challenge.
- Human-Centric Design: Engaging with the deaf and mute community provided valuable insights into usability and accessibility.
- Machine Learning Optimization: Training models to improve recognition rates reinforced the importance of diverse datasets and iterative testing.
What's next for InSignia
- Improved Sensor Technology: Enhancing glove sensitivity and durability with next-generation flex sensors.
- AI Personalization: Customizable gesture recognition models tailored to individual users' signing styles.
- Voice and Face Recognition: Integrating visual cue detection for lip-reading assistance and advanced voice synthesis.
- Expanded Market Reach: Collaborating with government bodies, healthcare providers, and educational institutions to deploy InSignia across India.
- Cloud and Mobile Integration: Offering cloud-based gesture processing for scalable services on platforms such as Google Cloud and AWS, which provide robust machine learning and storage capabilities. A companion mobile app for Android and iOS, built with a cross-platform framework such as Flutter or React Native, will bring InSignia to a broader audience through a user-friendly interface.

