Inspiration:

🌍 Learning sign language without a teacher is hard. It’s not just about memorizing gestures, but mastering motion, orientation, and expression. We were inspired by how many people want to communicate inclusively but struggle to learn sign language without real-time feedback.

💡 That’s why we built SignAptic. It's an AI-powered tutor that bridges the gap between the Deaf/Hard-of-Hearing (DHH) and hearing communities through sign recognition, interactive education, and generative motion feedback.

What It Does:

SignAptic helps users learn and practice sign language interactively.

📷 Uses your device camera to detect and analyze your gestures and facial expressions in real time.

🤖 Compares them to the correct sign using a mathematical scoring model that measures the distance between corresponding joints.
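A minimal sketch of that kind of joint-distance score (the landmark format and scoring function here are illustrative assumptions, not the app's actual model):

```python
import math

def sign_score(user_joints, reference_joints):
    """Score a sign attempt against a reference pose.

    Each pose is a list of (x, y) landmark tuples. A smaller mean
    distance between corresponding joints yields a score closer to 1.0.
    """
    assert len(user_joints) == len(reference_joints)
    total = 0.0
    for (ux, uy), (rx, ry) in zip(user_joints, reference_joints):
        total += math.hypot(ux - rx, uy - ry)  # Euclidean joint distance
    mean_dist = total / len(user_joints)
    # Map the mean distance into a 0..1 score (1.0 = perfect match).
    return 1.0 / (1.0 + mean_dist)
```

An identical pose scores 1.0, and the score decays smoothly as the user's joints drift from the reference, which makes it easy to set a "close enough" threshold for feedback.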

💬 Gives instant, thorough feedback in your preferred sign language, combining the mathematical score with the Gemini model we tuned.

🎥 Shows a video, generated by Google's Veo model, of a translated sentence being signed, to aid understanding and communication across languages.

🌐 Supports multiple languages to make sign learning accessible worldwide.

The result: A personal AI tutor that teaches sign language like a real instructor. It's visual, responsive, and inclusive.

How We Built It:

💻 Frontend: Built with Flutter for smooth, cross-platform performance on Android, iOS, and Web. Deployed on Microsoft Azure for easy access.

👁️ Vision Model: Used MediaPipe-style hand and pose tracking to detect sign accuracy in real time.

⚙️ Backend: Fed data from the vision model into the Gemini API to generate feedback.

🎞️ Generative Model: Used Google Vertex AI Studio to generate videos of the translated sign from input in any language.

🧠 LLM Layer: Connected to a multilingual GPT-based model to explain mistakes conversationally and provide learning tips.

🔄 Data Flow: 🧍 User → 📷 Camera Capture → 🤖 Pose Detection → 📊 Model Comparison → 💬 LLM Feedback → 🎥 Animated Correction
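The flow above can be sketched as a simple pipeline. Every function here is a hypothetical stand-in for the real component (camera, pose model, scorer, LLM), just to show how the stages chain together:

```python
def capture_frame():
    """📷 Camera capture (stubbed)."""
    return "frame"

def detect_pose(frame):
    """🤖 Pose detection (stubbed): returns (x, y) landmarks."""
    return [(0.1, 0.2), (0.3, 0.4)]

def compare_pose(pose, reference_pose):
    """📊 Model comparison (stubbed): similarity score in [0, 1]."""
    return 0.85

def llm_feedback(score):
    """💬 LLM feedback (stubbed): turn a score into a coaching tip."""
    return "Perfect!" if score >= 0.9 else "Close! Adjust your hand position."

def run_pipeline(reference_pose):
    """Run one user attempt through the full feedback loop."""
    frame = capture_frame()
    pose = detect_pose(frame)
    score = compare_pose(pose, reference_pose)
    return llm_feedback(score)
```

The real system replaces each stub with a live component (MediaPipe-style tracking, the scoring model, Gemini), but the shape of the loop stays the same.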

Challenges We Ran Into:

🌓 Lighting and angle issues: The model struggled with inconsistent lighting and occluded hands.

🕒 Latency: Real-time feedback required throttling image captures to maintain app responsiveness.

🤚 Gesture ambiguity: Some signs (like good vs. thank you) were difficult to distinguish because their motions are so similar, both within and across sign languages.

🌍 Language adaptation: Finding datasets for multiple sign languages, and training the model to provide natural feedback in every language we implemented, was difficult.

🔌 Integration complexity: Syncing three different AI components (vision, LLM, motion) smoothly.

Accomplishments That We're Proud Of:

🚀 Built a fully functional Flutter frontend integrated with real camera capture and API calls.

🧠 Created a modular system ready for live AI model integration.

🤝 Designed with inclusivity at the core, not as an afterthought. This included making the UI easy to access and read.

🌐 Developed a backend that simulates AI responses.

💬 Delivered an intuitive UI that anyone can use — from beginners to experienced signers.

What We Learned:

👁️‍🗨️ Integrating real-time computer vision in Flutter was challenging but incredibly rewarding.

🧠 Discovered that LLMs can be amazing teachers when guided with structured, context-aware prompts.

🎨 Realized the importance of UI simplicity. Accessibility starts with design that “just makes sense.”

🤝 Learned how working across disciplines (AI, design, accessibility) teaches far more than any one field alone.

What's next for SignAptics:

🔊 Add speech-to-sign and sign-to-text translation for two-way communication.

🧍 Train on region-specific datasets for ASL, BSL, ISL, and more (adding dialects and regional slang).

🕶️ Introduce learning modes compatible with Meta Glasses for immersive gesture practice.

📈 Partner with Deaf education organizations to deploy in real classrooms.

🌐 Launch a beta on app stores to expand global accessibility.
