Echo-Sign is a real-time American Sign Language (ASL) interpreter that pairs high-speed computer vision with local multimodal AI. By transforming 3D hand landmarks into natural English text and speech, it serves as a digital bridge for accessibility.
- Real-Time Translation: Uses MediaPipe to track 21 hand landmarks at 30+ FPS.
- Hybrid AI Logic: Combines a Python-based physics engine (Math Layer) with Llama 3.2 (Ollama) to interpret complex gestures with high accuracy.
- Reverse Translation (Text-to-ASL): Type English words to see the corresponding ASL signs (GIFs) or fingerspelling fallbacks.
- Voice Output: Integrated Text-to-Speech (TTS) for hands-free communication.
- Privacy First: The entire translation engine runs locally on your machine using Ollama.
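To make the "Math Layer" concrete, here is a minimal sketch of how a physics-style rule can classify extended fingers from MediaPipe's 21 hand landmarks before the result is handed to the language model. The landmark indices follow MediaPipe's hand model (0 = wrist, 8 = index fingertip, 6 = index PIP joint, and so on); the helper names and threshold logic are illustrative, not Echo-Sign's actual code.

```python
# MediaPipe landmark indices for each fingertip and its PIP joint.
FINGERS = {
    "index": (8, 6),
    "middle": (12, 10),
    "ring": (16, 14),
    "pinky": (20, 18),
}

def extended_fingers(landmarks):
    """Return the set of fingers whose tip sits above its PIP joint.

    `landmarks` is a list of 21 (x, y) tuples in normalized image
    coordinates, where y grows downward -- so an extended, upright
    finger has tip_y < pip_y.
    """
    return {
        name
        for name, (tip, pip) in FINGERS.items()
        if landmarks[tip][1] < landmarks[pip][1]
    }

# Synthetic example: index finger up, all other joints low in the frame.
pts = [(0.5, 0.9)] * 21
pts[6] = (0.5, 0.6)   # index PIP
pts[8] = (0.5, 0.3)   # index tip above its PIP -> extended
print(extended_fingers(pts))  # {'index'}
```

A rule set like this gives the LLM a compact, symbolic description of the hand pose ("index extended, others curled") instead of raw coordinates, which is what makes the hybrid approach accurate and fast.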
- The Eye: MediaPipe & OpenCV (Hand Tracking)
- The Brain: Llama 3.2 via Ollama (Contextual Sign Interpretation)
- The Voice: pyttsx3 (Text-to-Speech)
- The Interface: Streamlit (Web Dashboard)
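The reverse-translation fallback described above can be sketched as a simple lookup: if a word has a known sign animation, show it; otherwise fall back to one fingerspelling image per letter. The dictionary contents and file paths below are placeholders, not Echo-Sign's actual assets.

```python
# Hypothetical word-to-GIF dictionary; real entries would point at the
# project's sign animation assets.
SIGN_GIFS = {"hello": "signs/hello.gif", "thanks": "signs/thanks.gif"}

def asl_assets(sentence):
    """Map each word to its sign GIF, or to per-letter fingerspelling images."""
    assets = []
    for word in sentence.lower().split():
        if word in SIGN_GIFS:
            assets.append(SIGN_GIFS[word])
        else:
            # Fingerspelling fallback: one image per alphabetic character.
            assets.extend(f"letters/{ch}.png" for ch in word if ch.isalpha())
    return assets

print(asl_assets("Hello Bob"))
# ['signs/hello.gif', 'letters/b.png', 'letters/o.png', 'letters/b.png']
```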
Prerequisites
- Install Ollama
- Pull the model:
ollama pull llama3.2
Clone & Install
git clone <your-repo-link>
cd Echo-Sign
python3.11 -m pip install -r requirements.txt
python3.11 -m streamlit run app.py