
Echo-Sign

Echo-Sign is a real-time American Sign Language (ASL) interpreter that leverages high-speed computer vision and local multimodal AI to bridge the communication gap. By transforming 3D hand landmarks into natural English text and speech, Echo-Sign serves as a digital bridge for accessibility.

Key Features

  • Real-Time Translation: Uses MediaPipe to track 21 hand landmarks at 30+ FPS.
  • Hybrid AI Logic: Combines a Python-based physics engine (Math Layer) with Llama 3.2 (Ollama) to interpret complex gestures with high accuracy.
  • Reverse Translation (Text-to-ASL): Type English words to see the corresponding ASL signs (GIFs) or fingerspelling fallbacks.
  • Voice Output: Integrated Text-to-Speech (TTS) for hands-free communication.
  • Privacy First: The entire translation engine runs locally on your machine using Ollama.
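The "Math Layer" physics engine mentioned above is not detailed in this README. As a rough illustration of the idea (all names and thresholds here are hypothetical, not the project's actual code), the sketch below classifies fingers as extended by comparing tip and PIP-joint landmarks, assuming MediaPipe's standard 21-landmark layout with normalized coordinates where y grows downward:

```python
# Hypothetical sketch of a "Math Layer" rule: decide which fingers are
# extended from MediaPipe's 21 hand landmarks (normalized coordinates,
# y grows downward, so a smaller y means "higher" in the frame).

# (tip, pip) landmark indices per finger in MediaPipe's hand model.
FINGERS = {
    "index":  (8, 6),
    "middle": (12, 10),
    "ring":   (16, 14),
    "pinky":  (20, 18),
}

def extended_fingers(landmarks):
    """landmarks: list of 21 (x, y, z) tuples for one hand.
    Returns the names of fingers whose tip sits above (smaller y
    than) the corresponding PIP joint — a crude "extended" test."""
    return [name for name, (tip, pip) in FINGERS.items()
            if landmarks[tip][1] < landmarks[pip][1]]
```

A geometric pre-filter like this can hand the language model a compact symbolic description ("index extended, others curled") instead of raw coordinates.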

Tech Stack

  • The Eye: MediaPipe & OpenCV (Hand Tracking)
  • The Brain: Llama 3.2 via Ollama (Contextual Sign Interpretation)
  • The Voice: pyttsx3 (Text-to-Speech)
  • The Interface: Streamlit (Web Dashboard)
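The reverse-translation path listed under Key Features (whole-word GIF lookup with a fingerspelling fallback) could be sketched roughly as follows. The GIF catalogue and file names below are invented stand-ins, not the project's real data:

```python
# Illustrative sketch of the Text-to-ASL fallback logic: look a word up
# in a sign-GIF library; if no whole-word sign exists, fall back to
# per-letter fingerspelling. The catalogue is a made-up placeholder.
SIGN_GIFS = {"hello": "hello.gif", "thank you": "thank_you.gif"}

def to_asl(word, gifs=SIGN_GIFS):
    """Return ('gif', path) when a whole-word sign is available,
    otherwise ('fingerspell', [letters]) so the UI can render the
    letter signs one by one."""
    key = word.strip().lower()
    if key in gifs:
        return ("gif", gifs[key])
    return ("fingerspell", [c for c in key if c.isalpha()])
```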

Installation & Setup

  1. Prerequisites

    • Install Ollama
    • Pull the model: ollama pull llama3.2
  2. Clone & Install

    git clone <your-repo-link>
    cd Echo-Sign
    python3.11 -m pip install -r requirements.txt
  3. Run the App

    python3.11 -m streamlit run app.py
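Once Ollama is running, the translation engine can query the model entirely on the local machine. The README does not show Echo-Sign's actual prompt or client code; the sketch below only demonstrates how a request to Ollama's standard local REST endpoint (`/api/generate` on port 11434) might be assembled — the prompt wording is an invented example:

```python
import json
import urllib.request

# Ollama's default local endpoint; no data leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(gesture_description, model="llama3.2"):
    """Assemble the JSON body for Ollama's /api/generate endpoint.
    The prompt wording is an illustrative guess, not the app's real prompt."""
    return {
        "model": model,
        "prompt": ("Interpret this ASL hand shape as a single English word: "
                   + gesture_description),
        "stream": False,  # ask for one complete JSON response
    }

def interpret(gesture_description):
    """POST the payload to the local Ollama server and return its text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(gesture_description)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `interpret("index finger extended, others curled")` requires the Ollama server to be running locally with the llama3.2 model pulled.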

