Multi-language sign language recognition system using neural networks, MediaPipe landmarks, and GPU optimization.
This project recognizes sign language across two alphabet sets:
- ASL (American Sign Language)
- ISL (Indian Sign Language)
```
SignLanguageProject_Pro/
├── data/
│   ├── raw/               # Original datasets
│   │   ├── ASL/           # Kaggle ASL Alphabet
│   │   └── ISL/           # Kaggle ISL Alphabet
│   └── processed/         # Extracted landmarks
│
├── models/                # Saved model weights
│
├── preprocess.py          # Image -> landmarks conversion
├── train.py               # GPU-optimized training
├── main.py                # Live recognition app
├── check_npy.py           # Quick NPY inspection utility
├── requirements.txt
└── README.md
```
```bash
pip install -r requirements.txt
python preprocess.py
python train.py
python main.py
```
Figure 1: The MLSLRS OpenCV local interface performing real-time inference on sign sequences.
Figure 2: The web-migrated deployment running in the browser with the glassmorphic UI.
- Framework: TensorFlow 2.15+
- Input: Hand landmarks (MediaPipe)
- Output: Sign prediction with confidence scores
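To make the input/output contract concrete, here is a minimal Keras sketch of a landmark classifier. The layer sizes and the 26-class output (one per alphabet letter) are illustrative assumptions; the actual architecture lives in `train.py`.

```python
import tensorflow as tf

NUM_LANDMARKS = 21            # MediaPipe's hand model emits 21 points
FEATURES = NUM_LANDMARKS * 3  # x, y, z per landmark -> 63-dim vector
NUM_CLASSES = 26              # one class per letter (assumption)

# Small dense network over flattened landmark coordinates.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Because the model consumes 63 numbers rather than raw pixels, training and inference stay light enough for real-time use even without a GPU.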
Edit the following files to customize:
- `train.py` - Model architecture and training parameters
- `preprocess.py` - Dataset paths and preprocessing options
- `main.py` - Live app settings and confidence thresholds
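As an example of the kind of setting you might tune in `main.py`, here is a minimal sketch of confidence gating: a prediction is emitted only when the top score clears a threshold. The `gate` function name and the 0.8 default are illustrative assumptions, not the project's actual code.

```python
def gate(probs, labels, threshold=0.8):
    """Return (label, score) if the top score clears the threshold,
    else (None, score) to signal an uncertain frame.

    The 0.8 threshold is an illustrative assumption."""
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return labels[best], probs[best]
    return None, probs[best]  # below threshold: suppress the prediction

label, score = gate([0.05, 0.90, 0.05], ["A", "B", "C"])
# label == "B", score == 0.9
```

Raising the threshold trades responsiveness for fewer false positives in the live app.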
- Landmarks are extracted using MediaPipe Hand Landmarker
- Temporal smoothing reduces jitter in sequences
- Models saved in the native Keras format (.keras)
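The temporal smoothing mentioned above can be sketched as a majority vote over the last few frame predictions, so a single misclassified frame does not flip the displayed sign. The class name and window size here are illustrative assumptions, not the project's actual implementation.

```python
from collections import Counter, deque

class PredictionSmoother:
    """Majority vote over a sliding window of frame predictions."""

    def __init__(self, window=10):
        # deque with maxlen drops the oldest prediction automatically
        self.history = deque(maxlen=window)

    def update(self, label):
        """Record the newest frame's label and return the majority label."""
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]

smoother = PredictionSmoother(window=5)
for frame_label in ["A", "A", "B", "A", "A"]:
    stable = smoother.update(frame_label)
# stable == "A": the single jittery "B" frame is voted out
```

A larger window yields steadier output at the cost of a longer lag before a genuine sign change is reflected.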
Custom Project
Dhanush Alla