Inspiration
We've all frantically plugged our Spanish homework into Google Translate at 11:55, desperately trying to meet the 11:59 deadline. But we couldn't find an affordable equivalent for American Sign Language.
What it does
ASL Translator is an IoT device that translates sign language into speech. Simply photograph yourself spelling out the word, then press the finish button. ASL Translator will translate your signs from ASL to English and speak your message aloud.
How we built it
- A neural network built with Keras and TensorFlow recognizes the ASL letters.
- OpenCV captures and preprocesses the images.
- Python scripts tie everything together and handle the push-button inputs.
- A Raspberry Pi 3 B+ with a PiCamera 1.3, a Bluetooth speaker, and push buttons.
Challenges we ran into
- There wasn't enough time to fully train the neural network, so accuracy is currently very low.
- We ran into issues integrating everything on the Raspberry Pi, so the system currently runs only on a laptop.
Accomplishments that we're proud of
- Going from never having used TensorFlow to building an entire neural network
- Connecting and configuring all the hardware components.
What we learned
- How to create and train a neural network
- We need to budget more time for training next time
- Integrating things on the Pi is really hard - next time, bring a bigger screen
- Installing packages on the Pi is slow - next time, try to preinstall common packages
What's next for ASLTranslator
- Fully training the network
- A bigger dataset covering whole words, common phrases, and facial expressions
- Auto-detecting letters in a video stream instead of individual photos
- Fully integrating the system on the Pi
Built With
- opencv
- python
- raspberry-pi
- tensorflow
