Inspiration
Our main motivation for this project was to build a proof of concept for easy recognition and translation of ASL. The idea is to let people who cannot speak use sign language and have it converted to text, which can then be converted to speech using open-source software.
What it does
signLeap uses a Leap Motion controller to recognize letters of the ASL alphabet and convert them to English text.
How we built it
signLeap captures the user's hand data with the Leap Motion controller, then uses scikit-learn to classify each gesture against hand features that were recorded ahead of time as training data.
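The pipeline described above can be sketched as a standard scikit-learn classification setup. The feature layout below is an assumption for illustration: real Leap Motion frames would supply the fingertip positions and bone directions, and the choice of an SVM classifier is hypothetical, not necessarily the algorithm the team settled on.

```python
# Sketch: classify static hand poses with scikit-learn, assuming each
# Leap Motion frame has been flattened into a fixed-length feature vector.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in training data: one cluster of feature vectors per letter.
# In the real system these rows would come from recorded Leap Motion
# frames (e.g. fingertip coordinates), labeled with the signed letter.
n_letters, samples_per_letter, n_features = 5, 40, 30
X = np.vstack([
    rng.normal(loc=i, scale=0.3, size=(samples_per_letter, n_features))
    for i in range(n_letters)
])
y = np.repeat(list("ABCDE"), samples_per_letter)

# Hold out a test split to estimate recognition accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = SVC(kernel="rbf", gamma="scale")  # classifier choice is an assumption
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)

# At runtime, each incoming frame's feature vector is classified the
# same way: clf.predict(frame_features) yields the predicted letter.
prediction = clf.predict(X_test[:1])[0]
```

Because the classifier only sees whatever features are extracted from each frame, recognition quality depends directly on how discriminative those features are between similar letters.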
Challenges we ran into
The first challenge we ran into was that our hardware wasn't precise enough to detect slight changes in static hand gestures. This is one limitation of the project: it detects letters only when the hand is in a specific orientation relative to the sensor. Another drawback was that none of us had much experience with machine learning, so we had to do some research to decide on the best-suited algorithm for our problem. The last challenge was interfacing: we had trouble connecting the Leap Motion device to a web interface.
Accomplishments that we're proud of
Our project currently recognizes 20 of the 26 letters of the ASL alphabet with 95% accuracy. We believe this would improve considerably as the algorithm learns from more data points.
What we learned
Coming into the hackathon, we had no idea how machine learning works. Now we have a basic understanding of supervised classification problems.
What's next for signLeap
In the future, we plan on increasing the number of gestures that our application recognizes to include ASL phrases and numbers.