Inspiration
While exploring the templates in Lens Studio, we liked how the ASL machine-learning model worked and wanted to incorporate it into our idea. We wanted to show how easy and fun learning ASL can be, and we thought teaching ASL through a Snapchat Lens or the Spectacles was a cool concept.
What it does
When you tap the screen of the smart device, the program begins listening for keywords such as “dog.” Using speech recognition, the program detects these keywords and is triggered to teach the user how to fingerspell the keyword in American Sign Language. Each letter is displayed individually on the screen, and the next letter only appears after a few seconds of delay.
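The letter-by-letter behavior described above can be sketched in plain JavaScript. This is a simplified illustration of the sequencing logic only; the function and constant names (`buildSpellingSchedule`, `LETTER_DELAY_MS`) are made up for this example and are not the actual Lens Studio API, which handles timing through its own event system.

```javascript
// Assumed pause between letters, in milliseconds (illustrative value).
const LETTER_DELAY_MS = 2000;

// Build a display schedule: each letter of the detected keyword paired
// with the time (relative to the speech trigger) at which it appears.
function buildSpellingSchedule(keyword, delayMs) {
  return keyword
    .toUpperCase()
    .split("")
    .map((letter, i) => ({ letter: letter, showAtMs: i * delayMs }));
}

// Example: after speech recognition detects "dog", each entry in the
// schedule would drive showing the matching ASL hand-shape on screen.
const schedule = buildSpellingSchedule("dog", LETTER_DELAY_MS);
console.log(schedule);
```

In the real lens, each scheduled entry would swap in the image of the corresponding ASL letter; here the schedule is just data, which keeps the timing logic easy to test.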
How we built it
We started with Lens Studio's platform and its ASL Spelling and Speech Detection templates. Our project was written in JavaScript and runs on Snapchat and the Spectacles, and we tested our design on both. We shared the working file among each other by sending the zip file back and forth. With each team member's contributions and the help of the Snap mentors, we were able to build a working product after much trial and error.
Challenges we ran into
Some challenges we had to overcome were getting familiar with the Spectacles and coding in a language that was unfamiliar to us. Many fine details had to be adjusted in JavaScript, and we had to figure out how to improve the processing speed of our code. As we cleaned up and debugged, we realized there were many unnecessary lines of code that could've been removed earlier to reduce lag while running the program. One of the biggest problems we faced was the learning curve at the beginning of the day while we explored the features and tools of Lens Studio. Since it was a completely new platform that none of us had used before, it took us a while to get comfortable and figure out how we could utilize its features to bring our idea to life.
Accomplishments that we're proud of
We are happy to have created a working project as hackathon beginners. We are proud of trying something new, which helped us develop new abilities in using code and design to build something that runs on real devices.
What we learned
Through coding this program, we learned how to write JavaScript and utilized multiple Lens Studio features to combine the templates provided, adapting them to achieve our goals with the given technologies. This pushed us to think outside the box, especially while working with the Spectacles. It was the first time we had created a program directly connected to a hardware device, and it taught us to explore our imagination while staying realistic.
What's next for Speak2Sign
One way we can improve Speak2Sign is by using accuracy detection, rather than a fixed wait time, to advance from one letter to the next. We would also like to expand our vocabulary and teach users how to sign whole words and phrases in American Sign Language, moving beyond spelling alone to more complex sentences and terms that can be signed independently instead of depending solely on the ASL alphabet. On top of that, we would like to include images of each term or phrase alongside the signing motion to leave a deeper impression and help users remember more effectively.
Built With
- javascript
- lens-studio
- machine-learning
- speech-to-text

