Demo Link: link

Inspiration

Over half a million people in the United States use American Sign Language (ASL) as their primary means of communication. Yet society at large lacks awareness of ASL, from proper usage down to its basic principles, creating harsh equity divides for individuals who are deaf or hard of hearing. Communication barriers that could be overcome instead hold back valuable members of our community, leading to misunderstandings, frustration, inaccessible services, educational challenges, and isolation. Close family members and friends often take up the challenge of learning ASL to connect with deaf individuals, and this mitigates many of these problems; but only with an entire community of proficient signers can deaf and hard-of-hearing people take their rightful place in society. The goal of SiLingo is to increase awareness and proficiency of American Sign Language by creating gamified learning software in the spirit of popular video games such as Just Dance and Guitar Hero. Research consistently shows that more enjoyable lessons lead to better learning, and by integrating entertaining elements we hope to accelerate ASL's acceptance into the mainstream of today's society.

What it does

SiLingo is an application that gamifies the process of learning ASL. The current prototype combines two major components. The first is a simple user interface built with Flutter. Behind that front end sits the recognition model, implemented in Python: it takes in hand gestures in real time and produces an accurate graph of 20 key points along the human hand. From that keypoint data, SiLingo can reasonably predict which gesture the user intended.
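The keypoint stage can be sketched roughly as follows. This is an illustrative sketch, not our exact code: the wrist-relative normalization and the flat 20-point (x, y) layout are assumptions about how such a pipeline is typically prepared before classification.

```python
import numpy as np

def normalize_keypoints(points):
    """Translate keypoints so the wrist (point 0) sits at the origin,
    then scale by the largest coordinate magnitude, making the gesture
    position- and size-invariant before it reaches the classifier."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts[0]                      # wrist at the origin
    scale = np.abs(pts).max() or 1.0        # avoid dividing by zero
    return (pts / scale).flatten()          # 20 (x, y) points -> 40 features

# Hypothetical usage: 20 (x, y) keypoints from the hand tracker
keypoints = [(i * 0.01 + 0.3, i * 0.02 + 0.5) for i in range(20)]
features = normalize_keypoints(keypoints)
print(features.shape)  # (40,)
```

The flattened feature vector is what the prediction model consumes; normalizing first means the same sign is recognized regardless of where the hand sits in the frame.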

How we built it

SiLingo is built from several pieces of software, including Flutter, Android Studio, and TensorFlow. The user interface, which consists of a splash screen and a vibrant start menu, was built with Flutter using VS Code and Android Studio. A machine learning model was trained on thousands of sample photos and can now recognize several ASL gestures fairly reliably.
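The training side can be sketched in Keras as below. The layer sizes and class count here are illustrative assumptions, not the exact network we shipped; the shape of the workflow (build, compile, fit on labeled samples) is what carried over.

```python
import numpy as np
import tensorflow as tf

def build_model(num_features=40, num_classes=26):
    """Small dense classifier mapping hand features to ASL letters.
    Layer widths are placeholders, not our final tuned values."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would use (features, label) pairs, e.g.:
# model.fit(X_train, y_train, epochs=30, validation_split=0.1)
probs = model.predict(np.zeros((1, 40)), verbose=0)
print(probs.shape)  # (1, 26)
```

The softmax output gives one probability per letter, so the predicted gesture is simply the highest-probability class.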

Challenges we ran into

As with most machine learning projects, gathering and preparing data was the toughest part of building the model. At first, our team wrote a Python script that took snapshots of the gesture for a given letter, but the resulting model proved extremely prone to incorrect predictions, so we adapted by merging our own collected data with a public dataset from Kaggle. While working on the TensorFlow model we hit many challenges, from not having enough computing resources to train on our dataset, to learning a brand-new library with complex methods. Getting a model that actually learned was difficult and required multiple iterations, with varying node counts and configurations, before we had one that identified sign language correctly over 85% of the time.

Working with Flutter was also a whole new experience, since it uses its own language, Dart. None of the team was proficient in UI design or front-end development, and the resulting work took an exceptional amount of time to adjust to. We also discovered that TensorFlow/camera support was not available for the web build we wanted, so part of the team shifted to web development on Node.js while the remaining members continued with Flutter on the Android emulator. Although we weren't able to integrate the TensorFlow model into Flutter to our standard, we still managed to create something.
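The merge step for the two data sources looked roughly like this sketch. The folder layout (`<root>/<letter>/<image>.jpg`) and function name are hypothetical; the point is combining our captured snapshots with the Kaggle set into one labeled list before training.

```python
import tempfile
from pathlib import Path

def build_manifest(*dataset_roots):
    """Merge several datasets laid out as <root>/<letter>/<image>.jpg
    into one combined list of (image_path, letter_label) pairs."""
    manifest = []
    for root in dataset_roots:
        for letter_dir in sorted(Path(root).iterdir()):
            if letter_dir.is_dir():
                for img in sorted(letter_dir.glob("*.jpg")):
                    manifest.append((str(img), letter_dir.name))
    return manifest

# Demo with throwaway folders standing in for our snapshots + Kaggle data
tmp = Path(tempfile.mkdtemp())
for root, letters in [("ours", "AB"), ("kaggle", "BC")]:
    for letter in letters:
        d = tmp / root / letter
        d.mkdir(parents=True)
        (d / "0.jpg").touch()

manifest = build_manifest(tmp / "ours", tmp / "kaggle")
print(len(manifest))  # 4
```

Keying both sources by letter means overlapping classes (like "B" above) simply pool their samples, which is what rescued our noisy self-collected data.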

Accomplishments that we're proud of

We overcame difficulties working beyond the scope of our traditional specializations and applied our new skills effectively in our programs. One major challenge was building a machine learning model: none of our members had any background in machine learning, so learning the entire workflow in under 24 hours was especially tough. In the process, though, we developed a solid understanding of how machine learning works and what goes into a good model, and we trained our own TensorFlow model that identifies a sign language character correctly over 85% of the time. Another major challenge was implementing the Flutter interface. As with machine learning, several team members had little to no experience with front-end technologies, especially mobile applications. This pushed our capabilities as programmers, and we learned a great deal about managing dependencies, design, and overall user experience.

What we learned

We learned how to create a TensorFlow model and how to experiment with its design, including different node counts per layer and various layer configurations, while saving high-accuracy weights across runs. We then learned how to convert the default Keras model to both tflite_flutter and TensorFlow.js formats for use in our Android Studio and Flutter-based applications. We also learned how to design a basic user interface in several ways, including Flutter with Dart, which was a new experience for us, plus some Java and Python for alternate iterations.
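The Keras-to-TFLite step can be sketched as below. The tiny model here is a stand-in for our trained network; the conversion call itself is the standard TensorFlow Lite API.

```python
import tensorflow as tf

# Stand-in model -- in practice this would be the trained classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40,)),
    tf.keras.layers.Dense(26, activation="softmax"),
])

# Convert the in-memory Keras model to a TFLite flatbuffer that the
# tflite_flutter plugin can load on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The resulting `model.tflite` file is bundled as a Flutter asset; the TensorFlow.js path works analogously via the `tensorflowjs` converter tool.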

What's next for SiLingo

Significant advancements remain before SiLingo reaches its full potential. In the future, we would like to make the learning experience far more interactive by adding more study modes that gamify learning ASL and cater to a broader audience. As computer vision advances rapidly, we also anticipate developing and training even more refined and accurate models, and we hope to expand SiLingo's vocabulary as the model improves. Ultimately, we hope to create a broadly appealing product that makes people enjoy learning ASL.

Built With

flutter · dart · python · tensorflow · android-studio · node.js
