What track are we submitting to?

The health track.

What's our inspiration?

More than 70 million people around the world use sign language to communicate. ASL (American Sign Language) is by far the most commonly used sign language; in the US alone, approximately half a million people rely on it to communicate. Beyond the deaf community, however, it is not well known. This poses a significant communication barrier for deaf people, restricting their access to opportunities and conveniences that many of us take for granted. At the same time, ASL's intricate grammar and signs, paired with a lack of opportunities to learn, deter many hearing people from ever attempting it. Using technology, we sought to close this communication gap between people who use ASL and those who do not. Our solution: Manus.AI.

What solutions currently exist on the market?

There are no effective real-time ASL-to-English translation solutions on the market. The two main related products currently available are: 1) English-to-ASL translators. While effective for one-way communication, they do not translate from ASL to English, making them only a partial solution. 2) ASL Sign Language Pocket Sign. This application has an extensive database of signs but is ineffective in practice: users must manually select one sign at a time, a process too slow and tedious for real-time ASL communication, where multiple signs need to be translated in rapid succession.

What is your product and how did you build it?

Manus.AI is an innovative web app that combines computer vision, neural networks, and an easy-to-understand user interface to translate ASL into text, enabling seamless communication between deaf individuals and non-ASL speakers. When you place your hand in front of your device, Manus.AI's object-detection models quickly translate the sign into text. Our program streamlines the collection and labeling of ASL sign data, which not only keeps our neural networks accurate but also lets us quickly extend the dataset to cover more ASL signs. Currently, the dataset includes many photos of each sign (taken at different angles and distances), allowing our neural network to achieve solid accuracy. In addition, our code automatically adjusts the background of training photos to further improve accuracy.
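The background adjustment on training photos could look something like the sketch below. Everything here is hypothetical (the function name, the chroma-key-style color mask, and the tolerance value are our assumptions, not the actual Manus.AI pipeline): it simply swaps pixels close to a known background color for pixels from a new background image, so the model sees each sign against varied backgrounds.

```python
import numpy as np

def replace_background(image, new_background, bg_color=(0, 255, 0), tol=30):
    """Hypothetical background augmentation sketch.

    Pixels within `tol` (summed per-channel distance) of `bg_color`
    are treated as background and replaced with the corresponding
    pixels from `new_background`. Both images are HxWx3 uint8 arrays.
    """
    # Per-pixel distance from the assumed background color
    diff = np.abs(image.astype(int) - np.array(bg_color)).sum(axis=-1)
    mask = diff < tol  # True where the pixel looks like background
    out = image.copy()
    out[mask] = new_background[mask]  # swap in the new background
    return out
```

In a training loop, each labeled photo would be run through this once per candidate background, multiplying the effective dataset size without new photo shoots.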

What challenges did you come across and what have you learned?

We came across many challenges while building Manus.AI. Without extensive experience in Python machine learning, the learning curve was steep: the process required several iterations before we had a model with all of our desired functionality. To complete this project, all of us had to significantly expand our knowledge of AI, and of object-detection models in particular. Additionally, finding a way to host the object-detection model and connect it to our front-end's real-time image capture presented challenges, as we had to reconfigure various elements of both components to allow for seamless communication. This challenge in particular taught us a lot about integrating AI models into a presentable front-end: which solutions provide the best performance, how to compensate for their drawbacks, and so on. We believe all of this will be valuable in our future coding pursuits, especially as we now plan to continue working on AI development.
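The model-hosting handoff described above can be sketched as a single request handler: the frontend captures a webcam frame, base64-encodes it, and POSTs it as JSON; the server decodes the frame, runs the model, and replies with a label. The wire format, the function names, and the stubbed model below are all illustrative assumptions, not the actual Manus.AI code.

```python
import base64
import json

def predict_stub(image_bytes):
    # Placeholder for the real object-detection model; the actual
    # model and its output format are not part of this sketch.
    return {"label": "hello", "confidence": 0.9}

def handle_frame(request_body):
    """Handle one frame from the frontend (hypothetical wire format).

    Assumes the request body is JSON of the form
    {"frame": "<base64-encoded JPEG>"} and returns a JSON string
    with the predicted sign.
    """
    payload = json.loads(request_body)
    image_bytes = base64.b64decode(payload["frame"])  # raw JPEG bytes
    result = predict_stub(image_bytes)                # run the model
    return json.dumps(result)
```

In practice this handler would sit behind a small web framework's POST route, with the React frontend calling it on a timer or per captured frame.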

What are you proud of and where do you see this going?

Despite the challenges, we got Manus.AI’s object-detection models performing at a strong accuracy rate. With our aim of maximizing accessibility, we are proud of the straightforward and seamless user experience we are able to provide. We hope Manus.AI will have a direct and immediate impact on the ASL community, but we see our project reaching much further. Due to the time constraints of the hackathon, we currently support only a small subset of the ASL vocabulary. Our next step is to train our model on more diverse datasets covering a more extensive set of signs, in order to fully achieve our goal of breaking down communication barriers for deaf individuals; with our streamlined Python data-collection tooling, this should be readily achievable. Additionally, we would like to build a native mobile application so that people can use Manus.AI even in settings without an internet connection, facilitating communication everywhere. This should likewise be doable with relative ease, as our React frontend could be reconfigured to work as a React Native mobile app. We hope you share our vision of creating a true impact for over half a million Americans with Manus.AI.
