Inspiration

We wanted to work with machine learning and object recognition, but we also wanted our project to be meaningful. We thought that building something to benefit an underrepresented community would be very useful.

What it does

Our model uses machine learning to generate a random word and convert it from Braille to speech, allowing a visually impaired user to hear each word.
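Once the model recognizes each Braille cell, the word can be reconstructed by a simple dot-pattern lookup before being passed to speech synthesis. The snippet below is a minimal sketch of that lookup step (the `BRAILLE_TO_LETTER` table and `cells_to_word` helper are illustrative names, not our actual code); it covers only letters a–j, which use dots 1, 2, 4, and 5 in standard Braille numbering.

```python
# Hypothetical lookup from recognized Braille dot patterns to letters.
# Standard Braille numbers the six dots 1-6; a-j use only dots 1, 2, 4, 5.
BRAILLE_TO_LETTER = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def cells_to_word(cells):
    """Translate a sequence of recognized dot patterns into a word."""
    return "".join(BRAILLE_TO_LETTER[frozenset(c)] for c in cells)

# Example: dots {1,2} -> "b", {1} -> "a", {1,4,5} -> "d"
print(cells_to_word([{1, 2}, {1}, {1, 4, 5}]))  # -> "bad"
```

The resulting string can then be handed to any text-to-speech engine to read the word aloud.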

How we built it

We built our model using Python, TensorFlow, Keras, and NumPy. We applied the knowledge we learned in the ML workshops at Bitcamp and received a lot of help from a few mentors and workshop volunteers, Sagar and Mark.
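A Braille-character classifier built with these tools might look like the sketch below. This is an illustrative small CNN, not our exact architecture: the layer sizes, the 28×28 input, and the 26-class output (one per letter) are all assumptions.

```python
# Hypothetical sketch of a Braille-cell image classifier in Keras.
# Layer widths, input shape, and the 26-class output are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26  # one class per Braille letter a-z (assumption)

def build_model(input_shape=(28, 28, 1)):
    """Small CNN for classifying single Braille-cell images."""
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_model()
# model.fit(train_images, train_labels, epochs=..., validation_split=0.2)
```

Training with `model.fit` on labeled Braille images, with a held-out validation split, is what let us track accuracy and catch the fitting problems described below.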

Challenges we ran into

  • Finding a dataset with pictures representative of the Braille one would see in real life. We could not find such a dataset, so we settled on one that was somewhat similar.
  • We were overfitting our model on data that only looked a certain way, not the way Braille looks in real life.
  • After we fixed the overfitting, we were underfitting our model by running only 20 epochs, which gave us poor results.
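One common way to address this kind of overfitting on uniform-looking images is random augmentation at training time, so the model sees more varied inputs. The sketch below uses Keras preprocessing layers; the specific transforms and their strengths are assumptions, not what we actually tuned.

```python
# Hypothetical augmentation pipeline to diversify uniform training images.
# The chosen transforms and magnitudes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomRotation(0.05),          # small rotations
    layers.RandomZoom(0.1),               # slight scale changes
    layers.RandomTranslation(0.05, 0.05), # small shifts
])

# Applied with training=True so the random transforms are active:
batch = tf.zeros((4, 28, 28, 1))
augmented = augment(batch, training=True)
```

Combining augmentation like this with a longer training run (more than the 20 epochs that underfit) is a standard way to balance the two failure modes.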

Accomplishments that we're proud of

  • Created our first machine learning model using new software and libraries
  • Overcame several challenges to get the model to actually work, reaching 91% accuracy

What we learned

We all learned a great deal about machine learning and implementing models to solve real-life problems. At first, we did not even know how to approach the solution, but we took gradual steps to achieve our goal.

What's next for TouchToneAI

We plan to implement the model in a mobile app that can take pictures of Braille and read them back to the user. We will build it with Swift for iOS and Kotlin for Android. We believe that once this app is released, it can help many people who are visually impaired.
