Inspiration

Providing a service for the visually impaired that goes beyond a cane.

What it does

Captures an image, runs it through a neural network, and outputs a descriptive sentence as speech.
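The flow above can be sketched as a simple capture → describe → speak pipeline. This is a minimal sketch with hypothetical stand-in functions (`describe_image`, `speak`), not the project's actual code:

```python
def describe_image(pixels):
    # Placeholder for the image-captioning neural network;
    # the real app runs a TensorFlow model here.
    return "a person walking a dog on a sidewalk"

def speak(sentence):
    # Placeholder for the text-to-speech API call;
    # the real app hands the sentence to Android's TTS engine.
    return f"[spoken] {sentence}"

def run_pipeline(pixels):
    # Glue the stages together: image in, spoken description out.
    sentence = describe_image(pixels)
    return speak(sentence)
```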

How we built it

With Android Studio, machine learning (TensorFlow), various APIs, and a lot of Stack Overflow.

Challenges we ran into

Converting the captured image into the data type the neural network expects as input.
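That conversion boils down to turning raw camera bytes into a numeric tensor. Here is a minimal NumPy sketch, assuming the network expects a normalized float32 batch tensor of shape (1, H, W, 3); the exact shape and [0, 1] scaling are assumptions, not the project's confirmed values:

```python
import numpy as np

def bitmap_to_input(rgb_bytes, height, width):
    """Convert raw RGB bytes (as captured by the camera) into the
    float32 batch tensor a typical TensorFlow image model expects.
    The (1, H, W, 3) shape and [0, 1] scaling are assumptions."""
    pixels = np.frombuffer(rgb_bytes, dtype=np.uint8)
    image = pixels.reshape(height, width, 3).astype(np.float32) / 255.0
    return image[np.newaxis, ...]  # add the batch dimension
```

Getting the dtype wrong (e.g. feeding uint8 where float32 is expected) is exactly the kind of mismatch that silently breaks inference, which is why staying consistent with data types mattered so much here.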

Accomplishments that we're proud of

Getting the button and camera capture to work as intended. Also, getting the text-to-speech API to work as intended with typed text input.

What we learned

How to use Android Studio, the importance of staying consistent with data types, and how TensorFlow works.

What's next for TextforBlind

Get the program to run end to end as intended. Keep the app from storing captured images in the camera app's gallery; this can be done by deleting each image after it has been run through the neural network. We would also like the app to capture photos in real time whenever the camera can focus, so it causes less inconvenience to the user, or to implement it in hardware: glasses with a button that captures the image and outputs the description to an earbud.
