Inspiration

We wanted to build something that could both demonstrate the advances in visual-identification technology and make life more convenient for the visually impaired.

What it does

This is a two-part project: a scavenger-hunt game built on image recognition, and a service that helps the visually impaired use menus at restaurants. The scavenger hunt, programmed with Watson and Swift, is a proof of concept for putting ML and AI to day-to-day use. The app can also take a picture of a restaurant menu and send it to an API written in Node.js. The API extracts the name, description, and price of each menu item and suggests 5 items the user may like, based on the descriptions and prices of their previous selections. This service can go a long way toward reducing the time and effort a blind person needs to order at a restaurant.
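The suggestion step described above can be sketched as a simple heuristic. This is an illustrative sketch under our own assumptions, not the actual server code: it assumes each menu item is an object with `name`, `description`, and `price` fields, scores items by word overlap with the descriptions of previous selections plus price proximity, and returns the top 5.

```javascript
// Hypothetical sketch of the suggestion heuristic: rank menu items by how
// similar their descriptions and prices are to the user's past selections.
function tokenize(text) {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

function suggestItems(menu, previousSelections, count = 5) {
  // Pool the words from every previously selected item's description.
  const likedWords = new Set();
  for (const item of previousSelections) {
    for (const word of tokenize(item.description)) likedWords.add(word);
  }
  // Average price of past selections, used to reward similarly priced items.
  const avgPrice =
    previousSelections.reduce((sum, item) => sum + item.price, 0) /
    (previousSelections.length || 1);

  return menu
    .map((item) => {
      // Count description words the user has "liked" before.
      let overlap = 0;
      for (const word of tokenize(item.description)) {
        if (likedWords.has(word)) overlap++;
      }
      // Score shrinks as the price moves away from the user's usual range.
      const priceScore = 1 / (1 + Math.abs(item.price - avgPrice));
      return { item, score: overlap + priceScore };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, count)
    .map((entry) => entry.item);
}
```

A real version could weight rarer words more heavily (e.g. TF-IDF) instead of raw overlap, but the shape of the ranking step is the same.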

How I built it

We combined APIs such as IBM Watson and Google Cloud Vision with our own algorithms to carry out the image-recognition functions. The web API was developed in Node.js, and the iOS app in Swift.
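On the Node.js side, the step that turns OCR output into structured menu items can be sketched like this. It is a minimal sketch, not our production code: it assumes the OCR stage returns plain text in which each item is a name line ending in a "$" price, optionally followed by description lines.

```javascript
// Hypothetical sketch: convert OCR'd menu text into structured items.
// Assumes each item is a "Name ... $price" line, with any following
// non-price lines treated as that item's description.
function parseMenu(ocrText) {
  const lines = ocrText.split("\n").map((l) => l.trim()).filter(Boolean);
  const priceRe = /\$\s*(\d+(?:\.\d{2})?)\s*$/; // trailing price like "$12.50"
  const items = [];

  for (const line of lines) {
    const match = line.match(priceRe);
    if (match) {
      // A line ending in a price starts a new menu item.
      items.push({
        name: line.slice(0, match.index).replace(/[.\s]+$/, ""),
        price: parseFloat(match[1]),
        description: "",
      });
    } else if (items.length > 0) {
      // Otherwise, append the line to the previous item's description.
      const current = items[items.length - 1];
      current.description = (current.description + " " + line).trim();
    }
  }
  return items;
}
```

Real menus are messier than this (multi-column layouts, prices without "$", OCR noise), so the actual parsing leans on layout information from the vision APIs rather than a single regex.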

Challenges I ran into

The server we were using was not working properly, so we could not connect the mobile client to the back end that processes the text on menus.

Accomplishments that I'm proud of

We're proud of successfully leveraging powerful machine-learning services, including IBM Watson and Google Cloud Vision, to develop a versatile app that can both showcase our skills and serve a practical need.

What I learned

We learned that sometimes, troubleshooting can be the biggest and most time-consuming part of making something new.

What's next for VisionHelper

While this app is only a proof of concept, it could have wide-ranging applications. With a little more fine-tuning, we could turn it into both a captivating mobile game and a service that reliably helps the visually impaired in a meaningful way.
