Inspiration

All of us have driven past a local restaurant that looked interesting but had neither the time nor the information to look into it. We wanted something that automatically gauges a driver's interest, then displays and saves key facts about any restaurant, business, or service that catches their eye.

What it does

We track eye movement and gaze duration to assess the driver's interest level. The car's geographical location is then used to pinpoint the exact place the driver is looking at, and an informational pop-up appears on the car's multimedia display. The location is also saved for future reference.
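The interest-detection step described above can be sketched as follows. This is a minimal illustration, not our deployed code: the gaze-sample format, the dwell-time threshold, and the detectInterest function are all assumptions made for the example.

```javascript
// Assumed dwell time (ms) of sustained gaze that counts as "interest".
const DWELL_THRESHOLD_MS = 1500;

// Each sample: { t: timestampMs, target: string | null }, where `target`
// is the roadside region the gaze model reports the driver is looking at.
function detectInterest(samples) {
  const dwell = new Map(); // target -> accumulated gaze time in ms

  for (let i = 1; i < samples.length; i++) {
    const prev = samples[i - 1];
    // Accumulate time only while the gaze stays on the same target.
    if (prev.target !== null && prev.target === samples[i].target) {
      dwell.set(
        prev.target,
        (dwell.get(prev.target) || 0) + (samples[i].t - prev.t)
      );
    }
  }

  // Return every target whose accumulated dwell time crosses the threshold.
  return [...dwell.entries()]
    .filter(([, ms]) => ms >= DWELL_THRESHOLD_MS)
    .map(([target]) => target);
}
```

In the full pipeline, each flagged target would then be cross-referenced with the car's GPS position to resolve the actual business and trigger the pop-up.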

How we built it

The AI model that tracks eye movement and gaze direction is built and deployed with Google Cloud AutoML. We designed a front-end prototype in Figma and built a demo in HTML/CSS/JavaScript to illustrate the functionality.

Challenges we ran into

We initially attempted to build the model with TensorFlow, but we were unable to get it running on our local machines, so we moved to Google Cloud AutoML instead.

Accomplishments that we're proud of

The accuracy of our deployed Google Cloud AutoML model.

What we learned

How to use Google Cloud AutoML and TensorFlow.

What's next for EyeCatcher

Work with Toyota and other car manufacturers to refine the system and integrate a fully functional version into production vehicles.
