Inspiration

Globally, an estimated 285 million people of all ages are visually impaired, of whom 39 million are blind; people aged 50 and older account for 82% of all blind people. The biggest challenge for a blind person, especially one with complete loss of vision, is navigating around places. Blind people move around their own house easily without any help because they know where everything is, but outside areas are a different story. Our idea offers a simple solution to this problem: a hardware product built on a Raspberry Pi that leverages deep learning to help blind people get an idea of their surroundings by continuously detecting objects and giving them voice commands to navigate.

What it does

A deep learning model detects nearby objects for blind people, and a voice assistant gives the user feedback about their surroundings.

How we built it

We used the deep learning libraries TensorFlow and Keras for object detection and person identification. For object detection, we trained the model on the COCO dataset and, after training, converted it to TensorFlow Lite format using the TensorFlow Lite converter. To improve the model's accuracy, we used transfer learning and data augmentation, taking MobileNetV2 pre-trained on the ImageNet dataset as the base model. We then deployed the model in an Android app built in Android Studio, using TensorFlow Lite, to showcase a prototype of our idea.
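The transfer-learning setup described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual training code: the class count, input size, and augmentation choices are assumptions, and the frozen MobileNetV2 backbone (pre-trained on ImageNet) is topped with a new classifier head.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 10  # hypothetical; the real model was trained on COCO categories

# Data augmentation applied on the fly during training to reduce overfitting.
augment = keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# MobileNetV2 backbone pre-trained on ImageNet, classifier head removed.
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained features (transfer learning)

inputs = keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)          # run backbone in inference mode
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

After the new head converges, the top layers of the backbone can optionally be unfrozen and fine-tuned with a low learning rate.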

Challenges we ran into

Because the hardware needed to run deep learning models is costly, we deployed our models on Android devices for the prototype; for commercial purposes, we will use dedicated hardware to run our models.

Accomplishments that we're proud of

We are proud of our object detection model, which is decently accurate, along with our website, which can run the whole object detector.

What we learned

We learned transfer learning and data augmentation techniques, which really helped us reduce overfitting and increase the accuracy of our models. We also learned to deploy machine learning models to production using TensorFlow Lite.
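The TensorFlow Lite deployment step mentioned above looks roughly like this. The tiny stand-in model here is an assumption for illustration; in the project, the trained detector was converted the same way and the resulting `.tflite` file was bundled into the Android app.

```python
import tensorflow as tf

# Tiny stand-in model so the sketch is self-contained; the real project
# converted its trained object detector instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert the Keras model to the TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink weights for mobile
tflite_model = converter.convert()

# Save the flatbuffer; an Android app loads this file via the TFLite runtime.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Sanity-check the converted model with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
```

On Android, the same flatbuffer is loaded with the TensorFlow Lite Interpreter API instead of the Python one shown here.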

What's next for NewVision

We want to add more features, such as a better voice assistant and estimating the distance to detected objects.
