Inspiration

The motivation for our project came from a friend's story: she grew up believing that bread is green because her dad was color blind and could not identify mold on bread. This story led us to think of simple tasks that could be difficult to perform due to vision impairments like color blindness (e.g., picking out the ripest fruits or seeing a green traffic light).

What it does

The program lets the user take pictures in real time through a camera interface and returns the names and colors of the objects it detects in the image.

How we built it

We used YOLOv5 (You Only Look Once, version 5) to detect the prominent objects in the input image and produce a cropped image of each one. We then passed these cropped images into a Python color classification model that identified each object's dominant color. Finally, we used Tkinter to build the UI for the program.
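The color classification step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the YOLOv5 crop is already available as an RGB pixel array, and the palette of named colors and the k-means approach (using scikit-learn, one of the libraries mentioned below) are our own assumptions about how such a classifier might work.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical palette of reference colors (RGB); the project's real
# palette is not shown in the write-up.
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def dominant_color(pixels: np.ndarray, k: int = 3) -> tuple:
    """Cluster the crop's pixels with k-means and return the center
    of the largest cluster as the dominant RGB color."""
    flat = pixels.reshape(-1, 3)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flat)
    counts = np.bincount(km.labels_)
    return tuple(km.cluster_centers_[counts.argmax()])

def name_color(rgb: tuple) -> str:
    """Map an RGB triple to the nearest palette color by squared
    Euclidean distance in RGB space."""
    return min(PALETTE, key=lambda n: sum((a - b) ** 2 for a, b in zip(rgb, PALETTE[n])))

# Example: a synthetic crop that is 80% greenish, 20% light gray.
pixels = np.vstack([
    np.tile([30, 160, 40], (80, 1)),
    np.tile([200, 200, 200], (20, 1)),
]).astype(float)
print(name_color(dominant_color(pixels, k=2)))  # → green
```

A perceptual color space such as CIELAB would give more accurate nearest-color matches than raw RGB distance, which is one direction the precision improvements mentioned below could take.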

Challenges we ran into

One of the biggest challenges we faced during the hackathon was defining the interfaces between the three components (object detection, color classification, and the UI) and combining them into a seamlessly working prototype.

Accomplishments that we're proud of

Our program was able to accurately identify the labels and colors of various objects. Additionally, we built an interface that is simple and easy to use for visually impaired users.

What we learned

We learned a lot about different Python libraries, including Pillow, scikit-learn, NumPy, Matplotlib, and OpenCV (cv2). We also learned a lot about color vision deficiency (CVD): its different types, and which user interface themes suit people with different degrees of CVD.

What's next for Chrome Buddy

The next step is to improve the precision of color detection and to train the object detection model on a dataset augmented with modified color schemes and other transformations. In the future, we plan to support a wider range of color shades and to offer customized user interfaces for the different types of CVD.
