Inspiration

Before cooking, people, including our parents, often spend significant time searching for recipes online and then more time digging through their pantries and refrigerators for ingredients. This process is time-consuming, and it can be even harder for people with visual impairments: color-blind people who struggle to distinguish fruits, meats, and packaging, or people with low vision who struggle to read ingredient labels. As a solution, we developed SightChef to take over the work of identifying ingredients and finding recipes.

What it does

People can use the SightChef app to take pictures of their pantries and refrigerators. SightChef then uses OWLv2, Google's state-of-the-art open-vocabulary object detection model, to identify the ingredients in the images, and uses those ingredients to search its recipe database. Users can also add any missing ingredients to their shopping list to complete a recipe.
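Conceptually, the shopping-list step is a set difference between a recipe's ingredient list and what was detected in the user's photos. A minimal sketch of that idea (the function and ingredient names are illustrative, not SightChef's actual code):

```python
def missing_ingredients(recipe_ingredients, detected_ingredients):
    """Return the recipe ingredients not found in the user's photos."""
    detected = {i.lower() for i in detected_ingredients}
    return sorted(i for i in recipe_ingredients if i.lower() not in detected)

# Example: a pasta recipe against a pantry scan that found tomatoes and garlic.
print(missing_ingredients(
    ["tomato", "garlic", "spaghetti", "basil"],
    ["Tomato", "Garlic"],
))  # -> ['basil', 'spaghetti']
```

Lower-casing both sides keeps the comparison robust to how the detector happens to capitalize its labels.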

How we built it

To handle ingredient detection, we used a zero-shot object detection model so that it could detect objects it had never seen during training (useful for niche ingredients). The second stage of our pipeline matches the detected ingredients against the recipe database we created by scraping popular food databases online. Finally, we match the ingredient list against recipes and return the results to the user.
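The detection stage can be sketched with the Hugging Face `transformers` implementation of OWLv2. This is a hedged sketch under assumptions: the checkpoint name, prompt template, and threshold below are our choices, not necessarily what SightChef ships.

```python
def build_prompts(ingredients):
    """OWLv2 is text-conditioned: each candidate ingredient becomes a text query."""
    return [f"a photo of {name}" for name in ingredients]

def detect_ingredients(image, candidate_ingredients, threshold=0.2):
    """Run zero-shot detection over a pantry photo (a PIL image).

    The model scores each text query against regions of the image, so it can
    flag ingredients it was never explicitly trained on.
    """
    import torch
    from transformers import Owlv2Processor, Owlv2ForObjectDetection

    checkpoint = "google/owlv2-base-patch16-ensemble"  # assumed checkpoint
    processor = Owlv2Processor.from_pretrained(checkpoint)
    model = Owlv2ForObjectDetection.from_pretrained(checkpoint)

    prompts = build_prompts(candidate_ingredients)
    inputs = processor(text=[prompts], images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Map detections above the confidence threshold back to ingredient names.
    target_sizes = torch.tensor([image.size[::-1]])  # PIL size is (w, h)
    results = processor.post_process_object_detection(
        outputs, threshold=threshold, target_sizes=target_sizes
    )[0]
    return {candidate_ingredients[i] for i in results["labels"].tolist()}
```

Because the queries are free text, adding support for a new ingredient is just adding a string to the candidate list, with no retraining.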

Challenges we ran into

One challenge we ran into was gathering recipe data: the first dataset contained so many ingredients that some were outright strange, such as "tin." When we finally found a clean dataset, we ran into further difficulty scraping the ingredient and recipe information from the website because of its structure. Additional challenges included a limited dataset of pantry images and our limited experience developing mobile apps.

Accomplishments that we're proud of

We are proud of developing a model that can identify ingredients in an image using zero-shot object detection. Another accomplishment we are proud of is devising an efficient method to find all applicable recipes from a dataset of a hundred thousand recipes given a list of ingredients.
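One way to make that lookup efficient at a hundred-thousand-recipe scale is an inverted index from ingredient to recipe IDs: a recipe matches when every one of its ingredients appears in the pantry, and only recipes sharing at least one ingredient with the pantry ever get examined. A sketch of the idea, not SightChef's exact implementation:

```python
from collections import defaultdict

def build_index(recipes):
    """recipes: {name: set of ingredients}. Returns ingredient -> recipe names."""
    index = defaultdict(set)
    for name, ingredients in recipes.items():
        for ing in ingredients:
            index[ing].add(name)
    return index

def cookable(recipes, index, pantry):
    """Recipes whose every ingredient is in the pantry.

    We only check recipes pulled from the index, so a query never scans the
    full recipe table.
    """
    candidates = set().union(*(index.get(ing, set()) for ing in pantry))
    return sorted(r for r in candidates if recipes[r] <= set(pantry))

recipes = {
    "bruschetta": {"tomato", "garlic", "bread"},
    "pesto pasta": {"basil", "garlic", "spaghetti"},
    "garlic bread": {"garlic", "bread"},
}
index = build_index(recipes)
print(cookable(recipes, index, ["tomato", "garlic", "bread"]))
# -> ['bruschetta', 'garlic bread']
```

The same index also supports "almost cookable" queries (recipes missing only one or two ingredients), which pairs naturally with the shopping-list feature.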

What we learned

We learned how to use and optimize OWLv2 for zero-shot object detection. We also learned how to use SQL to improve data collection and sorting. Lastly, we learned how to use Beautiful Soup to scrape data from an embed that the page only renders via JavaScript on click.
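When a page injects its data via JavaScript, the payload is often sitting in a JSON script tag rather than in the rendered HTML. We used Beautiful Soup for this in practice; the sketch below shows the same extraction idea with only the standard library so it is self-contained, and the HTML fragment and tag id are hypothetical:

```python
import json
import re

# Hypothetical page fragment: recipe data embedded as JSON in a <script> tag
# that the site only renders into visible HTML after a click.
html = """
<script id="recipe-data" type="application/json">
{"title": "Bruschetta", "ingredients": ["tomato", "garlic", "bread"]}
</script>
"""

def extract_recipe(page):
    """Pull the JSON payload out of the script tag and parse it."""
    match = re.search(
        r'<script id="recipe-data" type="application/json">\s*(.*?)\s*</script>',
        page, re.DOTALL,
    )
    return json.loads(match.group(1)) if match else None

recipe = extract_recipe(html)
print(recipe["title"], recipe["ingredients"])
```

With Beautiful Soup the `re.search` becomes a `soup.find("script", id=...)` lookup, but the key insight is the same: parse the embedded JSON instead of scraping the rendered markup.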

What's next for SightChef

In the future, we will further develop SightChef's machine-learning model to improve its accuracy and speed. Since we also collected detailed data about each recipe, we will add filters such as cuisine, nutritional values, and dietary restrictions to improve the user experience. Lastly, we hope to convert it into a mobile app for better accessibility and ease of use.
