Inspiration
"How can we leverage technology toward a sustainable lifestyle?" Food availability, health and wellness, and environmental impact are all important aspects of a sustainable lifestyle. Carbon emissions have risen sharply over the years, with catastrophic environmental consequences for millions of people worldwide. Simple fixes often have the greatest impact, and our solution helps people bring about that change. We built an application that lets users upload an image of the food they eat or purchase; our ML model predicts the food and its category, and the app then calculates key statistics about it, such as its carbon emissions and a quality rating. Finally, it suggests alternative foods in the same category with a lower carbon footprint.
What it does
CO2 Eats has a simple and intuitive workflow.
- A user either uploads an image of a food from their gallery or uses their camera.
- CO2 Eats runs that image through a machine learning model to determine the type of food.
- CO2 Eats queries an external API for the food's carbon footprint and suggests alternatives with a lower footprint.
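The last step above, picking lower-carbon alternatives, can be sketched as a small filter-and-sort. The item shape `{ name, category, co2PerKg }` and the example footprint numbers are illustrative assumptions, not the actual foodprint API schema:

```javascript
// Hypothetical item shape: { name, category, co2PerKg }.
// Given the classified food and a catalog of foods with known footprints,
// return same-category items with a strictly lower footprint, best first.
function suggestAlternatives(food, catalog) {
  return catalog
    .filter((item) => item.category === food.category && item.co2PerKg < food.co2PerKg)
    .sort((a, b) => a.co2PerKg - b.co2PerKg);
}

// Example with made-up footprint values:
const beef = { name: 'beef', category: 'protein', co2PerKg: 27 };
const catalog = [
  { name: 'lentils', category: 'protein', co2PerKg: 0.9 },
  { name: 'chicken', category: 'protein', co2PerKg: 6.9 },
  { name: 'rice', category: 'grain', co2PerKg: 4 },
];
console.log(suggestAlternatives(beef, catalog).map((i) => i.name));
// → [ 'lentils', 'chicken' ]
```

Keeping this step a pure function makes it easy to test independently of the API call that supplies the catalog.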
How we built it
Our tech stack comprises a frontend built in React with Vite, styled with React Bootstrap and MUI. For the backend, we used ASP.NET with C# to connect to the frontend application. Our ML model was trained with ML.NET for image classification on a dataset of 23,873 images obtained from Kaggle. The carbon footprint information comes from the foodprint API on RapidAPI.
Challenges we ran into
Initially, we planned to use the FoodVisor API, which would have eliminated the need for an in-house machine learning model since it advertises object detection for foods plus various nutritional information. However, it did not integrate as expected, so we decided to build our own image classification model.
Accomplishments that we're proud of
We are proud that we produced a working implementation of CO2 Eats with all the features we came up with during our planning phase. We are also proud that, in a short amount of time, we trained a reliable machine learning model with 82.3% accuracy and integrated it with our frontend.
What we learned
We strengthened our React.js skills and got more experience using C# and ASP.NET minimal APIs as a backend platform. We also learned how to use ML.NET to create image classification models.
What's next for CO2 Eats
Things to improve:
- Integrating additional APIs that let users view locations where they could purchase the alternative food items
- Expand the application to account for various food item categories with a larger trained image classification model
- Improved styling and interactivity for mobile and web application users
- More nutritional information and detailed information about the user's food item including health facts, water usage, and other greenhouse gas emission calculations.
- Share local food producers & restaurants in the recommendations section of the results page based on the user's location, thus promoting local businesses.
Built With
- asp.net
- axios
- c#
- javascript
- ml.net
- react.js