Inspiration

Having grown up in developing countries with limited recycling initiatives, we were used to discarding all our waste into a single bin. It was only upon arriving in Canada that we were introduced to the complexities of a segregated waste-disposal system.

Even as we adapted to this system, discerning what was truly recyclable remained a challenge.

This challenge is not unique to us: people are often unable to accurately assess the recyclability of the items they toss into designated “recycling bins”. Waste-management personnel are then left with the inefficient task of sifting through mixed materials, which results in a huge loss of revenue for firms.

While initial assumptions might place the blame on individuals, a deeper analysis revealed the intricate nature of recycling. Modern plastic items are often made of composite materials, which contributes to this ambiguity, and perfectly recyclable items can be ruined by contamination with other materials.

The reality is that misinformation and a lack of adequate technology are to blame. Thus, to address this pressing issue, we present our solution: BinWise.

Through our research and personal experiences, we learned that there is a correlation between income levels and recycling. Lower-income parts of cities tend to be more littered and have higher rates of incorrect recycling.

We wanted to target this demographic of people with our app, to empower them to clean up their neighbourhoods and recycle correctly.

What it does

BinWise is a cutting-edge mobile application that enables users to easily categorize their waste. By simply taking a photo of an item, the image is processed using an ML classification model to determine the type of waste, whether it's "Recyclable," "Organic," or "Electronic." Once identified, the app guides the user on the appropriate bin for disposal and logs this action in a database for reference.
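The classify-and-guide step described above can be sketched roughly as follows. This is a minimal illustration, not our production code: the three category names come from the app, but the function name, the probability-vector format, and the bin labels are assumptions for the sketch.

```python
# Hypothetical sketch of BinWise's post-classification step: map the ML
# model's class probabilities to a waste category and disposal guidance.

CATEGORIES = ["Recyclable", "Organic", "Electronic"]

# Illustrative bin guidance; the real app's wording may differ.
BIN_GUIDANCE = {
    "Recyclable": "blue recycling bin",
    "Organic": "green compost bin",
    "Electronic": "e-waste drop-off",
}

def classify_waste(probabilities):
    """Pick the highest-probability category and its disposal guidance."""
    index = max(range(len(probabilities)), key=lambda i: probabilities[i])
    category = CATEGORIES[index]
    return category, BIN_GUIDANCE[category]

category, bin_hint = classify_waste([0.1, 0.7, 0.2])
# category == "Organic", bin_hint == "green compost bin"
```

In the app itself, the chosen category and guidance are also logged to the database so users can refer back to past classifications.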

How we built it

To build the frontend, we first designed the concept in Figma, then built the React Native app with Expo, using React Native components and Expo Router for the app's structure and navigation. We then integrated expo-camera to snap pictures of items and axios to communicate with the backend.

For the backend of the whole application, we used NodeJS and Express along with a MongoDB Atlas instance for data persistence. We deployed the server instance on Heroku using the Heroku CLI and Git version control. The TensorFlow.js Node library ran the machine learning model directly on the server, which meant we did not have to integrate API calls to a dedicated AI model server. The main challenge in implementing the backend was hosting: the Google Cloud SDK proved really hard to use because of problems with PATH variables, leading to hours of time spent trying to get it to work, so we evaluated multiple hosting services and chose the quickest one to set up.

For the image classification ML algorithm, we employed TensorFlow's Keras and OpenCV for our multi-category classification, utilizing advanced computer vision techniques. For transfer learning, our code harnesses the ResNet50 model, a deep convolutional neural network optimized for image classification. By leveraging transfer learning, we can capitalize on the insights gained from extensive datasets, such as ImageNet, and adapt them to our specific classification needs, even when our task has limited data.
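A transfer-learning setup like the one described above can be sketched as follows. This is an illustrative sketch, not our exact training code: the head layer sizes and the frozen-base choice are assumptions, and in the real setup the base would be loaded with `weights="imagenet"` (we pass `weights=None` here only to keep the sketch light).

```python
# Hypothetical sketch of ResNet50 transfer learning with Keras for a
# three-way waste classifier (Recyclable / Organic / Electronic).
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

# For real transfer learning, use weights="imagenet" to load the
# pretrained convolutional features; weights=None avoids the download here.
base = ResNet50(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),      # collapse feature maps to a vector
    layers.Dense(128, activation="relu"), # small trainable head
    layers.Dense(3, activation="softmax"),# one output per waste category
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Only the small head on top is trained on the waste dataset, which is what makes the approach viable with limited data.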

Challenges we ran into

Sergio: The greatest challenges for me were figuring out how to set up and utilize React Native's Expo Router correctly and how to make API calls to the backend. Three days ago I didn't even know React, and within a couple of days I sifted through pages and pages of documentation and videos and figured out how to implement an app in React Native!

Varun: For the machine learning side of things, the challenges were finding the right datasets, fine-tuning the model, and the long wait times for the model to train. I learned a lot about supervised machine learning, data augmentation, web scraping, and troubleshooting.

Khushil: Before this journey, I was unfamiliar with UI/UX design. From the initial stages of wireframing to conceptualizing designs on Figma, I delved into the nuances of crafting user interfaces. This experience not only enriched my understanding of aesthetic and functional design but also shaped my perspective on constructing cohesive and user-friendly apps.

Accomplishments that we're proud of

What we learned

What's next for BinWise

Built With
