Inspiration

Lunchtime at the Georgia Tech Student Center is usually a mad rush for Chick-fil-A and Panda Express. Even if you come long after the lines for food disappear, you can tell the lunch rush has happened from the signs of battle left behind: an overflowing trash bin so full that even pushing down on it won't bring the pile down. People often throw all of their garbage into the trash for convenience, even though much of it can actually be recycled. For example, Panda Express lunch boxes are made entirely of paper, as are the takeout bags from Chick-fil-A. We also noticed that the recycling bins are further separated into different types, such as plastic bottles, cans, and paper. From this, we thought to build an automatic trash sorter that would be environmentally friendly, relatively cheap, and easy to use. This way, we can not only separate recyclables from trash, but also sort them into different types of recyclables for disposal in the appropriate bins.

The name “TreePlenish” comes from a blend of “replenish,” as in replenishing the environment, and “tree,” since our device reroutes items through different “branches,” or openings from the central chute, which can be thought of as the “trunk.”

What it does

Our prototype sorts disposables into plastic bottles, aluminum cans, and trash using an ultrasonic sensor to detect when an object is placed, a camera to take an image of it, and servo motors that control platforms directing each item into its respective bin. The user places an object on the top platform, and it drops through one of three openings for bottles, cans, and trash.

How we built it

We built the mechanical portion of the design with four wooden posts and acrylic sheets as walls, with spaces for the motors and the two side openings. We cut notches in the posts for the servo motors to slide into, and secured both the motors and acrylic sheets to the posts with screws. We also mounted a tall wooden post on top of the design to serve as a camera mount. Each platform in the center of the chute has two servo motors attached to one edge that rotate it through a 90-degree range of motion.

All power and control signals for the motors and ultrasonic sensor run through the GPIO pins on the Raspberry Pi, and the camera feeds into the Pi over USB. We wrote the firmware for the mechanical device in Python. It takes readings from the ultrasonic sensor to determine when the camera should photograph the object placed on the top platform, then sends that photo as an HTTP POST request to our external server running the object-detection model. Upon receiving a response, the motors are signaled to rotate into a specific orientation depending on the type of object detected.
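The sensing logic above can be sketched in pure Python, leaving out the GPIO wiring. The baseline distance and trigger margin below are illustrative assumptions, not the exact values from our firmware; an HC-SR04-style ultrasonic sensor reports distance via the round-trip time of a sound pulse.

```python
# Sketch of the firmware's "object placed?" check (GPIO calls omitted).
# Sound travels ~34,300 cm/s, and the echo pulse covers the round trip.
SPEED_OF_SOUND_CM_S = 34300

def echo_to_distance_cm(pulse_seconds: float) -> float:
    """Convert an echo pulse duration into a one-way distance in cm."""
    return pulse_seconds * SPEED_OF_SOUND_CM_S / 2

def object_present(distance_cm: float,
                   empty_platform_cm: float = 20.0,
                   margin_cm: float = 3.0) -> bool:
    """An object has been placed when the reading drops well below the
    empty-platform baseline (threshold values are illustrative)."""
    return distance_cm < empty_platform_cm - margin_cm
```

When `object_present` returns True, the firmware captures a frame and fires off the POST request; the margin keeps sensor noise from triggering spurious photos.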

For our server-side software, we trained a YOLOv8 model on a custom dataset of bottles, cans, and other disposables, and used the resulting PyTorch model to run inference on the images captured by the camera. We hosted the model behind a Flask server since the Pi doesn't have enough compute to run the model itself. The Pi sends a POST request to the Flask server with the captured image, the server runs the ML model, and the detected object's class is sent back to the Raspberry Pi as a JSON object. The Pi decodes this response to determine which servo motors should turn and by how much, so the object falls into the correct compartment.
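The decoding step can be sketched as a lookup from the server's JSON response to per-motor targets. The class labels, motor names, and angles here are assumptions for illustration, not our exact values:

```python
# Illustrative routing table: each detected class maps to a list of
# (motor_name, target_angle_degrees) commands. Tilting the top platform
# one way drops bottles, the other way drops cans, and leaving it level
# lets trash fall straight through. All names/angles are hypothetical.
ROUTING = {
    "bottle": [("top_left", 90), ("top_right", 0)],
    "can":    [("top_left", 0),  ("top_right", 90)],
    "trash":  [("top_left", 0),  ("top_right", 0)],
}

def decode_response(payload: dict) -> list:
    """Turn a {"class": "<label>"} JSON payload from the server into
    motor commands, defaulting to the trash route for unknown labels."""
    label = payload.get("class", "trash")
    return ROUTING.get(label, ROUTING["trash"])
```

Defaulting unrecognized labels to the trash route is the safe failure mode: a misrouted recyclable contaminates a recycling bin, but a recyclable in the trash just misses an opportunity.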

Challenges we ran into

We ran into different challenges in both the software and electrical components of our project. On the software side, limited compute resources restricted the size of the datasets and the number of epochs we could train on. Google Colab also frequently shut down our runtime, disrupting model training, and we had to rotate through multiple Google accounts because we were often not allocated a GPU. As a result, our model sometimes misidentifies non-bottles and non-cans as one of the two. We also tried using a Tiny YOLOv4 model to run inference directly on the Raspberry Pi, but due to certain software updates we were unable to get it working, so we resorted to hosting the model on an external server.

Electrically, the final integration of all components was very difficult. Integrating sensors, motors, controllers, and displays is always a challenge, but the combination of computer-vision inputs with more typical pin-based inputs and outputs made the final product especially hard to bring together. Cable organization was an early source of trouble, causing random behavior in our motors; we overcame it by routing all our cables neatly and tracing them with a multimeter.

The servo motor controls ultimately proved to be the most challenging electrical part of the project. Because Raspberry Pis aren't built for precise pin timing, sending PWM signals to control the servos was very difficult, and we constantly ran into stutter and jumpiness that made the project look unpolished and inconsistent. We overcame this by using a Raspberry Pi library that generates hardware PWM instead of software PWM, which is much more accurate. It was also hard to keep track of all four motors, so we relied on heavy labeling and cable tracing to ensure our connections were correct. Even then, we often found ourselves waiting for the computer to reboot after creating a short by connecting wires incorrectly!
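To see why timing jitter causes stutter: a hobby servo reads a 50 Hz PWM signal whose pulse width encodes the target angle, so even small timing noise from software PWM shows up as visible motion. A minimal sketch of the angle-to-duty-cycle mapping, assuming the common 500–2500 µs pulse range (the exact range depends on the servo model):

```python
# Standard hobby-servo PWM convention (values are the common defaults,
# not necessarily our servos' exact calibration).
PERIOD_US = 20000      # 50 Hz frame
MIN_PULSE_US = 500     # ~0 degrees
MAX_PULSE_US = 2500    # ~180 degrees

def angle_to_duty_percent(angle: float) -> float:
    """Map an angle in [0, 180] to the duty-cycle percent a hardware-PWM
    channel would be programmed with."""
    angle = max(0.0, min(180.0, angle))
    pulse = MIN_PULSE_US + (angle / 180.0) * (MAX_PULSE_US - MIN_PULSE_US)
    return pulse / PERIOD_US * 100.0
```

A 90-degree command works out to a 1500 µs pulse, or 7.5% duty; hardware PWM holds that pulse width steady where a software-timed loop drifts by tens of microseconds, which is enough to make a servo twitch.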

Accomplishments that we're proud of

We are really proud of combining many aspects of engineering and computing to create an environmentally friendly, easy-to-use device. This project leveraged our team members' software and electrical expertise, but also required us to learn and prototype a mechanical design, something none of us are experts in. We are proud of taking on this challenge, fighting through our setbacks, and finding alternatives to malfunctioning hardware and software.

What we learned

Overall, we learned a lot about mechanical design and its many design considerations. Most of us on the team hadn't worked with many of the components we used, so learning about the different power tools and electrical parts was really interesting. We also learned how machine learning models can be hosted in a range of environments, from a simple Flask server to cloud platforms like AWS and Microsoft Azure.

What's next for TreePlenish

Moving forward, we want to add more branches for additional kinds of disposables. Many jurisdictions now require recyclables to be separated into types such as aluminum, plastic, paper, and glass. While TreePlenish currently distinguishes only plastic, aluminum, and landfill items, other types of recyclables can be added in the future. We could also improve our model by training it on more images, and design a more efficient and compact way of routing items into their bins.
