Inspiration

Recycling only works if waste is sorted correctly, and humans are terrible at that. Meet WALL-3: an autonomous robot that automatically picks up and sorts recyclables from the streets. Inspired by everyone's favorite trash-compacting robot, we wanted to take the human error out of sustainability by building a rover that actively reduces contamination at the source.

What it does

WALL-3 is a mobile, autonomous robot that cleans up its environment. Using computer vision, it spots stray bottles, cans, and recyclables as it navigates streets and sidewalks. Once WALL-3 makes its way to an item, it automatically scoops it up and uses a machine learning model to sort the recyclables on the fly.

How we built it

Edge computing & vision pipeline: The system is orchestrated by a Raspberry Pi serving as the central autonomous compute node. We engineered a custom Python-based software stack that takes in a live video feed from an onboard Arducam, allowing us to run our object detection models for real-time material classification directly on the edge.
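A minimal sketch of one iteration of that loop, assuming a `grab_frame` callable wrapping the camera and a `detect(frame)` wrapper around the detection model (both names, and the threshold, are illustrative rather than our actual API):

```python
CONF_THRESHOLD = 0.5  # illustrative; tuned empirically on the rover

def keep_confident(detections, threshold=CONF_THRESHOLD):
    """Keep only (label, confidence, bbox) tuples the model is sure about."""
    return [d for d in detections if d[1] >= threshold]

def pipeline_step(grab_frame, detect, threshold=CONF_THRESHOLD):
    """One iteration of the vision loop: grab a frame, run detection,
    and filter low-confidence hits before handing off to navigation."""
    frame = grab_frame()
    if frame is None:  # camera dropped a frame
        return []
    return keep_confident(detect(frame), threshold)
```

Keeping the filter separate from the capture step made it easy to tune the confidence cutoff without touching camera code.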

Autonomous navigation: We translate the real-time visual data into precise pathfinding vectors. The Raspberry Pi interfaces with a tank-drive system powered by DC motors, enabling 360-degree maneuverability. The control logic continuously adjusts the rover's trajectory to align with the coordinates of the targeted debris.
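The steering logic can be sketched as a simple proportional controller that maps the target's horizontal offset in the frame to differential track speeds (the constants here are illustrative, not our tuned values):

```python
def tank_drive_command(target_x, frame_width, base_speed=0.6, gain=0.8):
    """Map a target's horizontal pixel position to (left, right) track speeds.

    error runs from -1 (target at far left) to +1 (far right); turning toward
    the target speeds up one track and slows the other. Speeds are normalized
    to [-1, 1] for the motor driver.
    """
    error = (target_x - frame_width / 2) / (frame_width / 2)
    left = base_speed + gain * error
    right = base_speed - gain * error
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left), clamp(right)
```

A centered target drives both tracks at the base speed; an off-center target pivots the rover toward it.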

Proximity-triggered manipulation: Once the rover navigates within optimal range of the target, the system triggers a servo-actuated collection arm. This mechanism sweeps the item from the street and transfers it into the robot for an internal sorting system.
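A sketch of that trigger, using the bounding box's share of the frame as a cheap proximity proxy (a bigger box means a closer item); the threshold, sweep angles, and the `set_angle` servo callback are hypothetical stand-ins:

```python
import time

PICKUP_AREA_RATIO = 0.18  # hypothetical tuned threshold

def should_scoop(bbox, frame_w, frame_h, ratio=PICKUP_AREA_RATIO):
    """Trigger when the target's bounding box fills enough of the frame."""
    x, y, w, h = bbox
    return (w * h) / (frame_w * frame_h) >= ratio

def scoop(set_angle, sweep=(0, 110), dwell=0.4):
    """Drive the collection arm through one sweep via a servo callback."""
    lo, hi = sweep
    set_angle(hi)      # sweep the arm through the item
    time.sleep(dwell)  # let the item settle into the intake
    set_angle(lo)      # return to the rest position
```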

Sorting: The core mechanical innovation is our smart, dynamic sorting hopper. Operating as a motorized see-saw, the mechanism relies on the initial computer vision classification to autonomously actuate and tilt, physically routing the item into the respective bin for its type of material.
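The routing decision itself reduces to a lookup on the vision classification; a sketch, with a hypothetical label set standing in for the model's actual classes:

```python
# Hypothetical label set; the real model's class names may differ.
RECYCLABLE = {"bottle", "can", "paper"}

def hopper_tilt(label):
    """Return the see-saw motor direction for a classified item:
    -1 tilts toward the recycling bin, +1 toward the waste bin."""
    return -1 if label in RECYCLABLE else 1
```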

Web dashboard: We developed a web dashboard to display the live camera feed, as well as statistics for the embedded electronics, for testing and development of the robot.
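A minimal stats endpoint in the spirit of that dashboard, sketched with Python's standard-library HTTP server (field names are illustrative; the real dashboard also streams the live camera feed):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Telemetry the embedded side updates in place (illustrative fields).
TELEMETRY = {"battery_v": 0.0, "fps": 0.0, "items_sorted": 0}

def stats_payload(telemetry=TELEMETRY):
    """Serialize the current telemetry for the dashboard's polling endpoint."""
    return json.dumps(telemetry).encode()

class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/stats":
            body = stats_payload()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# On the rover this would run alongside the video stream:
# HTTPServer(("0.0.0.0", 8080), DashboardHandler).serve_forever()
```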

Challenges we ran into

Power management: Balancing the power draw of the Raspberry Pi, a live camera module, and multiple DC motors pulling stall current from a single mobile battery source required careful circuit planning to prevent voltage drops and sudden reboots. Ultimately, we split the load across multiple power sources.

Model optimization: Running computer vision algorithms on edge hardware is computationally expensive. We spent a lot of time optimizing our Python models to achieve a high enough frame rate so the robot could react to obstacles and trash while moving.
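Two of the cheapest wins on edge hardware are shrinking the inference resolution and only running the detector on every few frames; a sketch of both ideas (constants are illustrative, not our tuned values):

```python
def downscale_shape(w, h, max_side=320):
    """Pick a reduced inference resolution that preserves aspect ratio."""
    scale = min(1.0, max_side / max(w, h))
    return int(w * scale), int(h * scale)

def should_infer(frame_index, stride=3):
    """Run the detector only on every `stride`-th frame, reusing the
    previous detections in between to keep the control loop responsive."""
    return frame_index % stride == 0
```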

Design cycle time: Designing, manufacturing, and assembling the chassis consumed significantly more time than we had initially anticipated.

Material limitations: Our initial designs relied on specific materials that proved unavailable during the build, forcing us to creatively adapt the design to the parts we had on hand.

Accomplishments that we're proud of

Successfully deploying a trained object detection model directly onto edge hardware and getting it to run efficiently. Creating a seamless software loop in Python that handles complex image processing and low-level hardware GPIO control simultaneously. Building a physical mechanism capable of picking up and sorting items off the ground.

What we learned

Single-board computers are incredibly powerful for AI, but require smart software architecture to handle hardware interrupts and mechanical control smoothly. Optimizing computer vision code is just as important as the model accuracy itself when dealing with mobile, battery-powered systems. The hardware design cycle takes time, particularly when bridging the gap between high-level Python algorithms and physical execution. Iterating on the mechanical systems, from calibrating the servo arm's swing path to balancing the motorized see-saw hopper to handle varying weights and shapes of debris, proved that real-world engineering requires constant testing, failing, and refining.

What's next for WALL-3

Changing the intake system to an active intake that adapts based on the object it's collecting. A mock-up was made (shown below), but we were unable to implement it with the resources we had.
