Project Story: The Birth of GlideCart

The idea for GlideCart started with a simple observation: the world isn't built for everyone. While modern grocery stores are filled with technology, the physical act of shopping remains a massive barrier for those with mobility challenges. We wanted to see if we could use low-cost hardware and smart software to turn a standard chore into a seamless, dignified experience. Over a 12-hour sprint, we moved from a conceptual "Grocery Buddy" to a working robotic prototype that proves independence shouldn't be a luxury.


Inspiration

We were inspired by the 65 million people globally who require mobility assistance. For someone in a wheelchair, a grocery run is a physical puzzle: how do you navigate a mobility device, hold a shopping list, and push a heavy cart all at the same time? We wanted to build a solution that was user-centric rather than store-centric, giving the power back to the individual shopper.

What it does

GlideCart is an autonomous, vision-powered shopping assistant. Using a "Follow Mode" powered by OpenCV and ArUco tracking, the cart locks onto the user and maintains a set following distance, navigating aisles hands-free. It also features a "Detect Mode" with a custom-trained vision model that identifies grocery staples (like fruit and milk) as they are placed in the cart, automatically checking them off a real-time shopping list synced via Supabase.
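The distance-keeping behavior in Follow Mode can be sketched as two proportional control terms: steer toward the marker's horizontal offset, and drive forward or back until the marker's apparent size matches a target. This is a minimal sketch, assuming the marker's center and pixel area come from OpenCV's ArUco detector; the gains, target area, and `follow_command` name are illustrative, not the project's actual code.

```python
# Minimal sketch of Follow Mode's distance-keeping logic. On the real cart the
# inputs would come from cv2.aruco marker detection; here they are plain
# numbers so the control law itself is easy to test. All gains are assumed.

TARGET_AREA = 9000.0   # marker pixel area at the desired follow distance (assumed)
KP_TURN = 0.004        # steering gain per pixel of horizontal offset (assumed)
KP_DRIVE = 0.00005     # drive gain per pixel of area error (assumed)

def follow_command(marker_cx: float, marker_area: float, frame_width: int = 640):
    """Map the marker's center-x and pixel area to (left, right) wheel speeds in [-1, 1]."""
    # Steer toward the marker: positive offset means the user drifted right.
    turn = KP_TURN * (marker_cx - frame_width / 2)
    # Close the gap if the marker looks small (user far); back off if too close.
    drive = KP_DRIVE * (TARGET_AREA - marker_area)
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(drive + turn), clamp(drive - turn)
```

With the marker centered and at the target size, both wheels stop; a small marker (user walking away) drives both wheels forward, and an off-center marker produces a speed differential that turns the cart back toward the user.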

How we built it

  • Hardware: We used a Raspberry Pi 4 paired with a webcam for vision and DC motors with motor drivers for movement.

  • Vision & Control: We implemented OpenCV for the ArUco-based tracking logic and trained a lightweight, high-accuracy object detection model from scratch to handle item recognition.

  • App & Backend: The mobile interface is a native Android app (built with Gradle), providing a live camera feed and a dynamic checklist.

  • Data: We used Supabase as our SQL backend to keep the robot's detections and the user's phone app in real-time sync.
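The Detect Mode checklist flow above can be sketched in a few lines: when the detector reports a label, the matching list item is marked checked locally, and the change is pushed to Supabase so the phone's checklist updates in real time. This is a sketch under assumptions: the `shopping_list` table, its column names, and the `sync_to_supabase` helper are hypothetical, not the project's actual schema.

```python
# Sketch of the Detect Mode -> checklist flow. check_off() is pure Python so it
# can be tested on its own; sync_to_supabase() shows the shape of the Supabase
# call (table and column names are assumptions, not the real schema).

def check_off(shopping_list, detected_label):
    """Mark the first unchecked item whose name matches the detected label."""
    for item in shopping_list:
        if item["name"] == detected_label and not item["checked"]:
            item["checked"] = True
            return item
    return None  # label not on the list, or already checked off

def sync_to_supabase(client, item):
    """Push the checked state so the phone's realtime checklist updates."""
    # Assumed schema: a "shopping_list" table with "name" and "checked" columns.
    client.table("shopping_list").update({"checked": True}) \
          .eq("name", item["name"]).execute()

shopping = [{"name": "milk", "checked": False}, {"name": "apples", "checked": False}]
hit = check_off(shopping, "milk")  # milk row is now checked; "bread" would return None
```

In the prototype, a Supabase realtime subscription on the phone side would pick up the row update and re-render the checklist without polling.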

Challenges we ran into

The biggest hurdle was the hardware-software trade-off. Our initial detection model was too heavy and slow for the Raspberry Pi's processing power, so we had to pivot mid-hackathon and train a lighter, more efficient model that could handle real-time detection without noticeable latency. Additionally, calibrating the motors to respond smoothly to the vision-tracking data in a simulated crowded environment was a constant balancing act.

Accomplishments that we're proud of

We are incredibly proud of building a fully integrated ecosystem—from hardware motor control to a cloud-synced mobile app—in just 12 hours. Seeing the "Follow Mode" successfully lock onto a marker and move the robot in sync with a person for the first time was a huge win for the team.

What we learned

We learned the importance of hardware optimization; just because a model works on a powerful laptop doesn't mean it will work on an edge device like a Raspberry Pi. We also gained deep experience in real-time data streaming and the nuances of using ArUco markers for stable autonomous navigation.

What's next for GlideCart

While this is currently an MVP prototype, it is designed to be highly scalable. With more powerful hardware and advanced sensors (like LiDAR), GlideCart could handle high-traffic environments and complex obstacles, becoming a true game-changer in retail. We plan to expand its applications to hospital logistics and warehouse assistance, focusing on making the hardware even more affordable than current industrial robotics.
