Inspiration

The project was inspired by the challenge of managing lost belongings on the vast UMass Amherst campus and the need for a smarter alternative to traditional search methods.

What it does

Back2U utilizes CLIP-based semantic search to allow users to find lost items using natural language descriptions rather than exact keywords.

How we built it

We integrated OpenAI’s CLIP model with a React/Tailwind frontend and a Python/SQLite backend to map images and text into a shared vector space.
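Once images and text live in the same vector space, search reduces to ranking stored item embeddings by cosine similarity against the query embedding. This is a minimal sketch of that retrieval step, assuming the CLIP embeddings have already been computed and loaded from the database (the function names and synthetic vectors here are illustrative, not the project's actual code):

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    # CLIP retrieval uses cosine similarity, so unit-normalize first
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def rank_items(query_emb: np.ndarray, item_embs: np.ndarray, item_ids: list):
    """Rank stored item embeddings by cosine similarity to a query embedding.

    query_emb: 1-D embedding of the user's text description
    item_embs: 2-D array, one row per lost-item image embedding
    item_ids:  identifiers aligned with the rows of item_embs
    """
    sims = normalize(item_embs) @ normalize(query_emb)
    order = np.argsort(-sims)  # highest similarity first
    return [(item_ids[i], float(sims[i])) for i in order]
```

Because both modalities share one space, the same ranking function serves text-to-image search without any keyword matching.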

Challenges we ran into

Our primary hurdles were optimizing dual-perspective image encoding, combining embeddings from multiple photos of the same item into one representation, and tuning similarity thresholds so that results stayed relevant without discarding true matches.
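One way to read "dual-perspective encoding" is fusing the embeddings of two photos of the same item into a single vector, then accepting a match only above a tuned similarity cutoff. The sketch below assumes that interpretation; the fusion-by-averaging strategy and the threshold value are illustrative assumptions, not the project's confirmed design:

```python
import numpy as np

THRESHOLD = 0.25  # hypothetical cutoff; in practice tuned on real queries

def fuse_views(view_embs) -> np.ndarray:
    # Average the unit-normalized embeddings of each photo, then renormalize,
    # giving one vector that represents the item from both perspectives.
    mean = np.mean([e / np.linalg.norm(e) for e in view_embs], axis=0)
    return mean / np.linalg.norm(mean)

def is_match(query_emb: np.ndarray, item_view_embs, threshold=THRESHOLD) -> bool:
    # Cosine similarity between the text query and the fused item embedding.
    q = query_emb / np.linalg.norm(query_emb)
    return float(fuse_views(item_view_embs) @ q) >= threshold
```

Raising the threshold trades recall for precision, which is exactly the tuning problem described above.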

Accomplishments that we're proud of

We are proud of creating a seamless vector-based matching system that successfully translates complex AI processes into an intuitive user experience.

What we learned

We learned how to work with high-dimensional embeddings and how to build robust, AI-integrated full-stack applications.

What's next for Back2U

We plan to add user accounts and a claim flow so owners can verify and recover their items.

Built With

clip, python, react, sqlite, tailwind
