Inspiration
I was inspired to build this when I lost a bag at Newark Airport.
What it does
First, an airport worker uploads a picture of a lost bag and adds some metadata. Then a traveler describes their bag and is presented with the stored bags that best match the description.
How we built it
Images and text are embedded with CLIP and compared using cosine similarity. The vectors are stored in Postgres on GCP.
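The matching step can be sketched as follows. This is a minimal illustration, not the actual BagFinder code: in the real pipeline the query vector would come from CLIP's text encoder and the bag vectors from its image encoder (both live in the same embedding space), and the vectors would be loaded from Postgres rather than passed in as a dict.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_bags(query_vec: np.ndarray, bag_vecs: dict, k: int = 3) -> list:
    """Return the IDs of the k bags whose embeddings best match the query."""
    scored = sorted(
        bag_vecs.items(),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [bag_id for bag_id, _ in scored[:k]]
```

Because CLIP embeds images and text into a shared space, a text description like "red suitcase" can be ranked directly against image embeddings with no extra training.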
Challenges we ran into
I initially used Google Cloud Run, which did not play well with my database, so I had to switch to a virtual machine.
Accomplishments that we're proud of
I am proud that I got CLIP running on a very minimal VM without a GPU.
What we learned
I learned a lot about networking, HTTP, and Flask.
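A minimal sketch of what a Flask search endpoint for this kind of service might look like. The route name, request shape, and the stubbed `find_matches` helper are assumptions for illustration, not the actual BagFinder API; in the real service, `find_matches` would embed the description with CLIP and rank the stored vectors.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def find_matches(description: str) -> list:
    # Stub result; the real service would embed `description` with CLIP
    # and rank stored bag vectors by cosine similarity.
    return [{"bag_id": 1, "score": 0.92}]

@app.route("/search", methods=["POST"])
def search():
    # Expects JSON like {"description": "red suitcase with a blue tag"}.
    description = request.get_json().get("description", "")
    return jsonify(matches=find_matches(description))
```

Flask's built-in test client makes it easy to exercise a route like this without a running server, which is handy when the deployment target (Cloud Run or a VM) is still in flux.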
What's next for BagFinder
I plan to upgrade the VM to increase inference speeds.
DEMO NOTE: Enable mixed-content loading in your browser. The website will not work without it.
Built With
- clip
- flask
- gcp
- nextjs
