Inspiration

Have you ever wondered why you have to wait several days to get hold of an admissions officer just to get your questions answered? LLMs let us analyze text data quickly and efficiently, but off-the-shelf models don't necessarily know anything about a specific school, so we use Retrieval-Augmented Generation (RAG) to supply that missing context.

What it does

QClip lets users interact with an AI in a chat format. The chat uses RAG, which supplies the LLM with the latest relevant information from our school's website.
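The core RAG idea can be sketched in a few lines: embed the user's question, find the stored chunks most similar to it, and prepend them to the prompt. This is a minimal illustration only; the toy bag-of-words "embeddings", the sample chunks, and all function names here are hypothetical stand-ins for the real Azure embedding and completion calls.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real system would call an embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Knowledge base: chunks scraped from an admissions site (made-up content).
chunks = [
    "The application deadline for fall admission is January 15.",
    "Campus tours run Monday through Friday at 10am and 2pm.",
    "Tuition for the 2024 academic year is posted on the bursar page.",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(question, k=1):
    """Return the k stored chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

def build_prompt(question):
    """Prepend retrieved context so the LLM answers from current data."""
    context = "\n".join(retrieve(question))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When is the application deadline?"))
```

In the real app, `build_prompt`'s output would be sent to the chat completion endpoint instead of printed.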

How we built it

We first scraped and cleaned all of the admissions-related content from our school's website. We then uploaded the text to Azure and vectorized it. Our Flask app connects to our Azure deployment and makes the necessary calls. Finally, our React front end calls the Flask app, which in turn fetches a completion.
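The first half of that pipeline (scrape, clean, chunk) can be sketched as below. The HTML snippet is made up, and the actual embedding and Azure upload calls are replaced by a comment, since those depend on our deployment.

```python
import re

def clean(html):
    """Strip tags and collapse whitespace from scraped page content."""
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def chunk(text, max_words=50):
    """Split cleaned text into fixed-size word chunks for vectorization."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

page = "<h1>Admissions</h1><p>Apply by January 15. Visit the campus weekdays.</p>"
for c in chunk(clean(page)):
    # In the real pipeline, each chunk would be embedded and uploaded to the
    # Azure vector index here (calls omitted).
    print(c)
```

Chunking before vectorizing keeps each stored vector focused on one passage, which makes retrieval more precise than embedding whole pages.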

Challenges we ran into

We couldn't get LangChain working, the starter code we were given didn't run, and we ran into constant issues with Azure.

Accomplishments that we're proud of

We got search working end to end: the app answers questions using the latest data available on our school's website.

What we learned

We learned what RAG is and how to implement it with Azure tools, how to work with Docker containers and environments, and how to use the Azure CLI and SDK.

What's next for QClip

Seed round?
