Inspiration
Going through my college's course catalog was overwhelming: it ran over 300 pages' worth of classes. Just finding a specific course I wanted was incredibly tedious, which is where I came up with the idea of having AI language models analyze these documents for us instead.
What it does
This app extracts the text of an uploaded PDF document and lets an AI model answer questions about it, saving the time otherwise spent searching through long documents.
How we built it
I used Next.js, a React framework with built-in API routes and strong TypeScript and TailwindCSS support. The main component was the ChatGPT API, which handled the document queries, alongside the Twilio API and other services.
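As a rough illustration of how a Next.js API route could forward the extracted PDF text and a user question to the ChatGPT API: this is a hedged sketch, not the project's actual code, and the route path, the `buildMessages` helper, and the `gpt-3.5-turbo` model choice are all assumptions.

```typescript
// Hypothetical Next.js API route (e.g. pages/api/ask.ts) that grounds the
// ChatGPT API in the uploaded document's text. Names here are illustrative.

type ChatMessage = { role: "system" | "user"; content: string };

// Build the messages array for the chat completions endpoint, instructing
// the model to answer only from the uploaded document.
export function buildMessages(pdfText: string, question: string): ChatMessage[] {
  return [
    { role: "system", content: `Answer using only this document:\n${pdfText}` },
    { role: "user", content: question },
  ];
}

// Minimal route handler (pages-router signature, untyped for brevity).
export default async function handler(req: any, res: any) {
  const { pdfText, question } = req.body;
  const r = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // assumed model; the write-up only says "ChatGPT"
      messages: buildMessages(pdfText, question),
    }),
  });
  const data = await r.json();
  res.status(200).json({ answer: data.choices[0].message.content });
}
```

Keeping the prompt-building step in its own function makes it easy to swap in chunking or history handling later without touching the route itself.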
Challenges we ran into
The model is limited in how many tokens it can process per request, and it sometimes failed even after the PDF text had been parsed. It was also tough to have the AI retain previous conversation history to improve usability. Luckily, I found a package for Next.js that could store sessions between requests.
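One common way around the token limit is to split the document into chunks that each fit under the budget. A minimal sketch, assuming a rough characters-per-token heuristic (~4 characters per token); a real tokenizer such as tiktoken would be more accurate, and this helper is not part of the project's code:

```typescript
// Rough heuristic: on average ~4 characters per token for English text.
// This is an assumption for illustration, not an exact measure.
const CHARS_PER_TOKEN = 4;

// Split long PDF text into chunks that each fit under maxTokens,
// so every chunk can be sent in its own request to the model.
export function chunkText(text: string, maxTokens: number): string[] {
  const maxChars = maxTokens * CHARS_PER_TOKEN;
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}
```

Each chunk can then be queried separately (or summarized first), trading one over-limit request for several small ones.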
Accomplishments that we're proud of
I had no experience with Next.js whatsoever, so learning things on the go was a challenging feat that I'm proud of.
What we learned
How APIs work behind the scenes, and how to build sleek websites with Next.js and Tailwind.
What's next for ScholarAI
Possibly use a Supabase database to store user login info as well as uploaded PDFs. I may also expand support to other formats such as .docx. Since this was essentially built on ChatGPT, I would like to experiment with models like GPT-4.
Built With
- chatgpt
- next.js
- react
- tailwindcss
- typescript