Inspiration

We live in an era of infinite scroll but finite attention. Every day, millions of students and developers doomscroll through TikToks and Reels, encountering nuggets of gold (a coding tip, a physics explanation, a history fact) buried in a sea of noise. But two problems persist:

Retention: That valuable video gets lost in the feed instantly.

Misinformation: "Educational" content is often unverified, leading to the spread of false science and fake news.

We asked ourselves: What if we could turn this short-form noise into a permanent, verified treasure chest of knowledge? That’s how Chestify was born. We wanted to build a tool that doesn't just save videos, but actually understands and vets them, contributing to SDG 4: Quality Education.

What it does

Chestify is an intelligent video curation engine. When a user pastes a URL (a YouTube Short, Reel, or TikTok):

The "Smart Ingest": Our Python backend extracts the transcript and metadata.

The "BS Detector": Google Gemini 1.5 Flash analyzes the claims against trusted sources (Grounding). It flags content as "Verified" (Cyan) or "Misleading" (Orange).

The Treasure Chest: It organizes the content into a beautiful, masonry-style library where users can chat with their videos through a retrieval-augmented generation (RAG) pipeline.
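The "chat with your videos" step can be sketched end to end. This is a minimal, runnable stand-in, not our production code: word-overlap scoring replaces real embedding search, and the grounded prompt is returned instead of being sent to Gemini (all function names here are illustrative):

```python
# Toy sketch of the RAG chat flow: retrieve the most relevant saved
# transcript, then build a prompt grounded in it. Word overlap stands in
# for embedding similarity so the example runs with no dependencies.

def retrieve(question: str, transcripts: dict[str, str], k: int = 1) -> list[str]:
    """Rank saved transcripts by shared words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        transcripts.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [video_id for video_id, _ in scored[:k]]

def answer_prompt(question: str, transcripts: dict[str, str]) -> str:
    """Build the grounded prompt; production would send this to the LLM."""
    top = retrieve(question, transcripts)[0]
    return (
        f"Answer using only this transcript ({top}): {transcripts[top]}\n"
        f"Q: {question}"
    )
```

The key design point is the grounding: the model only sees transcripts the user actually saved, which keeps answers tied to their own library.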

How we built it

We built Chestify using a hybrid architecture to balance a high-performance UI with powerful AI processing:

Frontend: Built with Next.js 14 and Tailwind CSS. We leaned into a "Dark Mode Tech" aesthetic, using Framer Motion for the custom direction-aware hover cards and glassmorphism overlays.
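The "direction aware" trick is just geometry: take the pointer's angle relative to the card's center and map it to the nearest edge. The following is a language-neutral sketch of that math in Python (our actual implementation lives in the TSX hover components; the function name is ours for illustration):

```python
import math

def entry_direction(x: float, y: float, width: float, height: float) -> str:
    """Classify which edge of a card the pointer entered from.

    Screen coordinates: y grows downward, so positive angles point at the
    bottom edge. The animation then slides the overlay in from that edge.
    """
    angle = math.degrees(math.atan2(y - height / 2, x - width / 2))
    if -45 <= angle < 45:
        return "right"
    if 45 <= angle < 135:
        return "bottom"
    if -135 <= angle < -45:
        return "top"
    return "left"
```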

Backend: A Python FastAPI worker handles the heavy lifting of video extraction (yt_dlp) and serves as the bridge to the AI.
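The extraction step looks roughly like this. The yt-dlp call is real API (`YoutubeDL.extract_info` with `download=False`), but the shaped field names are our own storage schema, shown here for illustration:

```python
def fetch_info(url: str) -> dict:
    """Pull raw metadata (no download) via yt-dlp. Needs network access."""
    import yt_dlp  # imported lazily so the pure helper below has no deps
    with yt_dlp.YoutubeDL({"skip_download": True, "quiet": True}) as ydl:
        return ydl.extract_info(url, download=False)

def shape_metadata(info: dict) -> dict:
    """Reduce yt-dlp's huge info dict to the handful of fields we store."""
    return {
        "title": info.get("title", "Untitled"),
        "uploader": info.get("uploader", "unknown"),
        "duration_s": info.get("duration", 0),
        "platform": info.get("extractor_key", "generic").lower(),
    }
```

Keeping `shape_metadata` pure made it easy to unit-test the schema without hitting the network.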

AI Engine: We leveraged Google Gemini 1.5 Flash for its speed and long-context window to process video transcripts and perform fact-checking.
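The call shape, sketched with the google-generativeai SDK. The prompt wording here is a simplified stand-in for our production prompt, and `parse_verdict` shows how a free-text reply maps onto the two card states:

```python
def check_claims(transcript: str, api_key: str) -> str:
    """Ask Gemini 1.5 Flash for a verdict on a transcript's claims."""
    import google.generativeai as genai  # lazy: parse_verdict stays dependency-free
    genai.configure(api_key=api_key)
    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = (
        "Fact-check the claims in this transcript. Reply with exactly one "
        "word, VERIFIED or MISLEADING, on the first line.\n\n" + transcript
    )
    return model.generate_content(prompt).text

def parse_verdict(reply: str) -> str:
    """Map the model's first line to the state the UI colors cyan/orange."""
    first = reply.strip().splitlines()[0].upper()
    return "Verified" if "VERIFIED" in first else "Misleading"
```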

Database: Firebase Firestore allows for real-time status updates (showing the "Processing..." skeleton state turning into a live card instantly).
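The backend only ever writes a small status field; the frontend listener does the rest. A sketch of that write path (the status names mirror the states above, but the exact Firestore schema and helper names are illustrative; `db` is a `firebase_admin` Firestore client):

```python
# Allowed status transitions the Next.js listener reacts to.
TRANSITIONS = {
    "queued": {"processing"},
    "processing": {"complete", "error"},
    "complete": set(),
    "error": {"queued"},  # allow retry
}

def can_transition(current: str, new: str) -> bool:
    """Guard against out-of-order writes from the worker."""
    return new in TRANSITIONS.get(current, set())

def set_status(db, video_id: str, current: str, new: str) -> None:
    """Write a validated status change; the Firestore listener on the
    frontend picks it up and swaps the skeleton for a live card."""
    if not can_transition(current, new):
        raise ValueError(f"illegal status change {current} -> {new}")
    db.collection("videos").document(video_id).update({"status": new})
```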

Challenges we faced

The "Glass" Readability Struggle: One of our biggest UI hurdles was the "Direction Aware" hover effect. We wanted a premium frosted glass look, but it kept making the text unreadable over complex video thumbnails. We had to iterate several times to find the perfect balance of backdrop-blur and opacity to ensure accessibility without losing the "Cyber" vibe.

Prompt Engineering for Fact-Checking: Getting the AI to strictly distinguish between "Simplified Explanation" and "Misinformation" was tricky. We had to refine our system prompts to ensure the "BS Detector" was fair but firm.
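The lesson from that iteration was to put the simplification-vs-misinformation distinction directly into the rubric. A representative sketch of the prompt builder (wording condensed, not our verbatim production prompt):

```python
def build_bs_detector_prompt(transcript: str) -> str:
    """Build the fact-check prompt. The rubric is the key: the model must
    not punish simplification, only genuine contradiction of evidence."""
    return (
        "You are a strict but fair fact-checker for short educational videos.\n"
        "Rubric:\n"
        "- A SIMPLIFIED but directionally correct explanation is VERIFIED.\n"
        "- Only claims that contradict established evidence are MISLEADING.\n"
        "- When the evidence is genuinely unsettled, do not flag it.\n"
        "Reply with VERIFIED or MISLEADING plus a one-sentence reason.\n\n"
        f"Transcript:\n{transcript}"
    )
```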

Real-time State Sync: Syncing the Python backend status (processing/error/complete) with the Next.js frontend in real time required careful state management with Firebase listeners, so the user never felt like the app had frozen.

Accomplishments that we're proud of

The "BS Detector": Watching the AI correctly flag a fake health claim about "Alkaline Water" for the first time was a magical moment.

The UI Polish: We didn't just build a tool; we built an experience. From the "Inferno" gradient themes to the fluid animations, it feels like a production-ready SaaS product.

RAG Implementation: Successfully allowing users to "Chat" with their video library showed us the true power of personalized AI education.

Built With

next.js, tailwind-css, framer-motion, python, fastapi, yt-dlp, google-gemini, firebase-firestore
