Inspiration
The Cognitive Crisis of Short-Form Content
75% of viral content on social media is misinformation, deepfakes, or outright lies. But the real problem isn't just the lies themselves; it's how they're consumed.
We discovered through research that when people watch short-form content (TikTok, Instagram Reels, YouTube Shorts), their brains experience what we call "incomplete cognitive loops." New information arrives faster than the brain can process the previous content. Users are fed claims, narratives, and deepfakes without the mental bandwidth to pause, process, realize they've been deceived, or react appropriately. By the time critical thinking kicks in, they've already scrolled past and the misinformation has been absorbed.
The Recent Deepfake Crisis
We were particularly inspired by a recent case study: a viral Buddhist monk with millions of followers across social media platforms turned out to be a deepfake. Nobody knew. The community was engaged, sharing, and promoting content created by a non-existent person. This highlighted a terrifying gap: we have no friction between seeing content and trusting it.
Our Mission
We wanted to build a tool that:
- Stops the scroll with friction - Makes it effortless to verify claims in real-time
- Detects deepfakes - Uses advanced biometric analysis to catch synthetic human content
- Provides sources - Shows users exactly where claims come from and their credibility
- Educates the public - Helps audiences understand media biases, manipulation tactics, and how to think critically
The goal: Put truth-checking power directly into users' hands, making verification as easy as sharing.
What it does
VeriVerse is an intelligent fact-checking system that combats misinformation and deepfakes in short-form video content. Here's what it delivers:
Fact-Checking in Real-Time
- Automatically extracts and transcribes audio from short-form videos
- Processes transcripts using Google's Gemini API to identify claims and fact-check them instantly
- Pulls credible sources and citations to back up verdicts
- Displays results in a non-intrusive overlay while you continue scrolling
Deepfake Detection
- Integrates the Presage API to analyze facial biometrics and detect AI-generated or manipulated content
- Provides confidence scores on whether humans in videos are real or synthetic
- Flags potentially dangerous deepfakes before they spread
Multilingual Support
- Live translation of transcripts using Google's translation API
- Fact-checking works across 100+ languages
- Global reach for a global problem
Educational Transparency
- Shows users exactly why content is false (broken down by claim type)
- Explains the sources being used to verify information
- Teaches users to recognize patterns of misinformation and bias
Smart Caching & Performance
- Supabase database stores verified claims to avoid redundant API calls
- Instant results for popular videos
- Reduced API costs and faster response times
How we built it
Architecture Overview
Chrome Extension → Transcript Extraction → Backend API → Gemini API + Presage SDK (fact-check & deepfake analysis) → Supabase Cache → Results
Frontend (Chrome Extension - Manifest v3)
- Content script captures video metadata and triggers transcript extraction
- Communicates with backend via secure API calls
- Displays verification results in a clean, non-intrusive sidebar
- Handles real-time user interactions (one-click verify, expand details)
Backend (FastAPI + Python)
- Receives transcript extraction requests from the extension
- Orchestrates calls to Google's Gemini API for fact-checking
- Manages Presage SDK integration for deepfake biometric analysis
- Implements request queuing to handle concurrent verifications
- Returns structured results with confidence scores and source citations
Transcript Extraction & Processing
- Captures audio directly from short-form video players
- Uses the YouTube API for high-accuracy transcription
- Falls back to ElevenLabs audio-to-text transcription when the YouTube API provides no transcript (a slightly slower path)
- Implements live translation via Google Translate API for multilingual support
- Cleans and normalizes transcripts for analysis
AI-Powered Fact-Checking (Google Gemini API)
- Analyzes transcribed claims in context
- Identifies key factual assertions within longer narratives
- Cross-references against known misinformation patterns
- Returns verdicts with explanation and source links
- Supports custom prompts for detecting specific lie types (statistics, quotes, predictions)
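A rough sketch of the prompt-and-parse plumbing around the Gemini call: the wording of the prompt and the JSON schema are assumptions for illustration, not the production prompt. The parser tolerates Gemini wrapping its answer in a ```json fence, which models commonly do.

```python
import json

PROMPT_TEMPLATE = (
    "You are a fact-checker. For the transcript below, list each factual "
    "claim, a verdict (true / false / unverifiable), a one-sentence "
    "explanation, and source URLs, as JSON: "
    '{{"claims": [{{"claim": "...", "verdict": "...", '
    '"explanation": "...", "sources": []}}]}}\n\n'
    "Transcript:\n{transcript}"
)

def build_prompt(transcript: str) -> str:
    return PROMPT_TEMPLATE.format(transcript=transcript)

def parse_verdicts(raw: str) -> list[dict]:
    # Strip an optional Markdown code fence before decoding the JSON.
    raw = raw.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(raw)["claims"]
```

The prompt string would be passed to the Gemini API and the raw response fed back through `parse_verdicts` before rendering in the sidebar.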
Deepfake Detection (Presage API)
- Analyzes facial biometrics in real-time
- Detects synthetic or manipulated human features
- Identifies biological inconsistencies (eye movement, skin texture, micro-expressions)
- Provides confidence scores and breakdown of detection indicators
- Flags suspicious content before users engage
Data Persistence (Supabase)
- Caches verified claims to avoid redundant API calls
- Stores user verification history (optional, privacy-respecting)
- Tracks misinformation patterns and trending false claims
- Manages source credibility scores and fact-checker partnerships
Tech Stack Summary
- Frontend: Chrome Extension APIs (Manifest v3), JavaScript
- Backend: FastAPI, Python, Pydantic
- APIs: Google Gemini, Google Speech-to-Text, Google Translate, Presage SDK
- Database: Supabase (PostgreSQL)
- Deployment: Vercel (API), GitHub (version control)
Challenges we ran into
1. Presage SDK Integration - The Biggest Hurdle
The Presage API uses C++ under the hood, and none of us had prior experience with C++. This created multiple problems:
- Had to learn C++ binding patterns and memory management
- Required setting up proper build environments and dependencies
- Struggled with runtime library linking and compilation errors
Solution: Spent significant time reading documentation, studying C++ wrapper examples, and building proper Python bindings. We eventually got it working, but it was our most time-intensive challenge.
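One common pattern for bridging a C++ SDK into Python is `ctypes` against a C-compatible shim. The library path and symbol name below are purely hypothetical, not the real Presage SDK surface; the sketch only shows the binding mechanics.

```python
import ctypes

def load_presage(path: str = "libpresage.so") -> ctypes.CDLL:
    """Load a (hypothetical) Presage shared library and declare one entry
    point; the symbol name and signature here are illustrative only."""
    lib = ctypes.CDLL(path)
    # Declare argument/return types so ctypes marshals data correctly
    # instead of guessing (the usual source of silent corruption).
    lib.presage_score_frame.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
    lib.presage_score_frame.restype = ctypes.c_double
    return lib
```

Declaring `argtypes`/`restype` explicitly was exactly the kind of memory-management detail that made this integration time-consuming.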
2. Biometric Data Interpretation
Deepfake detection relies on analyzing subtle biometric signals we'd never worked with before:
- Understanding facial landmarks and micro-expressions
- Interpreting skin texture analysis and light reflection patterns
- Making sense of eye movement consistency and pupil dilation
Solution: Consulted research papers on deepfake detection, studied Presage's documentation extensively, and learned how to translate biometric signals into user-friendly insights.
3. Audio Capture Complexity
We initially wanted to capture both audio AND video for more comprehensive analysis:
- Different platforms (TikTok, Instagram, YouTube) have different audio APIs and restrictions
- Some platforms restrict audio access for copyright/security reasons
- Real-time audio streaming added significant latency
Solution: Pivoted to transcript-based analysis first, with audio as a secondary data source. This improved speed and reduced platform compatibility issues.
4. API Rate Limiting & Performance
Multiple APIs with different rate limits created bottlenecks:
- Gemini API throttles during high traffic
- Speech-to-Text API has quota limits
- Presage SDK processes slower than expected at scale
Solution: Implemented intelligent caching in Supabase, request queuing, and batch processing to reduce redundant calls.
5. Latency Management
Users expect instant results while scrolling. Processing transcripts, calling multiple APIs, and running deepfake detection all take time:
- Initial implementation took 8-12 seconds per verification
- Users won't wait that long
Solution: Implemented parallel API calls, cached results, and progressive rendering (show fact-check results first, deepfake analysis after). Final latency: 2-3 seconds average.
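The parallel-calls-plus-progressive-rendering approach can be sketched with `asyncio`: both analyses start together, and results are surfaced in completion order so the faster fact-check renders before the slower deepfake verdict. The sleep durations are placeholders for the real API round-trips.

```python
import asyncio

async def fact_check(transcript: str) -> dict:
    await asyncio.sleep(0.01)   # stand-in for the Gemini round-trip
    return {"kind": "fact_check", "verdict": "false"}

async def deepfake_scan(video_id: str) -> dict:
    await asyncio.sleep(0.03)   # stand-in for the slower Presage pass
    return {"kind": "deepfake", "synthetic": True}

async def run_verification(video_id: str, transcript: str) -> list[dict]:
    # Launch both analyses concurrently, then yield whichever finishes
    # first so the UI can render progressively.
    tasks = [asyncio.ensure_future(fact_check(transcript)),
             asyncio.ensure_future(deepfake_scan(video_id))]
    results = []
    for done in asyncio.as_completed(tasks):
        results.append(await done)
    return results
```

In the extension, each appended result maps to a sidebar update rather than a single end-of-pipeline render.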
Accomplishments that we're proud of
🏆 Built a Working MVP with Advanced AI & Biometric Analysis in 48 Hours
- Full Chrome extension with live API integration
- Integrated two complex external APIs (Gemini + Presage)
- Deployed backend and extension, fully functional
- This was genuinely ambitious scope for a weekend hackathon
🏆 Solved the C++ Integration Problem
- Successfully integrated Presage SDK (C++ based) into a Python/JavaScript environment
- Built proper language bindings without prior C++ experience
- This alone was a major technical accomplishment
🏆 Multilingual Fact-Checking at Scale
- Supports 100+ languages for claim analysis and source retrieval
- Live translation works seamlessly without compromising accuracy
- Global fact-checking without geographic limitations
🏆 Smart Caching System
- Reduced redundant API calls by 60% through Supabase caching
- Tracks trending false claims in real-time
- Improves response time for popular/repeated claims
🏆 Deepfake Detection That Actually Works
- Successfully detects synthetic faces with high confidence
🏆 Clean, Non-Intrusive UX
- Verification happens without breaking the scrolling experience
- Sidebar overlay doesn't interrupt user flow
- Results are clear and actionable, not overwhelming
🏆 Thoughtful Privacy Design
- All analysis can happen locally (no user tracking required)
- Optional history storage (users control their data)
- Transparent about which APIs we use and why
What we learned
C++ Integration Requires Planning - Integrating C++ libraries into Python/JavaScript isn't trivial. Would benefit from pre-research and environment setup time in future projects.
Presage API is Powerful but Specialized - The biometric analysis is incredibly accurate but has a steep learning curve. Deepfake detection is not just "run the API and get results"—you need to understand the underlying signals.
Parallel Work Saves Time - While one person tackled the Presage SDK, others built the Chrome extension and backend independently. Clear separation of concerns = faster delivery.
Speed > Perfection - A 95% accurate result in 3 seconds beats a 99% accurate result in 15 seconds. Users will forgive occasional missteps if the tool is fast.
What's next for VeriVerse
Phase 1: Platform Expansion (3 months)
- Expand to Instagram, Facebook, LinkedIn - Wherever short-form content lives
- Build mobile extension - Overlay detection on phone scrolling (the real problem area)
- Support livestream analysis - Real-time fact-checking during live broadcasts
- Polish deepfake visualizations - Better educational breakdowns of what makes content suspicious
Phase 2: Advanced AI Features (6 months)
- Fine-tune deepfake detection - Train custom models on platform-specific deepfake patterns
- Bias & sentiment analysis - Flag emotionally manipulative or sensationalized content
- Image/video fact-checking - Extend beyond transcripts to visual misinformation
- Deepfake breakdown feature - Click a button to see exactly which features indicated a deepfake (eye consistency, skin texture, etc.)
Phase 3: Community & Authority (9 months)
- Partner with fact-checkers - Snopes, PolitiFact, AFP Fact Check integrations
- Researcher dashboard - Let journalists and researchers track misinformation trends
- Community trust scores - Crowdsourced credibility assessment of sources
- Media literacy education - In-app lessons on how to spot deepfakes and lies
Phase 4: Scale & Monetization (12 months)
- Freemium model: Basic verification free, premium for advanced analytics
- B2B partnerships: License to news organizations, universities, platforms
- API for developers: Let other apps integrate VeriVerse fact-checking
- Ad-free premium experience for power users