🧠 Inspiration
AI is incredibly powerful — but trust is broken.
Developers constantly copy answers from AI tools, then spend more time verifying them than it would have taken to research manually. Enterprises hesitate to adopt autonomous AI systems because hallucinations can cause real financial, security, or operational damage.
We asked a simple question:
What if AI were not allowed to act unless it could prove itself first?
That idea became LiveProof AI — a verification-first execution engine that searches live information, structures evidence, computes reliability, and only then performs actions.
Trust is not a feature. It’s infrastructure.
🚀 What It Does
LiveProof AI is a citation-backed AI agent that:
• Uses You.com APIs to fetch real-time, citation-backed web data
• Extracts structured claims from sources
• Builds a “Claim Graph” stored in Sanity
• Calculates a reliability score
• Only executes safe actions when evidence meets a confidence threshold
Key features:
✔ Live citation panel
✔ Reliability scoring engine
✔ Structured knowledge graph
✔ Topic comparison over time
✔ Safe execution mode (code snippets, configs, PDF reports)
✔ Deployed on Akamai Linode Kubernetes Engine
If reliability is low, the system asks clarifying questions instead of acting.
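The score-then-gate behavior described above can be sketched in Python. The formula, field names, and the 0.8 threshold here are assumptions for illustration, not the engine's actual weighting:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    supporting_sources: int    # citations that agree with the claim
    contradicting_sources: int  # citations that disagree

def reliability_score(claims: list[Claim]) -> float:
    """Toy score: the fraction of source votes that support the claims.

    The real engine's weighting is not public; this is a placeholder.
    """
    support = sum(c.supporting_sources for c in claims)
    total = support + sum(c.contradicting_sources for c in claims)
    return support / total if total else 0.0

CONFIDENCE_THRESHOLD = 0.8  # assumed gating value

def decide(claims: list[Claim]) -> str:
    """Execute only when evidence clears the threshold; otherwise ask."""
    if reliability_score(claims) >= CONFIDENCE_THRESHOLD:
        return "execute"
    return "ask_clarifying_question"
```

With well-supported claims the gate opens; with thin or contradicted evidence it falls back to a clarifying question instead of acting.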
It’s not just AI answering questions — it’s AI proving its reasoning.
🏗 How We Built It
Frontend:
Next.js + Tailwind for interactive UI
Claim graph explorer + evidence panel
Session comparison dashboard
Backend:
FastAPI verification pipeline
You.com Search API integration for live citations
Reliability scoring engine
Safe execution module
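A plain-Python sketch of the verify → score → execute flow that a FastAPI route would wrap. Every function name and the response shape here is an assumption; the You.com call and scoring are stubbed:

```python
# Sketch of the verification pipeline's core handler. In the real service
# a FastAPI route would wrap verify(); these stand-ins keep it runnable.

def search_live(query: str) -> list[dict]:
    """Stand-in for the You.com Search API call; returns cited snippets."""
    return [{"snippet": f"result for {query}", "url": "https://example.com"}]

def extract_claims(results: list[dict]) -> list[dict]:
    """Turn raw snippets into structured claims with their citations."""
    return [{"claim": r["snippet"], "citations": [r["url"]]} for r in results]

def score(claims: list[dict]) -> float:
    """Placeholder: the real scorer weighs agreement across sources."""
    return 1.0 if claims else 0.0

def verify(query: str, threshold: float = 0.8) -> dict:
    """Run the full pipeline and gate the action on the reliability score."""
    claims = extract_claims(search_live(query))
    s = score(claims)
    return {
        "claims": claims,
        "reliability": s,
        "action": "execute" if s >= threshold else "clarify",
    }
```

Keeping the pipeline as plain functions like this makes each stage testable on its own before it is mounted behind an HTTP endpoint.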
Structured Content:
Sanity used as structured backend
Claims, sessions, topics, and sources modeled relationally
MCP Server used to define schema and query content during development
GROQ queries power “compare sessions” and “top sources” features
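A “top sources” GROQ query could be issued against Sanity's HTTP query API like this. The document types and field names (`source`, `claim`, `citedBy` semantics via `references`) are assumptions about the schema, and the project ID is a placeholder:

```python
from urllib.parse import quote

# Hypothetical GROQ: the five sources cited by the most claims.
GROQ = '''*[_type == "source"]{
  title, url,
  "claimCount": count(*[_type == "claim" && references(^._id)])
} | order(claimCount desc)[0...5]'''

def query_url(project_id: str, dataset: str,
              api_version: str = "v2021-10-21") -> str:
    """Build the Sanity HTTP query-API URL for the GROQ query above."""
    base = f"https://{project_id}.api.sanity.io/{api_version}/data/query/{dataset}"
    return f"{base}?query={quote(GROQ)}"
```

The same endpoint shape serves the session-comparison queries; only the GROQ string changes.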
Infrastructure:
Deployed on Akamai Linode Kubernetes Engine (LKE)
Containerized with Docker
Optional GPU worker for embeddings/inference
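A minimal sketch of what the API's Deployment on LKE could look like; the image name, port, labels, and resource numbers are placeholders, not the project's real manifests:

```yaml
# Sketch of the API Deployment for LKE -- values are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liveproof-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: liveproof-api
  template:
    metadata:
      labels:
        app: liveproof-api
    spec:
      containers:
        - name: api
          image: registry.example.com/liveproof-api:latest  # placeholder image
          ports:
            - containerPort: 8000  # assumed uvicorn/FastAPI port
          resources:
            requests:
              cpu: "250m"
              memory: "512Mi"
```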
Everything is open-source and reproducible.
⚡ Challenges We Ran Into
• Normalizing live web search results into structured claims
• Designing a schema that unlocks meaningful cross-session comparisons
• Balancing reliability scoring without overcomplicating the UX
• Ensuring safe execution without exposing security risks
• Deploying GPU workloads efficiently on Kubernetes
The hardest part wasn’t building AI — it was building trustworthy AI.
🏆 Accomplishments We’re Proud Of
✔ End-to-end verify → score → execute pipeline
✔ Structured claim graph stored and queryable in Sanity
✔ Historical topic comparison feature
✔ Real-time citation-backed answers
✔ Kubernetes deployment on Akamai Cloud
✔ Clean, startup-ready architecture
We didn’t just build a demo — we built infrastructure.
📚 What We Learned
• Structured content unlocks features flat text never could
• Developers don’t just want answers — they want confidence
• Reliability scoring dramatically improves trust
• AI systems need guardrails to scale in enterprise
Trust-first AI is a competitive advantage.
🔮 What’s Next
• VS Code extension for real-time code verification
• Slack / GitHub PR integration
• Enterprise audit logs for compliance teams
• Advanced contradiction detection
• Policy-based execution controls
• SaaS model for dev teams and enterprises
LiveProof AI can become the trust layer between AI and real-world execution.
💰 Why This Could Become a Startup
• AI governance market is exploding
• Enterprises demand explainability
• Dev tools market is massive
• Citation-backed reasoning increases adoption
• Execution gating reduces liability
Monetization model:
• $25–$49 per developer/month
• Enterprise API tier
• Compliance add-ons
This is not a feature. It’s a category.
Built With
- fastapi
- next.js
- tailwind
- you.com