Agent2UI-driven educational platform — the AI doesn't just chat, it builds your entire learning interface
undefined ai replaces traditional, static learning interfaces with a dynamic, AI-generated experience. The UI remains undefined until the user interacts through a single floating chat command center.
A dual-agent AI system powered by MiniMax and Gemini ingests user input (text, audio, PDFs, URLs) and generates the optimal UI format: interactive mind maps, data tables, timelines, quizzes, or rich text, tailored to the content structure and user needs.
```
Upload (PDF / URL / Voice) --> Knowledge Extraction --> AI Format Decision --> Dynamic UI Render
```
The AI decides what to teach and how to present it. Every surface is generated at runtime; nothing is hard-coded.
The AI produces full interactive learning surfaces via a custom Agent-to-UI protocol with 25 dynamic components (mind maps, quizzes, timelines, data tables, and more), rendered in real time through a custom React renderer.
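The core dispatch idea behind such a protocol can be sketched in a few lines. This is an illustrative sketch only: the `type`/`props`/`children` shape and the component names below are assumptions, not the actual A2UI schema, and the real renderer is React rather than Python.

```python
# Hypothetical sketch: the agent emits a JSON component tree; a registry
# maps component types to render functions. The schema here is invented
# for illustration and is not the project's actual A2UI protocol.

def render_text(props, children):
    return props.get("value", "")

def render_column(props, children):
    return "\n".join(children)

REGISTRY = {"Text": render_text, "Column": render_column}

def render(node):
    # Recursively render children first, then dispatch on component type.
    children = [render(child) for child in node.get("children", [])]
    return REGISTRY[node["type"]](node.get("props", {}), children)

tree = {
    "type": "Column",
    "children": [
        {"type": "Text", "props": {"value": "Photosynthesis"}},
        {"type": "Text", "props": {"value": "Light reactions"}},
    ],
}
print(render(tree))  # two lines of text
```

The same registry pattern scales to all 25 component types: adding a component means registering one more entry, without touching the dispatch loop.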
A main chatbot agent (LangGraph ReAct) orchestrates the conversation with five tools. When UI is needed, it delegates to a specialized UI Agent that handles element-level CRUD: designing layouts, attaching source facts, and pushing updates live.
Documents are distilled rather than stored raw. The pipeline: raw text to atomic fact extraction (LLM) to iterative compression into a multi-level hierarchy. Multiple sources are merged via cross-tree compression. Every compressed fact traces back to its original source text.
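The compression idea can be sketched as follows, assuming facts are merged pairwise and every node keeps the indices of its source facts; the real pipeline uses an LLM to write each summary, and `compress` with its field names is hypothetical.

```python
# Illustrative sketch (not the actual pipeline) of hierarchical fact
# compression with source traceability: leaves are atomic facts, and each
# higher-level node records which original facts it was derived from.

def compress(facts, group_size=2):
    """Build a multi-level hierarchy; each node records source indices."""
    level = [{"text": f, "sources": [i]} for i, f in enumerate(facts)]
    levels = [level]
    while len(level) > 1:
        merged = []
        for i in range(0, len(level), group_size):
            group = level[i:i + group_size]
            merged.append({
                # A real system would call an LLM to summarise the group;
                # here we just join the texts.
                "text": " + ".join(n["text"] for n in group),
                "sources": sorted(s for n in group for s in n["sources"]),
            })
        level = merged
        levels.append(level)
    return levels

levels = compress(["fact A", "fact B", "fact C", "fact D"])
root = levels[-1][0]
print(root["sources"])  # the root traces back to every leaf: [0, 1, 2, 3]
```

Cross-tree compression for multiple sources works the same way one level up: the roots of existing trees become the leaves of a new merge pass, so provenance chains are preserved end to end.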
Dual-provider TTS (ElevenLabs and MiniMax) and ElevenLabs Scribe STT. Users can speak to learn and hear responses; audio is generated asynchronously and pushed via SSE.
Exa-powered web search finds relevant sources. Any URL can be ingested directly into the knowledge base as a new fact tree, expanding the topic without losing existing knowledge.
Server-Sent Events (SSE) with eight event types: chat replies, tool call notifications, UI updates, TTS audio, ingestion progress, and more. The frontend stays in sync without polling.
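The wire format follows standard SSE framing, which can be sketched with the stdlib alone; the event names used here are illustrative, not the project's actual eight types.

```python
# Minimal sketch of server-sent-event framing. In FastAPI this string
# would typically be yielded from an async generator behind a
# StreamingResponse with media type "text/event-stream".
import json

def sse_format(event: str, data: dict) -> str:
    # Per the SSE spec, each message is "event:" / "data:" lines
    # terminated by a blank line.
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

frame = sse_format("chat_reply", {"text": "Hello"})
print(frame)
```

Because each frame carries its own `event` name, the frontend can register one listener per event type and route UI updates, audio, and progress notifications independently.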
An LLM classifies each topic's difficulty (1–6) and suggests three next topics at escalating challenge levels (same, +1, +2). New users receive personalized introductory recommendations based on education level.
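The escalation rule can be sketched in one function, assuming difficulty is capped at the top of the 1–6 scale (the capping behaviour is an assumption, not stated in the source):

```python
# Sketch of the "same, +1, +2" recommendation rule described above.

def next_levels(current: int) -> list[int]:
    # Suggest three follow-up difficulties, clamped to the 1-6 scale.
    return [min(current + step, 6) for step in (0, 1, 2)]

print(next_levels(3))  # [3, 4, 5]
print(next_levels(6))  # [6, 6, 6]
```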
The AI can dynamically generate any of these 25 components:
| Layout | Content | Interactive | Media | Data & Viz |
|---|---|---|---|---|
| Row | Text | Button | Image | DataTable |
| Column | Card | TextField | VideoPlayer | MindMap |
| Tabs | Markdown | CheckBox | AudioPlayer | Timeline |
| Modal | Badge | Quiz | Progress | |
| Divider | Avatar | Skeleton | | |
| List | | | | |
All components support dynamic data binding (JSON Pointer resolution), styling tokens, accessibility attributes, and event-driven actions.
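JSON Pointer resolution follows RFC 6901; a minimal stdlib sketch of the lookup is below (the renderer's actual implementation is in TypeScript and may differ in detail):

```python
# RFC 6901 JSON Pointer resolution: walk a nested structure by
# "/"-separated tokens, e.g. "/quiz/questions/0/prompt".

def resolve_pointer(doc, pointer: str):
    if pointer == "":
        return doc  # the empty pointer refers to the whole document
    for token in pointer.lstrip("/").split("/"):
        # Unescape per RFC 6901: "~1" -> "/", then "~0" -> "~".
        token = token.replace("~1", "/").replace("~0", "~")
        doc = doc[int(token)] if isinstance(doc, list) else doc[token]
    return doc

data = {"quiz": {"questions": [{"prompt": "What is SSE?"}]}}
print(resolve_pointer(data, "/quiz/questions/0/prompt"))  # What is SSE?
```

Binding component props to pointers like this lets the UI Agent update a single value in the data model and have every bound component re-render, rather than regenerating the whole tree.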
**Frontend**

| Layer | Technology |
|---|---|
| Framework | React (Vite) + TypeScript |
| A2UI Renderer | Custom component registry with dynamic resolution |
| Styling | TailwindCSS |
| Visualization | xyflow (Node Graphs), Recharts (Data Viz) |
| Animations | Framer Motion |
**Backend**

| Layer | Technology |
|---|---|
| API | FastAPI (Python) with async SQLAlchemy + SQLite |
| Agent Orchestration | LangChain + LangGraph (ReAct agents) |
| LLM Providers | MiniMax (chatcompletion_v2), Google Gemini (rotating pool) |
| Speech | ElevenLabs (TTS + STT), MiniMax TTS (t2a_v2) |
| Web Search | Exa API (search + content extraction) |
| Document Parsing | PyMuPDF |
| Real-Time | Server-Sent Events (SSE) with async queues |
**Prerequisites**

- Docker and Docker Compose
- API keys for: MiniMax, Google Gemini, Exa, ElevenLabs (for full functionality)
**1. Clone the repository**

```shell
git clone https://github.com/MarcusMQF/undefined-ai.git
cd undefined-ai
```

**2. Configure environment**

Copy the example environment file and set your API keys:

```shell
cp backend/.env.example backend/.env
```

Edit `backend/.env` with your credentials. See Configuration for details.

**3. Run with Docker Compose**

```shell
docker compose up --build --watch
```

- Landing: http://localhost:3000 (Next.js marketing site)
- Webapp: http://localhost:5173 (Vite)
- Backend: http://localhost:8001

**4. Stop the application**

```shell
docker compose down -v
```

Troubleshooting: If you see `parent snapshot does not exist` or similar BuildKit errors, try:

```shell
docker builder prune -f
docker compose up --build --watch
```

If it persists, restart Docker Desktop or build once with `--no-cache`:

```shell
docker compose build --no-cache
docker compose up --watch
```

Environment variables are defined in `backend/.env`. Key variables:
| Variable | Description |
|---|---|
| `MINIMAX_API_KEY` | MiniMax API key for chat and TTS |
| `GEMINI_API_KEY` | Google Gemini API key |
| `GEMINI_API_KEY_LIST` | Optional list of keys for rotation |
| `EXA_API_KEY` | Exa API key for web search |
| `ELEVENLABS_API_KEY` | Set to enable ElevenLabs TTS and STT |
| `USE_IN_MEMORY_DB` | Set to `True` for in-memory SQLite (default) |
| `DEBUG` | Enable debug mode |
Refer to `backend/.env.example` for the full template.
The landing/ folder is a separate Next.js marketing site. When users click Login or Start for free, they are redirected to the webapp (frontend/).
Included in Docker: The landing page runs automatically with docker compose up. All three services (backend, frontend, landing) start together.
Local development (without Docker):

```shell
cd landing
npm install --legacy-peer-deps  # or rely on landing/.npmrc, which enables this by default
npm run dev
```

- Landing: http://localhost:3000
- Webapp: http://localhost:5173

Set `NEXT_PUBLIC_WEBAPP_URL` in `landing/.env.local` to point to the webapp (default: `http://localhost:5173`).
Licensed under the Apache License 2.0. See LICENSE for details.
Contributions are welcome. Please open an issue to discuss proposed changes before submitting a pull request.
Built for Hack The East 2026. Developed by team spectrUM.
