Overview
undefined ai replaces traditional, static learning interfaces. Instead of a pre-built dashboard, the UI stays "undefined" until you interact with it through a single floating chat command center.
Using the MiniMax API, the AI ingests user input (text, audio, PDFs, URLs) and dynamically generates the optimal UI format—such as interactive node graphs, tables, slideshows, or video/audio lessons—tailored to the user's specific education level, learning style, and real-time knowledge gaps.
Key Features
- Agent2UI Engine: Reactive architecture where the AI outputs both Content and UI State.
- Multimodal Ingestion: Ingest PDFs, URLs, or speak via mic (STT via MiniMax).
- Knowledge Gap Detection: The AI actively assesses what you *don't* ask to subtly introduce necessary foundational concepts.
- Adaptive Leveling: Summarizes complex material (e.g., Ph.D. theses) for high-schoolers or provides deep dives for grad students.
- Interest-Centric Recommendations: Dynamic forking at the end of modules to continue exploring or branch out.
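The Agent2UI idea above can be sketched in a few lines: the model returns a single JSON payload that carries both the lesson content and the UI state that should render it, and the frontend switches components based on that state. The field names (`ui`, `component`, `content`) and the helper below are illustrative assumptions, not the project's actual schema.

```python
import json

# Components the frontend is assumed to know how to render.
SUPPORTED_COMPONENTS = {"node_graph", "table", "slideshow", "audio_lesson", "text"}

def parse_agent_response(raw: str) -> dict:
    """Validate a model response; fall back to a plain-text bubble on errors."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed JSON from the model: degrade gracefully instead of crashing.
        return {"ui": {"component": "text"}, "content": {"body": raw}}
    component = payload.get("ui", {}).get("component")
    if component not in SUPPORTED_COMPONENTS:
        # Unknown component requested: render as plain text.
        payload.setdefault("ui", {})["component"] = "text"
    return payload

raw = '{"ui": {"component": "table"}, "content": {"rows": [["CPU", "fast"]]}}'
result = parse_agent_response(raw)
```

The key design point is that UI selection is data, not code: shipping one schema-validated payload lets the chat loop stay generic while the AI decides whether a node graph, table, or slideshow fits the material.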
Tech Stack
Frontend & UI
- Framework: React (Vite)
- Styling: TailwindCSS
- Visualization: xyflow (Node Graphs), Recharts (Data Viz)
- Animations: Framer Motion
Backend & AI
- API: FastAPI (Python)
- Agent Orchestration: LangChain & LangGraph
- LLM Core: MiniMax text models (Reasoning & JSON formatting)
- Multimodal: MiniMax STT, ElevenLabs TTS, PyMuPDF (Ingestion)
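As a rough sketch of how the multimodal ingestion listed above might be routed, the backend can classify each incoming payload and hand it to the matching extractor (PDF parser, URL fetcher, speech-to-text, or plain text). This helper and its route names are assumptions for illustration; the real backend would wire these routes to PyMuPDF and MiniMax STT.

```python
from pathlib import Path
from urllib.parse import urlparse

def classify_input(payload: str) -> str:
    """Return which hypothetical ingestion route a raw user payload should take."""
    parsed = urlparse(payload)
    if parsed.scheme in ("http", "https") and parsed.netloc:
        return "url"    # fetched and scraped before summarization
    suffix = Path(payload).suffix.lower()
    if suffix == ".pdf":
        return "pdf"    # handed to a PDF text extractor (e.g. PyMuPDF)
    if suffix in (".wav", ".mp3", ".m4a"):
        return "audio"  # transcribed via speech-to-text first
    return "text"       # passed straight to the LLM
```

Centralizing this decision in one function keeps the chat endpoint simple: every input type converges to plain text before the LLM sees it.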
Architecture
Quick Start
1. Clone the Repository
```bash
git clone https://github.com/MarcusMQF/undefined-ai.git
cd undefined-ai
```
2. Backend Setup
```bash
cd backend
python -m venv venv
source venv/bin/activate  # venv\Scripts\activate on Windows
pip install -r requirements.txt
python main.py
```
3. Frontend Setup
```bash
cd frontend
npm install
npm run dev
```
Built for Hack The East 2026. Developed by team spectrUM.