A comprehensive platform that transforms research papers and documents into structured educational content using specialized AI agents. Built with modern Python and web technologies, featuring cost-optimized mixed LLM usage, real-time SSE updates, and privacy-focused design for optimal performance and developer experience.
- Document Upload: Support for TXT, PDF, DOCX, and Markdown files
- AI-Powered Processing: Multi-agent pipeline for intelligent content generation
- Real-time Progress: Server-Sent Events (SSE) based progress tracking with live updates
- Content Editing: Built-in markdown editor with live preview
- Multiple Durations: Support for week, multi-week, and semester courses
- Content Enhancement: Optional research enhancement and cross-referencing
- Revision System: Request revisions based on user feedback
- Download Options: Export generated content as markdown files
- Modern UI: Beautiful, responsive interface built with Tailwind CSS
The platform consists of two main components:
- FastAPI: High-performance REST API with automatic OpenAPI documentation
- SQLite: Lightweight database for session and content management
- AI Agents: Specialized LLM agents for different processing tasks
- Server-Sent Events: Real-time communication for progress updates
- Security: Rate limiting, input validation, and file type restrictions
- Modern HTML5/CSS3/JavaScript: Clean, responsive web interface
- Tailwind CSS: Utility-first CSS framework for rapid UI development
- Vite: Fast build tool and development server
- EventSource Client: Real-time communication with backend via SSE
- Markdown Rendering: Live preview with syntax highlighting
class-planner/
├── backend/
│ ├── app/
│ │ ├── main.py # FastAPI application and routes
│ │ ├── database.py # Database models and operations
│ │ ├── agents.py # AI agent implementations
│ │ ├── models.py # Pydantic data models
│ │ └── security.py # Security utilities and middleware
│ ├── pyproject.toml # Python dependencies and project config
│ └── .python-version # Python version specification
├── frontend/
│ ├── src/
│ │ └── input.css # Tailwind CSS input file
│ ├── static/
│ │ ├── css/
│ │ │ └── style.css # Generated Tailwind CSS
│ │ └── js/
│ │ └── app.js # Frontend JavaScript application
│ ├── templates/
│ │ └── index.html # Main HTML template
│ ├── package.json # Node.js dependencies
│ ├── tailwind.config.js # Tailwind CSS configuration
│ ├── postcss.config.js # PostCSS configuration
│ └── vite.config.js # Vite configuration
├── .env.example # Environment variables template
└── README.md # This file
Before running the project, ensure you have the following installed:
- Python 3.11+: The backend requires Python 3.11 or higher
- uv: Fast Python package manager (installation guide)
- Node.js 18+: For frontend build tools and dependencies
- npm/yarn: Package manager for Node.js dependencies
git clone <repository-url>
cd class-planner

# Navigate to backend directory
cd backend
# Install Python dependencies using uv
uv sync
# Install development dependencies (optional)
uv sync --dev
# Set up environment variables
cp ../.env.example ../.env
# Edit .env and add your OpenAI API key

# Navigate to frontend directory
cd ../frontend
# Install Node.js dependencies
npm install
# Build Tailwind CSS
npm run build-css

Create a `.env` file in the project root:
# Required: OpenAI API Configuration
OPENAI_API_KEY=your-openai-api-key-here
# Optional: Database Configuration
DATABASE_URL=sqlite:///./data/geneacademy.db
# Optional: Security Configuration
MAX_FILE_SIZE=10485760 # 10MB in bytes
RATE_LIMIT_REQUESTS=10 # Requests per minute per IP
RATE_LIMIT_WINDOW=60 # Time window in seconds
# Optional: Application Configuration
DEBUG=False
CORS_ORIGINS=*

# Development: run the backend with auto-reload
cd backend
uv run uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

# Development: run the frontend dev server
cd frontend
npm run dev

# Production: build Tailwind CSS
cd frontend
npm run build-css

# Production: run the backend
cd backend
uv run uvicorn app.main:app --host 0.0.0.0 --port 8000

# Production: build and preview the frontend
cd frontend
npm run build
npm run preview

- Frontend: http://localhost:3000 (development) or http://localhost:4173 (preview)
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs (Swagger UI)
The platform uses a sophisticated multi-agent system to process documents:
- Purpose: Extract key concepts and create structured summaries
- Output: Learning objectives, key concepts, chapter outlines
- Focus: Preserves technical accuracy while organizing content
- Optimization: Uses cost-effective GPT-3.5-turbo for analysis tasks
- Purpose: Transform summaries into comprehensive ebooks
- Output: Full educational content with examples and exercises
- Focus: Engaging, structured learning materials
- Optimization: Uses GPT-3.5-turbo for analysis, GPT-4.1 for content generation
- Purpose: Validate generated content against source material
- Output: Accuracy score (0-100) and correction suggestions
- Focus: Ensures factual correctness and completeness
- Optimization: Uses cost-effective GPT-3.5-turbo for review tasks
- Purpose: Add supplementary material and context
- Output: Enhanced content with case studies and applications
- Focus: Real-world relevance and deeper understanding
- Optimization: Uses cost-effective GPT-3.5-turbo for enhancement tasks
- Purpose: Handle user feedback and revision requests
- Output: Updated content based on user specifications
- Focus: Maintains consistency while applying changes
- Optimization: Uses cost-effective GPT-3.5-turbo for revision tasks
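The mixed-model strategy running through all five agents reduces to one routing rule: the stronger model is reserved for full content generation, and every other task uses the cost-effective tier. A hypothetical sketch (the task names are illustrative; the project's actual dispatch logic is not shown here):

```python
ANALYSIS_MODEL = "gpt-3.5-turbo"  # summarize, review, enhance, revise
GENERATION_MODEL = "gpt-4.1"      # full ebook content generation only

def pick_model(task: str) -> str:
    """Map an agent task to the model tier it should use.

    Only content generation gets the expensive model; routing all
    analysis-style tasks to the cheaper tier is what yields the
    ~60-70% cost reduction described in this README.
    """
    return GENERATION_MODEL if task == "generate_content" else ANALYSIS_MODEL
```

For example, `pick_model("review")` selects the cheaper tier, while `pick_model("generate_content")` selects the stronger one.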
- `POST /api/session/create` - Create new processing session
- `GET /api/session/{session_id}` - Get session details and status
- `POST /api/upload` - Upload document for processing
- `GET /api/status/{session_id}` - Check processing status
- `GET /api/content/{session_id}` - Retrieve generated content
- `POST /api/revise/{content_id}` - Request content revision
- `POST /api/enhance/{content_id}` - Add research enhancement
- `GET /api/download/{content_id}` - Download content (future feature)
- `GET /api/events/{session_id}` - Server-Sent Events stream for progress updates
- `POST /api/events/{event_id}/acknowledge` - Acknowledge received event
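The acknowledge endpoint implies a simple contract: events stay pending until the client confirms receipt, so a reconnecting client can replay anything it missed. A toy model of that idea (not the project's actual implementation, which persists events in the `processing_events` table):

```python
class EventQueue:
    """Minimal in-memory model of the SSE acknowledgment flow:
    publish() queues an event, unacknowledged() is what the SSE
    stream would send, acknowledge() removes a delivered event."""

    def __init__(self) -> None:
        self._pending: dict[str, dict] = {}  # insertion-ordered
        self._next_id = 0

    def publish(self, payload: dict) -> str:
        event_id = str(self._next_id)
        self._next_id += 1
        self._pending[event_id] = payload
        return event_id

    def unacknowledged(self) -> list[tuple[str, dict]]:
        # Everything not yet acknowledged, oldest first.
        return list(self._pending.items())

    def acknowledge(self, event_id: str) -> bool:
        # Returns False for unknown/already-acknowledged ids.
        return self._pending.pop(event_id, None) is not None
```

Without the acknowledge step, a slow or reconnecting client would either miss events or receive them in one late batch, which is the failure mode the auto-acknowledgment design avoids.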
The platform uses SQLite with the following structure:
- sessions: User sessions and processing status (UUID-based, no IP collection)
- documents: Uploaded documents and metadata
- generated_content: AI-generated educational content with user prompts
- agent_logs: Processing logs and performance metrics
- processing_events: SSE events for real-time updates with auto-acknowledgment
- revision_history: Content revision tracking
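As an illustration of the UUID-based, IP-free session design, here is a minimal sketch of the `sessions` table using the standard library's `sqlite3` module (the column names are assumptions, not the project's actual DDL):

```python
import sqlite3
import uuid

def create_session(conn: sqlite3.Connection) -> str:
    """Create the sessions table if needed and insert a new session.

    Sessions are keyed by a random UUID; no IP address or other
    client identifier is stored, matching the privacy design above.
    """
    conn.execute(
        """CREATE TABLE IF NOT EXISTS sessions (
               id TEXT PRIMARY KEY,                   -- UUID, no IP collected
               status TEXT NOT NULL DEFAULT 'created' -- processing status
           )"""
    )
    session_id = str(uuid.uuid4())
    conn.execute("INSERT INTO sessions (id) VALUES (?)", (session_id,))
    conn.commit()
    return session_id
```

Running it against an in-memory database (`sqlite3.connect(":memory:")`) returns a fresh 36-character UUID whose row starts in the `created` status.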
- File Validation: Restricts file types (TXT, PDF, DOCX, MD) and sizes (max 10MB)
- Input Sanitization: Cleans and validates all user inputs
- Session Management: Secure UUID-based session handling (no IP collection)
- Privacy-Focused: Minimal data collection, no unnecessary user tracking
- Content Integrity: Secure event streaming with auto-acknowledgment
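The file validation rules above (allowed extensions, 10MB cap) can be expressed as a small pure function; this is a sketch of the policy, not the project's actual `security.py`:

```python
ALLOWED_EXTENSIONS = {".txt", ".pdf", ".docx", ".md"}
MAX_FILE_SIZE = 10 * 1024 * 1024  # 10MB, matching MAX_FILE_SIZE=10485760

def validate_upload(filename: str, size: int) -> tuple[bool, str]:
    """Apply the documented upload restrictions.

    Returns (accepted, reason); the reason explains a rejection or
    is "ok" for an accepted file.
    """
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False, f"unsupported file type: {ext or filename}"
    if size > MAX_FILE_SIZE:
        return False, "file exceeds 10MB limit"
    return True, "ok"
```

Checking the extension before the size means a rejected type never reads the body; a real endpoint would also verify content type, not just the filename suffix.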
The project uses Tailwind CSS for styling with the following workflow:
- Edit Styles: Modify `frontend/src/input.css`
- Build CSS: Run `npm run build-css` to generate output
- Development: Use watch mode for automatic rebuilds
The project includes custom Tailwind components:
- `.btn-primary` / `.btn-secondary` - Button styles
- `.card` - Card container component
- `.input-field` - Form input styling
- `.upload-area` - File upload zone
- `.progress-bar` / `.progress-fill` - Progress indicators
Vite is configured for:
- Development server with hot module replacement
- API proxy for backend communication
- Production builds with optimization
- pytest: Testing framework
- black: Code formatting
- isort: Import sorting
- flake8: Linting
- mypy: Type checking
- Vite: Build tool and development server
- Tailwind CSS: Utility-first CSS framework
- PostCSS: CSS processing
- Autoprefixer: CSS vendor prefixes
- Upload Document: Select a research paper or document (TXT, PDF, DOCX, MD)
- Configure Options:
- Choose learning duration (week, multi-week, semester)
- Optionally enable research enhancement
- Process Content: AI agents will process through multiple stages
- Review Results: View generated content with accuracy scoring
- Edit & Refine: Use built-in editor or request revisions
- Download: Export content as markdown files
- Backend won't start
  - Check Python version: `python --version` (should be 3.11+)
  - Verify uv installation: `uv --version`
  - Check environment variables in `.env`
- Frontend build errors
  - Check Node.js version: `node --version` (should be 18+)
  - Clear node_modules: `rm -rf node_modules && npm install`
  - Rebuild Tailwind: `npm run build-css`
- File upload fails
  - Check file type (TXT, PDF, DOCX, MD only)
  - Verify file size (max 10MB)
  - Check backend logs for errors
- SSE connection fails
  - Ensure backend is running on port 8000
  - Check firewall settings
  - Verify API proxy configuration in Vite
  - Check browser console for EventSource errors
- Messages arrive in batches
  - Auto-acknowledgment should prevent this; verify it is enabled
  - Check that SSE polling is at 0.5s intervals
  - Verify no proxy is buffering the SSE stream
- Backend logs: Available in terminal where uvicorn is running
- Database: SQLite file at `backend/data/geneacademy.db`
- Agent logs: Stored in the database `agent_logs` table
- Frontend logs: Available in browser developer console
This project is licensed under the MIT License - see the LICENSE file for details.
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests and linting
- Submit a pull request
# Backend development
cd backend
uv sync --dev
uv run pytest # Run tests
uv run black . # Format code
uv run mypy . # Type checking
# Frontend development
cd frontend
npm install
npm run dev # Start development server
npm run build-css     # Build Tailwind CSS

For questions, issues, or contributions:
- Check the troubleshooting section above
- Review existing GitHub issues
- Create a new issue with detailed information
- Join our community discussions
- PDF generation for downloads
- Multi-language support
- Advanced revision workflows
- User authentication and accounts
- Content templates and presets
- Integration with learning management systems
- Advanced analytics and reporting
- Rate limiting implementation for production
- Batch processing for multiple documents
- Export to various formats (EPUB, Word, etc.)
Built by the GeneAcademy Team
This platform has evolved from a WebSocket-based system to a more efficient and cost-effective architecture:
- v1.0: Initial implementation with WebSocket communication and GPT-4 for all tasks
- v2.0: Current implementation with SSE communication and mixed model usage
- 60-70% cost reduction through strategic model selection
- Improved reliability with SSE auto-acknowledgment
- Enhanced privacy with no IP collection
- Direct AI output without unnecessary preamble
- Real-time updates with 0.5s polling intervals
- ✅ SSE Implementation: Replaced WebSocket with Server-Sent Events for better reliability
- ✅ Cost Optimization: Mixed model usage (GPT-3.5-turbo + GPT-4.1) reduces costs by ~60-70%
- ✅ Privacy Improvements: Removed IP collection, UUID-only session management
- ✅ Real-time Updates: Auto-acknowledgment prevents message batching issues
- ✅ Direct Output: Optimized prompts eliminate unnecessary preamble from AI responses