AbleEdu: AI-Powered Accessible Learning Platform
💡 Inspiration
The inspiration for AbleEdu emerged from a striking statistic: 1 in 4 US adults has a disability that impacts major life activities, yet only 4% disclose it in the workplace. In educational settings, this gap is even more pronounced: students with ADHD, dyslexia, hearing impairments, and other learning differences often struggle in silence, unable to access content delivered in traditional lecture formats.
The turning point came when we realized that while AI has revolutionized content creation, accessibility remains an afterthought rather than a foundational principle. We asked ourselves: What if we could build an AI teaching agent that adapts to each learner's needs from the ground up?
Our mission became clear: create an intelligent tutoring system that transforms any lecture outline into a multi-modal, accessible learning experience, one that works for students with ADHD who need content chunked into manageable segments, for deaf students who rely on visual information, for dyslexic learners who need multiple repetitions, and for international students navigating accent and terminology barriers.
The name "AbleEdu" reflects our core belief: every student is able to learn when given the right tools and environment.
🎯 The Problem Space
Target Users
Primary Users: Students with Accessibility Needs
- ADHD students (inability to maintain attention, hyperactivity, emotional regulation challenges): Struggle with long lectures, easily distracted, difficulty completing assignments on time
- Hearing-impaired students (1 in 8 people in the U.S. experience hearing loss): Miss crucial audio information, 54% don't disclose their condition at work/school
- Dyslexic students: Slow reading comprehension, difficulty with simultaneous listening and note-taking, heavy reliance on repetition
- Students with intellectual disabilities: Difficulty understanding new information, slow cognitive processing, challenges with abstract concepts
Secondary Users: General Student Population
- International students struggling with accents, speed, and jargon
- Students learning in their second or third language
- Anyone who benefits from flexible, self-paced learning
Core Challenges
The research revealed several critical pain points:
- Content Delivery Mismatch: Traditional lectures are linear, audio-heavy, and assume sustained attention, which is incompatible with diverse learning needs
- Lack of Multimodal Options: Most educational content relies primarily on one medium (usually audio + slides)
- Institutional Barriers: Lecturers often lack training in accessibility awareness
- Disclosure Dilemma: Students fear discrimination and negative stereotyping, leading to hidden struggles
- One-Size-Fits-All Approach: Existing solutions don't adapt to individual learning profiles
🏗️ What We Built
AbleEdu is an AI-powered platform that transforms lecture outlines into fully accessible, interactive learning experiences. Here's how it works:
For Teachers: Content Creation Pipeline
Step 1: Natural Language Input Teachers input their knowledge requirements in plain text; no special formatting is needed. For example: "Teach linear regression for data science students, covering assumptions, implementation in Python, and interpretation of results."
Step 2: Intelligent Outline Generation Using Liquid LFM2 (long-context AI model), the system generates:
- Structured learning outline with clear objectives
- Mindmap visualization (using Mermaid diagrams)
- Automatic content chunking into 5-10 minute segments optimized for ADHD learners
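The chunking step above can be sketched as a simple greedy pass over outline sections. This is an illustrative sketch, not the shipped implementation; the `Section`/`Chunk` shapes and the per-section duration field are assumptions.

```typescript
// Sketch of greedy chunking into <=10-minute segments for ADHD learners.
// Field names are illustrative assumptions, not the production schema.
interface Section { title: string; minutes: number; }
interface Chunk { sections: Section[]; minutes: number; }

const MAX_CHUNK_MINUTES = 10; // upper bound of the 5-10 minute target

function chunkOutline(sections: Section[]): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk = { sections: [], minutes: 0 };
  for (const s of sections) {
    // Start a new chunk when adding this section would exceed the cap.
    if (current.minutes + s.minutes > MAX_CHUNK_MINUTES && current.sections.length > 0) {
      chunks.push(current);
      current = { sections: [], minutes: 0 };
    }
    current.sections.push(s);
    current.minutes += s.minutes;
  }
  if (current.sections.length > 0) chunks.push(current);
  return chunks;
}
```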
Step 3: Multi-Modal Content Generation The system automatically creates:
- Visual slides with clear diagrams and charts
- AI voice narration (via ElevenLabs) with adjustable speed
- Real-time transcripts with downloadable notes
- Interactive Q&A with pre-generated potential questions
Step 4: AI Teaching Agent Deployment An adaptive agent is created that can:
- Navigate through slides based on student questions
- Add highlights and annotations dynamically
- Generate supplementary diagrams on-demand
- Adjust pacing based on student interaction patterns
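One way to realize the agent capabilities listed above is to have each agent reply carry structured actions alongside its text, so the UI can navigate, highlight, or render diagrams. The action names and shapes below are illustrative assumptions, not the shipped protocol.

```typescript
// Sketch: agent replies carry UI actions (names here are hypothetical).
type AgentAction =
  | { kind: "goToSlide"; slide: number }
  | { kind: "highlight"; slide: number; text: string }
  | { kind: "showDiagram"; mermaid: string };

interface AgentReply { answer: string; actions: AgentAction[]; }

// Returns the slide the UI should display after handling navigation actions.
function applyActions(reply: AgentReply, currentSlide: number): number {
  for (const action of reply.actions) {
    if (action.kind === "goToSlide") currentSlide = action.slide;
  }
  return currentSlide;
}
```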
For Students: Personalized Learning Experience
Accessibility-First Features:
Intelligent Content Chunking System
- Auto-segmentation into 5-10 minute modules
- Clear learning objectives for each chunk
- Support for custom pacing and breaks
- Visual progress indicators
Multi-Modal Content Delivery
- Visual: Clear slides, mind maps, interactive diagrams
- Auditory: AI voice with speed control (0.5x to 2x)
- Textual: Real-time captions, downloadable notes
- Interactive: Instant Q&A, mini-quizzes for engagement
Personalized Learning Profiles
- ADHD Mode: Enhanced visual cues, frequent break reminders, gamification elements
- Hearing Impairment Mode: Visual-first content, comprehensive captions, sign language support (future)
- Dyslexia Mode: Audio-first delivery, simplified text, repetition-based reinforcement
- Cognitive Support Mode: Simplified concepts, step-by-step breakdown, extra practice exercises
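The four modes above can be thought of as presets over a small set of delivery preferences. The profile shape and values below are a hedged illustration of that idea, not the actual configuration we shipped.

```typescript
// Sketch: learning modes expressed as delivery-preference presets.
// Field names and values are illustrative assumptions.
type Delivery = "visual" | "audio" | "text" | "interactive";

interface LearningProfile {
  primaryDelivery: Delivery;
  breakReminderMinutes?: number; // only set for modes that want break prompts
  simplifyText: boolean;
  repetitions: number; // how many times key points are restated
}

const profiles: Record<string, LearningProfile> = {
  adhd:      { primaryDelivery: "interactive", breakReminderMinutes: 10, simplifyText: false, repetitions: 1 },
  hearing:   { primaryDelivery: "visual",      simplifyText: false, repetitions: 1 },
  dyslexia:  { primaryDelivery: "audio",       simplifyText: true,  repetitions: 3 },
  cognitive: { primaryDelivery: "text",        simplifyText: true,  repetitions: 2 },
};
```

Treating modes as preference presets (rather than diagnoses) also supports the "disclosure is sensitive" lesson below: users pick a preset, or tune individual preferences, without declaring a disability.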
Web Content Extension
- Browser extension for capturing and transforming online content into accessible format
- Integration with existing learning platforms
Technical Architecture
┌─────────────────┐
│     Next.js     │
│    Frontend     │
└────────┬────────┘
         │
         ▼
┌─────────────────────────────────────┐
│        LiquidMetal Raindrop         │
│   (Request Processing & Routing)    │
└────────┬────────────────────────────┘
         │
   ┌─────┴────┬─────────────┬─────────────┐
   ▼          ▼             ▼             ▼
┌────────┐ ┌──────────┐ ┌─────────┐ ┌─────────┐
│Liquid  │ │ElevenLabs│ │Firebase │ │  MySQL  │
│LFM2 AI │ │  Voice   │ │Database │ │Analytics│
└────────┘ └──────────┘ └─────────┘ └─────────┘
Key Technologies:
- AI Model: Liquid LFM2 for long-context understanding
- Voice Agent: ElevenLabs for natural speech synthesis
- Hosting: LiquidMetal Raindrop for orchestration
- Database: Firestore for real-time data, MySQL for analytics
- Frontend: Next.js with accessibility-first components
🛠️ How We Built It
Phase 1: Research & Validation
We started with deep user research, studying accessibility guidelines and interviewing students with learning differences. Key insights:
- WCAG (Web Content Accessibility Guidelines) compliance is non-negotiable
- Integration with existing assistive technologies (JAWS, NVDA) is critical
- Students need control over their learning pace and format
Phase 2: Core Pipeline Development
Built the teacher-facing content generation pipeline:
// Simplified flow
teacherInput → LLM Processing → Content Structuring →
  Multi-Modal Generation → Agent Configuration → Deployment
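The flow above can be sketched as an async pipeline. Every function here is a stub standing in for a real service call (Liquid LFM2 for outline generation, ElevenLabs for narration); the function names and `Lesson` shape are assumptions for illustration.

```typescript
// Sketch of the content pipeline; stubs stand in for real service calls.
interface Lesson { outline: string; slides: string[]; audioUrls: string[]; }

// Stub: the real version calls the Liquid LFM2 model.
async function generateOutline(input: string): Promise<string> {
  return `Outline for: ${input}`;
}

// Stub: the real version chunks the outline into slide decks.
async function structureContent(outline: string): Promise<string[]> {
  return [`${outline} - part 1`, `${outline} - part 2`];
}

// Stub: the real version calls ElevenLabs per slide.
async function synthesizeNarration(slides: string[]): Promise<string[]> {
  return slides.map((_, i) => `audio-${i}.mp3`);
}

async function buildLesson(teacherInput: string): Promise<Lesson> {
  const outline = await generateOutline(teacherInput);
  const slides = await structureContent(outline);
  const audioUrls = await synthesizeNarration(slides);
  return { outline, slides, audioUrls };
}
```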
The biggest challenge was making the AI understand not just what to teach, but how to teach it accessibly. We implemented prompt engineering that explicitly instructs the model to:
- Break complex concepts into digestible chunks
- Provide multiple explanations using different analogies
- Generate visual aids that complement (not replace) text
- Create assessment questions at varying difficulty levels
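The four instructions above can be folded into a reusable system-prompt builder. This is a sketch of the structure only; the wording we actually shipped differed.

```typescript
// Sketch of an accessibility-aware teaching prompt (wording is illustrative).
function buildTeachingPrompt(topic: string, audience: string): string {
  return [
    `You are an accessible teaching assistant. Create a lesson on "${topic}" for ${audience}.`,
    "Requirements:",
    "1. Break complex concepts into chunks of at most 10 minutes each.",
    "2. Explain each key idea at least twice, using different analogies.",
    "3. Propose visual aids that complement the text rather than replace it.",
    "4. Include assessment questions at easy, medium, and hard difficulty levels.",
  ].join("\n");
}
```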
Phase 3: Student Interface
Designed the learning interface with accessibility at the core:
- Keyboard-navigable UI (no mouse required)
- High-contrast mode with customizable color schemes
- Screen reader compatibility tested with JAWS and NVDA
- Flexible layout that adapts to user preferences
Phase 4: AI Agent Development
The teaching agent needed to be more than a chatbot; it needed to be a responsive tutor:
- Context awareness: Tracks which slides the student has viewed and where they're struggling
- Dynamic navigation: Can jump to relevant slides when answering questions
- Visual augmentation: Generates mermaid diagrams or highlights on-demand
- Adaptive pacing: Detects when a student needs more time or is ready to move forward
📚 What We Learned
Technical Learnings
Long-context models are game-changers: Liquid LFM2's ability to process entire lecture outlines in one context window eliminated the need for complex chunking strategies that could break coherence.
Voice synthesis quality matters: Early prototypes used basic TTS, but switching to ElevenLabs dramatically improved engagement. Students reported feeling like they had a "real teacher" explaining concepts.
Accessibility can't be retrofitted: We initially tried adding accessibility features after building core functionality. Mistake. Rebuilding with accessibility-first principles saved time and resulted in better UX for everyone.
Database choice impacts UX: Firestore's real-time capabilities were perfect for session tracking and live updates, while MySQL handled analytics queries efficiently. The hybrid approach gave us the best of both worlds.
User Experience Learnings
Personalization beats perfection: Students preferred a system that adapted to their needs over one that tried to be universally optimal. An ADHD student's ideal learning experience looks nothing like a dyslexic student's ideal experience.
Disclosure is sensitive: We learned to never force users to "declare" their disability. Instead, we offer learning modes as preferences: "Would you like more visual content?" feels better than "Do you have a hearing impairment?"
Control is empowering: Students with learning differences often feel powerless in traditional classrooms. Giving them control over pace, format, and presentation style was transformative.
Multimodal ≠ overwhelming: Initially, we showed all content types simultaneously. User testing revealed this was overwhelming. Sequential presentation with easy toggling worked better.
Challenges We Faced
Challenge 1: Balancing Automation with Quality
Problem: Fully automated content generation sometimes produced generic or inaccurate explanations.
Solution: Implemented a review layer where teachers can preview and edit generated content before student deployment. Added confidence scoring to flag potentially problematic explanations for human review.
Challenge 2: Real-Time Voice Synthesis Latency
Problem: Students asking questions during "live" sessions experienced 3-5 second delays for voice responses, breaking the flow of learning.
Solution:
- Pre-generated common responses and cached them
- Implemented predictive loading based on slide position
- Showed visual "agent is thinking" indicator with typing animations to manage expectations
Mathematically, we optimized the response time function:
$$T_{response} = T_{LLM} + T_{TTS} + T_{network}$$
By parallelizing LLM processing and TTS generation where possible:
$$T_{optimized} = \max(T_{LLM}, T_{TTS}) + T_{network}$$
This reduced average response time from 4.2 s to 1.8 s, a 57% improvement.
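The overlap in the optimized formula can be sketched as sentence-level streaming: rather than waiting for the full LLM answer before starting synthesis, each sentence is sent to TTS as soon as it is produced, so LLM and TTS time run concurrently. Both services are simulated with timers here; this is an illustration of the scheduling idea, not our production code.

```typescript
// Sketch: overlap LLM streaming with per-sentence TTS so latencies run in
// parallel. Timers simulate the two services; names are illustrative.
const delay = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

// Simulated LLM that yields one sentence at a time.
async function* llmSentences(): AsyncGenerator<string> {
  for (const s of ["First point.", "Second point.", "Third point."]) {
    await delay(50); // simulated per-sentence LLM latency
    yield s;
  }
}

// Simulated TTS call for one sentence.
async function tts(sentence: string): Promise<string> {
  await delay(50); // simulated synthesis latency
  return `audio(${sentence})`;
}

async function respond(): Promise<string[]> {
  const pending: Promise<string>[] = [];
  for await (const sentence of llmSentences()) {
    pending.push(tts(sentence)); // start TTS without awaiting: overlaps with LLM
  }
  return Promise.all(pending); // clips stay in sentence order
}
```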
Challenge 3: Making Mermaid Diagrams Accessible
Problem: Auto-generated mermaid diagrams weren't screen-reader friendly.
Solution: Added alt-text generation specifically for diagrams, describing node relationships in natural language. For example:
graph TD
A[Linear Regression] --> B[Assumptions]
A --> C[Implementation]
B --> D[Linearity]
B --> E[Independence]
Screen reader version: "This diagram shows Linear Regression at the top, which branches into two main topics: Assumptions and Implementation. Under Assumptions, there are two subtopics: Linearity and Independence."
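A minimal rule-based version of that diagram-to-alt-text conversion can be sketched by parsing `A[Label] --> B[Label]` edges from a mermaid `graph TD` body and describing each node's children in prose. The production phrasing came from the LLM; this shows only the fallback idea, with simplifications (it ignores subgraphs, edge labels, and other mermaid syntax).

```typescript
// Sketch: derive screen-reader alt text from simple mermaid edge lines.
function mermaidAltText(diagram: string): string {
  const labels = new Map<string, string>();
  const children = new Map<string, string[]>();
  // Matches edges like `A[Linear Regression] --> B[Assumptions]` or `A --> C`.
  const edge = /(\w+)(?:\[([^\]]+)\])?\s*-->\s*(\w+)(?:\[([^\]]+)\])?/g;
  for (const [, from, fromLabel, to, toLabel] of diagram.matchAll(edge)) {
    if (fromLabel) labels.set(from, fromLabel);
    if (toLabel) labels.set(to, toLabel);
    children.set(from, [...(children.get(from) ?? []), to]);
  }
  const name = (id: string) => labels.get(id) ?? id;
  return [...children.entries()]
    .map(([from, tos]) => `${name(from)} branches into: ${tos.map(name).join(", ")}.`)
    .join(" ");
}
```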
Challenge 4: Version Control for Educational Content
Problem: When teachers updated content, students mid-course experienced continuity breaks.
Solution: Implemented version control where students could:
- Continue with the version they started
- Opt-in to updated content
- See a diff of what changed between versions
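The version-pinning logic above can be sketched as follows. The `CourseVersion`/`Enrollment` shapes and the slide-level diff are illustrative assumptions; the real diff was richer than "slides added".

```typescript
// Sketch of per-student version pinning with a naive slide-level diff.
// Shapes and field names are illustrative assumptions.
interface CourseVersion { version: number; content: string[]; }
interface Enrollment { studentId: string; pinnedVersion: number; }

// Students stay on the version they started unless they opt in to the latest.
function contentFor(enrollment: Enrollment, versions: CourseVersion[]): CourseVersion {
  return versions.find(v => v.version === enrollment.pinnedVersion)
      ?? versions[versions.length - 1];
}

// Naive diff: slides present in the new version but not the old one.
function diffVersions(oldV: CourseVersion, newV: CourseVersion): string[] {
  return newV.content.filter(slide => !oldV.content.includes(slide));
}
```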
Challenge 5: Session State Management
Problem: Students with ADHD reported losing progress when they took breaks or got distracted.
Solution: Aggressive session persistence:
- Auto-save every 30 seconds
- Resume exactly where they left off, including:
- Current slide position
- Agent conversation history
- Highlighted text and notes
- Quiz progress
Used the state persistence formula:
$$S_{save} = \{P_{slide},\ H_{conversation},\ A_{annotations},\ Q_{quiz}\}$$
Where each component is timestamped and indexed for instant retrieval.
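The snapshot structure above can be sketched as a small store. An in-memory map stands in for Firestore here, and the field names mirror the formula's components; in production the autosave timer fired every 30 seconds.

```typescript
// Sketch of timestamped session snapshots (in-memory stand-in for Firestore).
interface SessionState {
  slide: number;                      // P_slide
  conversation: string[];             // H_conversation
  annotations: string[];              // A_annotations
  quizAnswers: Record<string, string>; // Q_quiz
  savedAt: number;                    // timestamp for retrieval/ordering
}

class SessionStore {
  private snapshots = new Map<string, SessionState>();

  // Called by the autosave timer (every 30 s in production).
  save(studentId: string, state: Omit<SessionState, "savedAt">): void {
    this.snapshots.set(studentId, { ...state, savedAt: Date.now() });
  }

  // Resume exactly where the student left off.
  resume(studentId: string): SessionState | undefined {
    return this.snapshots.get(studentId);
  }
}
```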
🚀 Future Directions
Near-term (3-6 months)
- Zoom/Teams Integration: Bring the AI agent into live classroom settings
- Knowledge Marketplace: Allow teachers to share and monetize their courses
- Advanced Analytics: Show teachers where students struggle most
- Mobile Apps: Native iOS/Android for learning on-the-go
Long-term Vision
- Sign Language Avatar: AI-generated sign language interpretation
- Peer Learning: Connect students with similar learning profiles
- Adaptive Assessments: Quizzes that adjust difficulty based on performance
- Multi-language Support: Break language barriers in education
💭 Reflections
Building AbleEdu taught us that accessibility isn't a feature; it's a philosophy. When you design for users with the most constraints, you often create better experiences for everyone. Our "ADHD-friendly" chunking system? Regular students love it too. Real-time captions for deaf students? International students and people learning in noisy environments benefit equally.
The most rewarding moment was our first user test with a dyslexic student who told us: "For the first time, I don't feel like I'm fighting the system just to learn."
That's what drives us. Education should be about exploring ideas and growing knowledgeβnot about overcoming barriers that shouldn't exist in the first place.
📖 Resources
- WCAG Guidelines
- ADHD Challenges in Education
- Accessibility in Higher Education
- Dyslexia in Higher Education
Built with ❤️ for learners everywhere
Built With
- anthropic-claude-api
- cloudflare-d1
- cloudflare-workers
- elevenlabs-api
- hono
- kysely
- model-context-protocol
- motion
- openapi
- radix-ui
- raindrop-framework
- react
- react-three-fiber
- shadcn/ui
- swagger
- tailwind-css
- three.js
- typescript
- vite
- vitest
- zod