A frontend-only demo of QuokkaQ, an AI-powered Q&A platform for course discussions. This demo showcases UX flows and UI quality using mocked data and services—no real backend, auth, or data security layers.
Built with the Quokka Design System (QDS) v1.0 — A warm, approachable, and academic-grade design language.
```bash
# Install dependencies
npm install

# Run development server
npm run dev
```

Open http://localhost:3000 to view the demo.
Production URL: https://quokka-demo.netlify.app
Deployed on Netlify with continuous deployment from the main branch.
- Home (`/`) - Browse discussion threads with filtering
- Thread Detail (`/threads/thread-1`) - View Q&A with AI answer, endorsements, and upvotes
- Ask Question (`/ask`) - Get AI answer preview with similar questions and duplicate detection
- Instructor Dashboard (`/instructor`) - View ROI metrics, time-saved analytics, and moderation tools
- Quokka Chat (`/quokka`) - Private AI conversations with course context and persistence
✨ Phase 3 Features (New):
- Thread Endorsements - Professors/TAs can endorse quality answers (green checkmark badge)
- Student Upvoting - Community signal before endorsement (visible upvote counts)
- Duplicate Detection - TF-IDF similarity prevents duplicate questions (80% threshold warning)
- ROI Metrics Dashboard - Time saved calculation (5 min/question), citation coverage %, engagement analytics
- Enhanced AI Prompts - Absolute dates ("Friday, Nov 7"), ambiguity handling, better citation sources
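The duplicate check above could look roughly like the following sketch. It uses plain term-frequency vectors and cosine similarity (IDF weighting omitted for brevity); the demo's actual TF-IDF code in the mock API layer may differ.

```typescript
// Build a term-frequency vector from free text.
function termFreq(text: string): Map<string, number> {
  const tf = new Map<string, number>();
  for (const token of text.toLowerCase().match(/[a-z0-9]+/g) ?? []) {
    tf.set(token, (tf.get(token) ?? 0) + 1);
  }
  return tf;
}

// Cosine similarity between two texts, in [0, 1].
function cosineSimilarity(a: string, b: string): number {
  const ta = termFreq(a);
  const tb = termFreq(b);
  let dot = 0;
  for (const [token, count] of ta) dot += count * (tb.get(token) ?? 0);
  const norm = (tf: Map<string, number>) =>
    Math.sqrt([...tf.values()].reduce((s, c) => s + c * c, 0));
  const denom = norm(ta) * norm(tb);
  return denom === 0 ? 0 : dot / denom;
}

// Warn when a new question is at least 80% similar to an existing thread.
const DUPLICATE_THRESHOLD = 0.8;
function isLikelyDuplicate(question: string, existing: string): boolean {
  return cosineSimilarity(question, existing) >= DUPLICATE_THRESHOLD;
}
```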
By default, QuokkaQ uses template-based AI responses with keyword matching. You can optionally enable real LLM integration with OpenAI or Anthropic for production-quality AI answers powered by course materials.
- Copy the environment template:

  ```bash
  cp .env.local.example .env.local
  ```

- Add your API key (choose one):

  ```bash
  # Option A: OpenAI (recommended for cost/speed)
  NEXT_PUBLIC_OPENAI_API_KEY=sk-proj-your-key-here
  NEXT_PUBLIC_LLM_PROVIDER=openai

  # Option B: Anthropic (alternative provider)
  NEXT_PUBLIC_ANTHROPIC_API_KEY=sk-ant-your-key-here
  NEXT_PUBLIC_LLM_PROVIDER=anthropic
  ```

- Enable LLM mode:

  ```bash
  NEXT_PUBLIC_USE_LLM=true
  ```

- Restart the dev server:

  ```bash
  npm run dev
  ```

When LLM is enabled:
- AI-Powered Responses: Generates answers using GPT-4o-mini or Claude 3 Haiku
- Course Context: Automatically builds context from course materials (lectures, slides, assignments)
- Multi-Course Awareness: Detects relevant courses based on question keywords and content
- Smart Citations: References actual course content with relevance scoring
- Citation Display: Inline `[1]` `[2]` markers with a clickable sources panel
  - Highlighted citation markers scroll to the source
  - Hover tooltips show material titles
  - Keyboard navigable (Tab, Enter, Space)
  - QDS-compliant styling with accent colors
- Confidence Scoring: Confidence levels based on material relevance (0-100)
- Private Conversations: Store AI chat sessions per-user with localStorage
- Conversation → Thread: Convert private conversations to public threads
- Automatic Fallback: Falls back to templates on LLM errors
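A minimal sketch of how a 0-100 confidence score could be derived from material relevance, using the 60% keyword / 40% content weighting described in the Architecture notes below. The weights come from this README; the scoring inputs and function names are illustrative, not the demo's actual implementation.

```typescript
// Combine keyword and content match scores (both in [0, 1]) using the
// 60/40 weighting from the context builder.
function relevanceScore(keywordMatch: number, contentMatch: number): number {
  return 0.6 * keywordMatch + 0.4 * contentMatch;
}

// Map the best material relevance onto the 0-100 confidence scale.
function confidence(materialRelevances: number[]): number {
  if (materialRelevances.length === 0) return 0;
  const best = Math.max(...materialRelevances);
  return Math.round(best * 100);
}
```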
Architecture:
- Context Builder: Ranks course materials by relevance (60% keyword, 40% content matching)
- Auto-Detection: Scores courses by mentions (100 pts), keyword matches (10 pts), content matches (5 pts)
- Token Budget: Proportional allocation across courses (default 2000 tokens)
- Retry Logic: 3 attempts with exponential backoff (1s, 2s, 4s delays)
- Cost Tracking: Token usage monitoring with per-model pricing
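The retry behavior above (3 attempts, exponential backoff) could be sketched like this. `fn` stands in for the real provider call, which is not shown here; the helper name is illustrative.

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Retry an async call up to `attempts` times, doubling the delay after
// each failure (1s, 2s, 4s with the defaults described above).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        await sleep(baseDelayMs * 2 ** attempt);
      }
    }
  }
  // Exhausted retries: the caller falls back to template responses.
  throw lastError;
}
```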
Security Warning: This demo uses client-side API keys (NEXT_PUBLIC_*) for simplicity. Production apps should use server-side API routes to protect keys. See .env.local.example for details.
See .env.local.example for full configuration including:
- Model selection (GPT-4o-mini, Claude Haiku, etc.)
- Temperature and token limits
- Cost monitoring and rate limiting
- Context size and relevance thresholds
This project includes a production-ready agentic workflow with 8 specialized AI agents for systematic development.
| I need to... | Use Agent |
|---|---|
| ✨ Check design system compliance | QDS Compliance Auditor |
| ♿ Validate accessibility | Accessibility Validator |
| 🏗️ Design new component | Component Architect |
| 🔌 Add API endpoint | Mock API Designer |
| ⚡ Optimize data fetching | React Query Strategist |
| 🛡️ Fix TypeScript errors | Type Safety Guardian |
| 📦 Reduce bundle size | Bundle Optimizer |
| 🔄 Prepare for backend swap | Integration Readiness Checker |
- Read: AGENTIC-WORKFLOW-GUIDE.md (15 min comprehensive guide)
- Quick Ref: doccloud/AGENT-QUICK-REFERENCE.md (agent prompts)
- Try it: Follow the first task tutorial in the guide
Benefits:
- ✅ Catch issues before implementation (10x faster)
- ✅ Enforce quality (QDS, WCAG 2.2 AA, TypeScript strict)
- ✅ Context persists across sessions
- ✅ Backend-ready architecture
This is a proof-of-concept to demonstrate:
- Student discussion threads with Q&A
- AI-powered answer generation with citations
- Similar question suggestions (as-you-type)
- Instructor dashboard with metrics
- Post endorsement and flagging
- Framework: Next.js 15 (App Router)
- Language: TypeScript (strict mode)
- Design System: Quokka Design System (QDS) v1.0
- Styling: Tailwind CSS v4
- Components: shadcn/ui + Radix UI
- State: React Query (TanStack Query)
- Mock Data: Static JSON files + in-memory state
- Fonts: Geist Sans & Geist Mono
- Icons: Lucide React
quokka-demo/
├── app/ # Next.js App Router pages
│ ├── page.tsx # Threads list
│ ├── ask/ # Ask question page
│ ├── threads/[id]/ # Thread detail page
│ └── instructor/ # Instructor dashboard
├── components/ # React components
│ ├── nav-header.tsx
│ ├── thread-card.tsx
│ ├── ai-answer-card.tsx
│ └── post-item.tsx
├── lib/
│ ├── api/ # Mock API client & hooks
│ ├── models/ # TypeScript types
│ └── utils/ # Helper functions
├── mocks/ # Seed data (JSON)
│ ├── threads.json
│ ├── users.json
│ ├── kb-docs.json
│ └── ai-responses.json
└── doccloud/ # Agentic workflow context
├── SPECIALIZED-AGENTS.md
├── AGENT-QUICK-REFERENCE.md
└── tasks/
- Visit `/` to see the thread list
- Filter by status (All, Open, Answered)
- Notice endorsed badge (✓ green checkmark) on quality threads
- Click a thread to view details
- See AI answer with citations
- Upvote thread if helpful (Phase 3.1)
- Read community replies
- Visit the `/ask` page
- Type a question title and content
- See similar threads appear (debounced)
- Get AI answer preview (~800ms)
- View citations and confidence level
- Check for duplicates - a warning appears if an 80%+ similar thread exists (Phase 3.2)
- Post to forum if not duplicate
- Visit `/instructor` (logged in as Dr. Sarah Chen)
- View ROI metrics - time saved, citation coverage, endorsed threads (Phase 3.4)
- See top contributors and trending topics
- Check unanswered questions queue
- Review endorsed/flagged posts
- Monitor active student stats
- Open any thread as instructor
- Endorse entire thread (Award icon) - Sets thread to "endorsed" status
- Endorse helpful replies (existing feature)
- Flag inappropriate content (Flag icon)
- Mark replies as "Answer"
- Resolve threads
- View upvote counts to gauge community interest
- Seed Data: Pre-loaded from `/mocks/*.json`
- localStorage Persistence: Data persists across sessions in browser storage
- Reset: Clear localStorage to reseed with latest data
- Version-Based Seeding: Automatically reseeds when mock data version changes
When new mock data is added or updated, you may need to clear your browser's localStorage to see the changes:
Method 1: Browser Console

```js
localStorage.clear()
// Then refresh the page
```

Method 2: DevTools
- Open DevTools (F12 or Cmd+Option+I)
- Go to Application tab
- Select "Local Storage" → `http://localhost:3000`
- Right-click and select "Clear"
- Refresh the page
The app will automatically reseed with the latest data from /mocks/*.json files.
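Version-based seeding can be sketched as follows: store a version marker alongside the seeded data and reseed whenever the bundled mock-data version changes. The storage key and signature here are illustrative, not the demo's actual identifiers.

```typescript
// Hypothetical key for the stored seed version.
const SEED_VERSION_KEY = "quokka:seed-version";

// Reseed only when the stored version differs from the bundled one.
// Returns true if a reseed happened.
function seedIfStale(
  storage: Pick<Storage, "getItem" | "setItem">,
  currentVersion: string,
  seed: () => void,
): boolean {
  if (storage.getItem(SEED_VERSION_KEY) === currentVersion) {
    return false; // already up to date
  }
  seed(); // load fresh data from /mocks/*.json
  storage.setItem(SEED_VERSION_KEY, currentVersion);
  return true;
}
```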
The AI uses keyword matching to return canned responses:
- "binary search" → Implementation guide with Python code
- "list comprehension" → Syntax and examples
- "gil" → Global Interpreter Lock explanation
- Default → Generic "insufficient info" response
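The keyword-matching fallback above can be sketched like this. The response strings are placeholders for the entries in `/mocks/ai-responses.json`; the lookup table and function name are illustrative.

```typescript
// Canned responses keyed by keyword, checked in order.
const CANNED_RESPONSES: Array<[keyword: string, response: string]> = [
  ["binary search", "Implementation guide with Python code..."],
  ["list comprehension", "Syntax and examples..."],
  ["gil", "Global Interpreter Lock explanation..."],
];

// Return the first canned response whose keyword appears in the
// question, or a generic fallback. (Naive substring matching: a real
// implementation would match on token boundaries.)
function templateAnswer(question: string): string {
  const q = question.toLowerCase();
  for (const [keyword, response] of CANNED_RESPONSES) {
    if (q.includes(keyword)) return response;
  }
  return "I don't have enough information to answer this question.";
}
```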
```bash
npm run dev      # Start dev server (Turbopack)
npm run build    # Production build
npm start        # Run production server
npm run lint     # Lint code
npx tsc --noEmit # Type check
npm run seed     # Display seed info
```

Quick Deploy:

```bash
./scripts/deploy.sh # Build, push to GitHub, and deploy to Netlify
```

Manual Deploy:

```bash
npm run build         # Build the project
netlify deploy --prod # Deploy to production
```

GitHub Actions (Optional):
The project includes a GitHub Actions workflow (.github/workflows/deploy.yml) that can automatically deploy on push to main. To enable:
- Go to your repository settings → Secrets and variables → Actions
- Add these secrets:
  - `NETLIFY_AUTH_TOKEN`: Get from https://app.netlify.com/user/applications#personal-access-tokens
  - `NETLIFY_SITE_ID`: 39644280-e882-4bdb-8c03-baeb54de787b
All API calls go through /lib/api/client.ts:
| Endpoint | Method | Description | Delay |
|---|---|---|---|
| `getThreads()` | GET | Fetch all threads | 200-500ms |
| `getThread(id)` | GET | Fetch single thread | 200-500ms |
| `createThread()` | POST | Create new thread (auto-generates AI answer) | 200-500ms |
| `createPost()` | POST | Add reply to thread | 200-500ms |
| `endorsePost(id)` | PUT | Toggle endorsement | 100ms |
| `flagPost(id)` | PUT | Toggle flag | 100ms |
| `resolveThread(id)` | PUT | Mark resolved | 100ms |
| `getSimilarThreads()` | GET | Find similar questions | 300ms |
| Endpoint | Method | Description | Delay |
|---|---|---|---|
| `askQuestion()` | POST | Get AI answer preview | 800ms |
| `generateAIPreview()` | POST | Generate answer without saving | 800ms |
| `endorseAIAnswer()` | PUT | Toggle AI answer endorsement | 100ms |
| Endpoint | Method | Description | Delay |
|---|---|---|---|
| `createConversation()` | POST | Create new private conversation | 100ms |
| `getAIConversations()` | GET | Fetch user's conversations | 200-500ms |
| `getConversationMessages()` | GET | Fetch messages for conversation | 100ms |
| `sendMessage()` | POST | Send user message + generate AI response | 800ms |
| `deleteAIConversation()` | DELETE | Delete conversation (cascade) | 100ms |
| `convertConversationToThread()` | POST | Convert conversation to public thread | 300ms |
| Endpoint | Method | Description | Delay |
|---|---|---|---|
| `getCourse(id)` | GET | Fetch single course | 200-500ms |
| `getCourseMaterials(id)` | GET | Fetch course materials for context | 200-500ms |
| `getInstructorMetrics()` | GET | Dashboard stats | 200-500ms |
To swap in a real backend:
- Replace `/lib/api/client.ts` with HTTP fetch calls
- Update React Query hooks to use real endpoints
- Add authentication & session management
- Wire up Bedrock for real AI responses
- Implement S3 for file uploads
- Add database (PostgreSQL + Prisma/Drizzle)
The component layer and UI flows are designed to work seamlessly once the client is swapped.
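As one example of the swap, `getThreads()` could become a real HTTP call along these lines. The `Thread` shape shown is illustrative; the actual contract is documented in `backend/docs/API_REFERENCE.md`.

```typescript
// Illustrative thread shape; see the API reference for the real contract.
interface Thread {
  id: string;
  title: string;
  status: "open" | "answered" | "endorsed";
}

const BASE_URL = "http://localhost:3001/api/v1";

// Fetch-based replacement for the mock client's getThreads().
// `fetchImpl` is injectable so the function can be tested without a server.
async function getThreads(fetchImpl: typeof fetch = fetch): Promise<Thread[]> {
  const res = await fetchImpl(`${BASE_URL}/threads`);
  if (!res.ok) throw new Error(`getThreads failed: ${res.status}`);
  return res.json();
}
```

The React Query hooks keep the same signatures, so only the client layer changes.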
- README.md ← You are here
- AGENTIC-WORKFLOW-GUIDE.md ⭐ Complete agentic workflow guide
- doccloud/SPECIALIZED-AGENTS.md - Full agent specifications
- doccloud/AGENT-QUICK-REFERENCE.md - Fast agent lookup
- doccloud/TASK-TEMPLATE.md - Template for new tasks
- doccloud/AGENT-TASK-TEMPLATE.md - Template for sub-agents
- QDS.md - Complete Quokka Design System implementation guide
- QDS-QUICK-REFERENCE.md - Quick reference for developers
- CLAUDE.md - AI instruction file (for Claude Code)
- ANALYSIS.md - Technical analysis and architecture decisions
- New developer? Start with this README, then AGENTIC-WORKFLOW-GUIDE.md
- Need an agent? Go to doccloud/AGENT-QUICK-REFERENCE.md
- Design tokens? Check QDS-QUICK-REFERENCE.md
- Coding standards? See CLAUDE.md
This project implements the Quokka Design System (QDS) v1.0, featuring:
- Warm Color Palette: Quokka Brown (primary), Rottnest Olive (secondary), Clear Sky (accent)
- 4pt Spacing Grid: Consistent spacing throughout
- Accessibility First: WCAG 2.2 AA minimum, full keyboard navigation
- Light & Dark Themes: Automatic theme switching
- QDS Tokens: Semantic color tokens, shadows, radius scales
See QDS.md for complete documentation.
This demo does NOT include:
- Real backend/database
- Authentication or LTI integration
- File uploads or S3 storage
- Actual AI/RAG (Bedrock Knowledge Base)
- Security (RLS, rate limits, etc.)
- Production hardening
Complete REST API documentation for all backend endpoints:
- Location: `backend/docs/API_REFERENCE.md`
- Base URL: `http://localhost:3001/api/v1`
- Includes: Authentication, threads, posts, courses, AI answers, instructor metrics
- Status: 12 of 44 planned endpoints implemented
Comprehensive dependency documentation and quarterly audit process:
- Location: `DEPENDENCIES.md`
- Includes: All frontend/backend dependencies with rationale, bundle impact analysis
- Last Audit: 2025-10-20
- Next Audit: 2026-01-20 (Quarterly)
- Metrics: 59 packages, 730 MB node_modules, 0 vulnerabilities
- Design System: `QDS.md` - Complete Quokka Design System documentation
- Development Guide: `CLAUDE.md` - Project guidelines and agentic workflow
- Deployment: Netlify configuration in `netlify.toml`
- Environment: See `.env.local.example` for LLM configuration
```bash
# Type check
npx tsc --noEmit

# Lint
npm run lint

# Build (validates all routes)
npm run build

# Run in production mode
npm run build && npm start
```

This is a demo project for evaluation purposes.