KnowBear is an AI-powered tool that delivers explanations at exactly the right depth for any topic — from ELI5 (explain like I'm 5) to technical deep-dives, meme-style breakdowns, structured reasoning, and more.
It intelligently routes queries across multiple frontier models via a LiteLLM proxy, applies an ensemble judge in Learning mode, caches frequent requests, and offers clean exports — all wrapped in a minimalist, space-themed dark UI.
Live demo: https://knowbear.vercel.app
- Layered explanation system — switch between 5–7 distinct explanation styles
  - ELI5 / ELI10 / ELI15
  - Meme & analogy heavy
  - Structured academic style
  - Technical deep-dive (math, proofs, code)
  - First-principles reasoning
- Mode-aware routing (LiteLLM aliases)
  - Learning: two candidates + judge (`learning-candidate-1`, `learning-candidate-2`, judged by `judge`)
  - Technical: `technical-primary` with fallbacks to `technical-fallback`, then `default-fast`
  - Socratic: `socratic`
  - Default: `default-fast`
- Ultra-fast repeat queries via Upstash Redis REST caching
- Export formats: .txt, .md
- Pinned & trending topics — discoverability without search
- Authentication & Pro tier (optional, gated features)
- Dark-only, space/minimalist UI with smooth Framer Motion animations
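The Learning-mode ensemble described above can be sketched roughly as follows. This is an illustrative sketch, not the actual implementation: the alias names match the routing table below, but the function names, the judge prompt wording, and the "reply A or B" protocol are assumptions.

```python
# Sketch of Learning mode: ask two candidate aliases, let the judge pick.
# ask(alias, prompt) is any callable that sends a prompt to an alias via
# the LiteLLM proxy and returns the response text.
from typing import Callable


def learning_ensemble(question: str, ask: Callable[[str, str], str]) -> str:
    candidates = [
        ask(alias, question)
        for alias in ("learning-candidate-1", "learning-candidate-2")
    ]
    verdict = ask(
        "judge",
        f"Question: {question}\n\n"
        f"Answer A:\n{candidates[0]}\n\n"
        f"Answer B:\n{candidates[1]}\n\n"
        "Reply with exactly 'A' or 'B' for the better explanation.",
    )
    return candidates[0] if verdict.strip().upper().startswith("A") else candidates[1]
```

Passing the model call in as a callable keeps the ensemble logic trivially testable with a stub.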
KnowBear monorepo
├── api/ # FastAPI backend ── serverless-ready
│ ├── main.py # uvicorn entrypoint
│ ├── routers/ # FastAPI APIRouter modules
│ │ ├── query.py
│ │ ├── export.py
│ │ ├── pinned.py
│ │ └── health.py
│ ├── services/
│ │ ├── inference.py # LiteLLM alias routing + streaming
│ │ ├── cache.py # Redis abstraction
│ │ ├── auth.py # Supabase / JWT verification
│ │ └── rate_limit.py # per-user / global limits
│ └── schemas/ # Pydantic models
├── infra/litellm/ # LiteLLM proxy config + deployment assets
├── src/ # React + Vite frontend
│ ├── components/ # atomic → molecule → organism
│ ├── pages/ # route-based pages
│ ├── hooks/ # useQuery, useModelRouter, etc.
│ ├── lib/ # utils, constants, api client
│ └── styles/ # tailwind + global css
├── public/ # static files, favicon, manifest
├── tests/ # pytest (backend) + vitest (frontend) — expanding
├── .github/workflows/ # CI (lint, test, deploy preview)
├── vercel.json # monorepo build config for Vercel
└── README.md
All model calls go through a LiteLLM proxy that exposes stable aliases. The backend references only the aliases; the actual provider models are configured in `infra/litellm/config.yaml`.
| Alias | Provider model | Purpose |
|---|---|---|
| `learning-candidate-1` | `groq/llama-3.1-8b-instant` | Learning ensemble candidate |
| `learning-candidate-2` | `groq/openai/gpt-oss-20b` | Learning ensemble candidate |
| `judge` | `openrouter/z-ai/glm-4.5-air:free` | Judge for Learning mode |
| `technical-primary` | `gemini/gemini-2.5-pro` | Technical mode primary |
| `technical-fallback` | `openrouter/qwen/qwen3-coder:free` | Technical fallback |
| `socratic` | `groq/openai/gpt-oss-120b` | Socratic mode |
| `default-fast` | `groq/llama-3.1-8b-instant` | Default fast responses |
`technical-primary` falls back to `technical-fallback` and then `default-fast` via LiteLLM routing rules.
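In the real setup those fallbacks are applied server-side by LiteLLM from `infra/litellm/config.yaml`; the client-side loop below is only a sketch of what the ordering amounts to. The endpoint path and payload follow the OpenAI-compatible API that the LiteLLM proxy exposes; everything else is an assumption.

```python
# Illustrative client-side view of the Technical-mode fallback chain.
import requests

FALLBACK_CHAIN = ["technical-primary", "technical-fallback", "default-fast"]


def technical_completion(base_url: str, api_key: str, prompt: str) -> str:
    last_error: Exception | None = None
    for alias in FALLBACK_CHAIN:
        try:
            resp = requests.post(
                f"{base_url}/v1/chat/completions",
                headers={"Authorization": f"Bearer {api_key}"},
                json={"model": alias, "messages": [{"role": "user", "content": prompt}]},
                timeout=60,
            )
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]
        except Exception as exc:  # try the next alias in the chain
            last_error = exc
    raise RuntimeError("all aliases failed") from last_error
```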
| Method | Path | Description | Auth? | Rate-limited? |
|---|---|---|---|---|
| GET | `/api/health` | Redis, model providers, auth status | No | No |
| GET | `/api/pinned` | Curated & trending topics | No | Light |
| POST | `/api/query` | Main query endpoint — returns layered output | Optional | Yes |
| POST | `/api/export` | Convert result to file (txt/md) | No | Yes |
| GET | `/api/usage` | Current user quota & usage (Pro users) | Yes | No |
| Layer | Technologies |
|---|---|
| Frontend | React 18, TypeScript, Vite, Tailwind CSS, Framer Motion, Zustand, React Query |
| Backend | FastAPI, Python 3.11+, Pydantic v2, Structlog, Upstash Redis REST |
| AI Inference | LiteLLM proxy + Groq + Gemini 2.5 + OpenRouter |
| Auth | Supabase Auth (JWT + OAuth) |
| Cache / Queue | Redis (Upstash) |
| Deployment | Vercel (frontend + serverless backend), Render / Railway (alternative) |
| Testing | pytest, vitest, Playwright (planned) |
| License | Apache License 2.0 |
The backend uses the repository-root virtualenv at `.venv/`. Use the root scripts so local commands always resolve through `.venv/bin/python`:
`npm run api:install`
`npm run api:dev`
`npm run api:test`

- v0.x — chaotic prototype, many deployment experiments (Vercel, Render, path hell, 500s)
- v1.0 — stable product with auth, payments (in progress), multi-model routing, Redis caching, clean exports
- v2.0 (current focus) — major refactor: better dependency injection, comprehensive test suite, OpenTelemetry tracing, more robust error handling, usage analytics
Prerequisites
- Node.js 18+ (for frontend)
- Python 3.11+ (for backend)
- pnpm (recommended package manager for frontend)
From the repository root:
python3 -m venv .venv
npm run api:install
cp .env.example .env
# Edit .env with your real keys:
# LITELLM_BASE_URL=http://localhost:4000
# LITELLM_VIRTUAL_KEY=... (or LITELLM_MASTER_KEY=...)
# SUPABASE_URL=...
# SUPABASE_ANON_KEY=...
# UPSTASH_REDIS_REST_URL=...
# etc.
`npm run api:dev`

Optional: run a local LiteLLM proxy (if you are not pointing at a hosted proxy):
`litellm --config infra/litellm/config.yaml --port 4000`

The proxy expects provider keys in its environment (for example `GROQ_API_KEY`, `GEMINI_API_KEY`, `OPENROUTER_API_KEY`), plus `LITELLM_MASTER_KEY` if you want to secure the proxy.
Open http://localhost:8000/docs to see the Swagger UI.
In a separate terminal, from repo root:
`pnpm install`
`pnpm dev`

Open http://localhost:5173 (it should proxy `/api` calls to the backend at http://localhost:8000/api — verify in `vite.config.ts` if needed).
Run both frontend and backend with:
`npm run dev:full`

Before running any migrations, back up your Supabase database.
If this repo does not already have a supabase/ folder, initialize it:
`npx supabase init`

Apply the v2 conversation schema migration:
`npx supabase migration up`

If you use the db-push workflow instead:
`npx supabase db push`

Run the v1 history → v2 conversations/messages data migration (dry-run by default):
`.venv/bin/python scripts/migrate_v1_to_v2_history.py`

To write data:
`.venv/bin/python scripts/migrate_v1_to_v2_history.py --dry-run=false`

Required environment variables for the migration script:
- `SUPABASE_URL`
- `SUPABASE_SERVICE_ROLE_KEY` (preferred, for bypassing RLS) or `SUPABASE_ANON_KEY`
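The migration script might resolve those credentials like this: prefer the service-role key (bypasses RLS) and fall back to the anon key. The variable names come from the list above; the resolution logic itself is an assumption about the script, not a quote from it.

```python
# Hedged sketch of Supabase credential resolution for the migration script.
import os


def resolve_supabase_credentials() -> tuple[str, str]:
    url = os.environ["SUPABASE_URL"]  # required; KeyError if missing
    key = os.environ.get("SUPABASE_SERVICE_ROLE_KEY") or os.environ["SUPABASE_ANON_KEY"]
    return url, key
```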
Contributions welcome — especially:
- Better judge/ensemble logic
- Additional explanation styles
- Frontend animations & UX polish
- Test coverage (both FE + BE)
Please open an issue first for larger changes.
This project is licensed under the Apache License 2.0.