CogniGraph (repository folder: Cognigraph) is a small educational demo: you describe a real-world scenario, an LLM classifies brain lobe and neuromodulator tone, a Brian2 spiking neural network (SNN) is simulated, and a web UI visualizes activity on a 3D brain model. The UI is served from / as static HTML plus ES modules under frontend/js/ (no bundler); optional OpenRouter keys are entered only in the in-page API Settings panel (no separate auth route or redirect).
This is not medical software. Outputs are for visualization and learning only, not diagnosis or treatment. The UI includes context for modeled stress-hormone axes (for example HPA / cortisol) as simulation metaphors, not clinical measurements.
Neural Activation Viewer — scenario input, playback controls, and cognitive analysis (example: Doing a heavy deadlift).
Simulation view — colored lobe mesh, spike counters, HPA context, and event log after playback completes.
```mermaid
flowchart LR
    U[User scenario] --> FE[index.html / Analyze]
    FE -- POST /simulate --> API[FastAPI backend/main.py]
    API --> LLM[OpenRouter LLM classify_scenario]
    LLM --> NM[neuromodulation.py validate + resolve params]
    NM --> SNN[Brian2 SNN run_snn]
    SNN --> VFX[build_vfx_profile]
    VFX -- JSON --> FE
    FE --> Three[Three.js glow/bloom render]
```
- Python 3.x required (3.10+ recommended)
- pip
Brian2 may need a C compiler on some platforms for full performance; see the Brian2 installation docs.
```shell
cd Cognigraph
python -m venv .venv
# Windows: .venv\Scripts\activate
# Unix: source .venv/bin/activate
pip install -r requirements.txt
```

- Copy `.env.example` to `.env` in the project root.
- Set `OPENROUTER_API_KEY` from OpenRouter.
- Optionally set `OPENROUTER_DEMO_MODEL` (default: `qwen/qwen3.5-flash-02-23`), used for anonymous / no-browser-key traffic on public demos, with a stronger educator-style system prompt. Alternatives: `openai/gpt-oss-120b`, or `openai/gpt-oss-120b:free` for a no-cost tier (rate limits apply). See models.
- Optionally set `OPENROUTER_MODEL` (default: `x-ai/grok-4.1-fast`), used only when a visitor saves their own key in the UI (`X-OpenRouter-Api-Key`); they pay OpenRouter, not you.
Never commit `.env`; it is listed in `.gitignore`.
From the repository root (with dependencies installed):
```shell
python -m uvicorn backend.main:app --host 0.0.0.0 --port 8000 --reload
```

Then open http://127.0.0.1:8000.
Windows: double-click `start-cognigraph.bat`, or use `Baslat-Cognigraph.bat` for Turkish messages.
Use this command for production-like testing (no `--reload`):

```shell
python -m uvicorn backend.main:app --host 0.0.0.0 --port 8000
```

Run the test suite with `pytest`. Configuration: `pytest.ini`; tests under `tests/`.
| Method | Path | Description |
|---|---|---|
| GET | `/` | Serves the web UI (`frontend/index.html`). |
| GET | `/healthz` | Lightweight health check endpoint for platform probes. |
| POST | `/simulate` | Runs classification + SNN; returns spikes and VFX echo. |
`POST /simulate` JSON body:
| Field | Type | Description |
|---|---|---|
| `prompt` | string | Scenario text (1–1000 characters). |
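The 1–1000 character bound can be enforced with a small validator. The sketch below is hypothetical (the helper name and whitespace stripping are assumptions); the real backend likely enforces this through its request model instead.

```python
MAX_PROMPT_LEN = 1000


def validate_prompt(prompt: str) -> str:
    """Enforce the documented 1-1000 character bound on scenario text.

    Hypothetical helper mirroring the API's payload length validation.
    """
    if not isinstance(prompt, str):
        raise TypeError("prompt must be a string")
    cleaned = prompt.strip()
    if not 1 <= len(cleaned) <= MAX_PROMPT_LEN:
        raise ValueError("prompt must be 1-1000 characters")
    return cleaned
```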
Optional request headers (same origin as the UI; not required for the shared demo):
| Header | When | Description |
|---|---|---|
| `X-OpenRouter-Api-Key` | BYOK | Visitor's OpenRouter key; billing on their account. |
| `X-OpenRouter-Model` | BYOK | OpenRouter model slug (e.g. `openai/gpt-4o`). Ignored without `X-OpenRouter-Api-Key`. Invalid values fall back to `OPENROUTER_MODEL`. |
Response (simplified): `active_lobe`, `dominant_neuromodulator`, `neuromodulator_intensity`, `neuromodulator_rationale`, `explanation`, `duration_ms`, `spikes` (per-lobe spike indices and times), `snn_modulation`, `vfx_profile`.
If `OPENROUTER_API_KEY` is missing, the API returns `503` with a clear message.
Static files are mounted at /static from the frontend/ directory.
- Fly.io (primary `/simulate` backend): https://cognigraph-13906.fly.dev
- Vercel (UI mirror, proxies `/simulate` to Fly): https://cognigraph-tau.vercel.app
How the two hosts relate: Fly runs the full FastAPI + Brian2 stack in a long-lived Docker VM (warm via min_machines_running = 1). Vercel serves the static UI and rewrites /simulate and /healthz to Fly as external proxies (vercel.json), so the Vercel page works end-to-end without needing OPENROUTER_API_KEY on Vercel — the server key lives on Fly. Same-origin rewrites mean the browser still POSTs to cognigraph-tau.vercel.app/simulate, so CORS does not apply and BYOK headers (X-OpenRouter-Api-Key, X-OpenRouter-Model) pass through untouched.
Latency: Analyze is typically 6–10 seconds on a warm Fly machine (LLM + Brian2). The Vercel-proxied path adds a small edge hop on top. The old Vercel-Python-function path (cold-starting Brian2 inside a 10–60s serverless budget) is no longer used for /simulate.
- Each user can provide their own OpenRouter key in the UI (API Settings panel).
- The key is stored in the user's browser local storage and sent as `X-OpenRouter-Api-Key`.
- An optional model id from the same panel is sent as `X-OpenRouter-Model` when a user key is present; if omitted or invalid, the server uses `OPENROUTER_MODEL`. Without a user key, `X-OpenRouter-Model` is ignored (shared traffic always uses `OPENROUTER_DEMO_MODEL`).
- The server-side env key (`OPENROUTER_API_KEY`) is still supported as a fallback for visitors who do not add a key.
- Requests without `X-OpenRouter-Api-Key` use `OPENROUTER_DEMO_MODEL` (default `qwen/qwen3.5-flash-02-23`) plus a neuroscientist-educator system prompt; requests with a user key use `OPENROUTER_MODEL` or the validated `X-OpenRouter-Model` value (billing is on their OpenRouter account).
- For shared/public devices, users should clear their saved key after use.
Vercel is configured as a UI mirror — it serves the static frontend and proxies the dynamic endpoints (/simulate, /healthz) to Fly via external rewrites in vercel.json. Do not set LLM-related env vars on Vercel; the server key lives on Fly and flows through the proxy.
- Install the CLI and log in:
  ```shell
  npm i -g vercel
  vercel login
  ```
- From the project root, deploy:
  ```shell
  vercel
  ```
- No `OPENROUTER_API_KEY` or model env vars are needed on Vercel; Fly owns them.
- Redeploy to production after any `vercel.json` change:
  ```shell
  vercel --prod
  ```
Why the proxy instead of running Python on Vercel? Vercel's Python functions cold-start Brian2 inside a strict function-duration budget (Hobby ~10s; Pro defaults to 15s and reaches 300s only via per-project config). The `functions` key in `vercel.json` only validates globs against the legacy `api/` directory, so `src/index.py` (required for Vercel's FastAPI auto-detection) cannot have its `maxDuration` declared in `vercel.json` at all. Rather than fight both constraints at once, `/simulate` is delegated to Fly (long-lived VM, no function duration cap, no Brian2 cold start after warm-up). The Vercel Python function in `src/index.py` remains for serving `/` and `/static/*` only.
- Install the CLI and authenticate:
  ```shell
  fly auth login
  ```
- Create the app (once) and deploy:
  ```shell
  fly launch --no-deploy
  fly deploy
  ```
- Set the secret key:
  ```shell
  fly secrets set OPENROUTER_API_KEY=your_key_here
  ```
- Optional model overrides:
  ```shell
  fly secrets set OPENROUTER_DEMO_MODEL=qwen/qwen3.5-flash-02-23
  fly secrets set OPENROUTER_MODEL=x-ai/grok-4.1-fast
  ```
  Use `OPENROUTER_DEMO_MODEL` for the shared demo; `OPENROUTER_MODEL` only applies to BYOK requests.
- Redeploy after secret/config changes:
  ```shell
  fly deploy
  ```
This repo includes `fly.toml` and a `Dockerfile` configured for `uvicorn backend.main:app`. `fly.toml` sets `min_machines_running = 1` so one machine stays warm, avoiding a 15–30s Brian2 cold start on the first request after idle, including requests that arrive via the Vercel UI proxy.
Run these checks against the deployed URL (`$BASE_URL`):

```shell
curl -fsS "$BASE_URL/healthz"
curl -fsS "$BASE_URL/" > /dev/null
curl -sS -X POST "$BASE_URL/simulate" \
  -H "Content-Type: application/json" \
  -d "{\"prompt\":\"Solving a complex math problem\"}"
```

Expected behavior:

- `/healthz` returns `{"status":"ok"}`.
- `/` returns HTML.
- `/simulate` returns JSON with `active_lobe`, `dominant_neuromodulator`, `spikes`.
- If the key is missing, `/simulate` returns `503` with a key configuration hint.
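The expected outcomes can be folded into a tiny checker when scripting these smoke tests from Python. The helper below is hypothetical (not part of the repo); it just classifies a status/body pair against the documented contract.

```python
EXPECTED_KEYS = {"active_lobe", "dominant_neuromodulator", "spikes"}


def check_simulate_response(status: int, body: dict) -> str:
    """Classify a /simulate smoke-test result (hypothetical helper).

    Returns "ok" for a 200 carrying the expected fields, "no-key"
    for the documented 503, and "unexpected" otherwise.
    """
    if status == 503:
        return "no-key"
    if status == 200 and EXPECTED_KEYS <= body.keys():
        return "ok"
    return "unexpected"
```

Feed it the status code and parsed JSON from the `curl` or `httpx` call above.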
- Vercel is now a UI mirror that proxies `/simulate` and `/healthz` to Fly via external rewrites in `vercel.json`; Fly is the single `/simulate` backend. No LLM env vars live on Vercel anymore.
- `fly.toml` sets `min_machines_running = 1` (and disables `auto_stop_machines`) so Brian2 stays warm and the first request after idle is not a 15–30s cold start.
- Shared demo traffic uses `OPENROUTER_DEMO_MODEL` (default Qwen 3.5 Flash + educator prompt); BYOK traffic uses `OPENROUTER_MODEL`.
- Security hardening in LLM error handling to avoid exposing sensitive upstream details to API clients.
- Faster request handling by reusing `httpx.AsyncClient` through FastAPI lifespan.
- SNN runtime optimization by removing repeated dictionary creation inside the `run_snn` loop.
- `build_vfx_profile` optimization by moving static profile definitions to module scope.
- Added test coverage for `_strip_markdown_fences`, `_load_dotenv_file`, `_lerp_toward_neutral`, `snn_params_to_dict`, payload length validation, and `GET /` (`serve_index`).
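One of the tested helpers, `_strip_markdown_fences`, addresses a common LLM quirk: models often wrap JSON replies in a Markdown code fence. A sketch of what such a helper typically does is below; the project's actual implementation may handle more edge cases.

```python
def strip_markdown_fences(text: str) -> str:
    """Remove a wrapping Markdown code fence from an LLM reply.

    Sketch only: drops the opening fence line (which may carry a
    language tag such as "json") and the closing fence line, keeping
    everything in between. Unfenced text passes through unchanged.
    """
    stripped = text.strip()
    if stripped.startswith("```") and stripped.endswith("```"):
        lines = stripped.splitlines()
        return "\n".join(lines[1:-1]).strip()
    return stripped
```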
Use this blurb when posting to LinkedIn, X, Reddit, or a blog. Replace YOUR_REPO_URL if you publish the source.
CogniGraph — Describe a scenario; an LLM picks a brain lobe and neuromodulator tone; a Brian2 spiking network runs; a 3D brain visualizes the result. Live demo: https://cognigraph-tau.vercel.app — Not medical software; for learning and demos only.
Optional one-liner for tight character limits:
Educational brain + SNN demo (LLM → Brian2 → 3D). Not clinical. https://cognigraph-tau.vercel.app
After sharing, smoke-test the live URL (`/healthz` and a sample `POST /simulate`) as described under Deployment smoke tests. Without a configured key, `POST /simulate` should still return a clear JSON error about `OPENROUTER_API_KEY` rather than a generic failure; that confirms the route is live.
This project is licensed under the MIT License — see LICENSE.


