writing-coach is a Docker-deployed writing practice app for structured skill building.
It combines:
- a Go API backend
- a Next.js web client
- Ory Kratos for authentication
- SQLite persistence
- deterministic analysis plus LLM-backed prompt and review generation
- assignment history, revision compare, and archive browsing
The core loop is simple:
- choose a practice path
- generate an assignment
- submit a draft
- get feedback tied to a few active skills
- revise or move on
- use that history to shape the next assignment
Key capabilities:
- exactly 3 active skill goals at a time
- new assignments and revision briefs
- reviews tied to the current skill goals
- assignment history across prompt, draft, feedback, revision, and later passes
- per-user AI provider settings
- deterministic fallback behavior when model-backed generation is unavailable
Supported personal AI providers:
- Anthropic
- Gemini
- OpenAI
- Groq
- xAI
The intended deployment is a single Docker Compose stack behind host nginx:
- host nginx terminates TLS
- host nginx reverse-proxies a localhost-bound web port
- the web app, API, Kratos, analyzers, and storage stay on the internal Docker network
This repository contains mixed-license materials. The web UI includes Tailwind Plus-derived code; see docs/licensing.md for details.
```
cp .env.example .env
$EDITOR .env
docker compose up -d --build
```

Then open the public URL you configured in `COACH_PUBLIC_URL`.
Default localhost binding from .env.example:
```
127.0.0.1:11234:3000
```
For a safer production release flow with copied-data staging and rollback scripts, see docs/deployment-staging.md.
Use repo-root commands for backend and frontend independently:
- Backend tests: `./scripts/test-backend.sh`
- Backend coverage snapshot: `./scripts/test-coverage.sh`
- Frontend checks: `./scripts/test-frontend.sh`
- Combined checks: `make test`
Notes:
- `scripts/test-go.sh` remains as a compatibility alias for backend tests.
- Backend tooling intentionally targets `./cmd/...` and `./internal/...` instead of `go test ./...` to avoid scanning frontend dependency trees.
The web container proxies /api and /.ory/kratos/public internally, so host nginx only needs one upstream:
```
server {
    listen 80;
    server_name coach.example.com;

    proxy_intercept_errors on;
    error_page 502 503 504 /maintenance.html;

    location = /maintenance.html {
        root /var/www/writing-coach;
        internal;
    }

    location / {
        proxy_pass http://127.0.0.1:11234;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

For safer outages, copy `maintenance.html` to a host path such as `/var/www/writing-coach/maintenance.html`. With `proxy_intercept_errors on`, nginx will serve that static page whenever the upstream returns 502, 503, or 504.
At minimum, set these in .env before production use:
- `COACH_PUBLIC_URL`
- `KRATOS_COOKIE_SECRET`
- `KRATOS_CIPHER_SECRET`
- `KRATOS_UI_COOKIE_SECRET`
- `KRATOS_UI_CSRF_SECRET`
- `WRITING_COACH_ADMIN_EMAILS`
If users will save their own provider keys, also set:
WRITING_COACH_AI_KEY_SECRET
Generate secrets with:

```
openssl rand -base64 48
```

Keep `WRITING_COACH_AI_KEY_SECRET` stable after deployment. Changing it later will make previously saved user keys unreadable.
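Why rotating the secret breaks saved keys can be illustrated with a small sketch. This assumes an authenticated-encryption scheme keyed from `WRITING_COACH_AI_KEY_SECRET` (AES-GCM with a SHA-256-derived key here); the app's actual scheme may differ, and `seal`/`open` are hypothetical names:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// seal encrypts a user's provider key under a key derived from the
// deployment secret. Illustrative scheme, not the app's real code.
func seal(secret, providerKey string) ([]byte, error) {
	k := sha256.Sum256([]byte(secret))
	block, err := aes.NewCipher(k[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so open can recover it later.
	return gcm.Seal(nonce, nonce, []byte(providerKey), nil), nil
}

// open reverses seal; it fails if the deployment secret has changed,
// which is why previously saved keys become unreadable after rotation.
func open(secret string, sealed []byte) (string, error) {
	k := sha256.Sum256([]byte(secret))
	block, err := aes.NewCipher(k[:])
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	pt, err := gcm.Open(nil, nonce, ct, nil)
	if err != nil {
		return "", err
	}
	return string(pt), nil
}

func main() {
	sealed, _ := seal("old-deploy-secret", "sk-user-key")
	if _, err := open("new-deploy-secret", sealed); err != nil {
		fmt.Println("rotated secret: saved key is unreadable")
	}
}
```

Decryption under the new secret fails GCM authentication rather than returning garbage, so stored keys are cleanly unusable instead of silently corrupted.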
Reviews are intentionally layered.
- Deterministic analyzers inspect the draft for concrete issues.
- The active skill goals decide what matters most on this assignment.
- The review and revision flows use those signals to shape feedback and the next step.
That means the app does not simply send a draft to a model and accept the result uncritically.
The deterministic layer includes:
- built-in heuristic checks
- Vale for style and custom prose rules
- LanguageTool for grammar and usage suggestions
- spaCy plus TextDescriptives for sentence and readability analysis
If a language model is enabled, it works on top of that structure. It helps write clearer assignments, coaching summaries, and revision briefs. It does not replace the deterministic analysis layer.
If no model is available, the app still produces deterministic prompts, reviews, and revision briefs.
The app also tracks a separate writing language on each practice path. That is distinct from the UI locale. English is the only shipped coaching language right now, but the analyzer and model pipeline are wired so contributors can add future language support without redesigning the app.
Contributor reference:
The app supports two operating modes:
With a shared system key (mode 1), set:

```
OPENAI_API_KEY=...
WRITING_COACH_AI_KEY_SECRET=...
```

Behavior:
- the app can use a shared system OpenAI key
- users can optionally save their own provider keys
- existing users can keep working without immediate setup
With personal keys only (mode 2), set:

```
OPENAI_API_KEY=
WRITING_COACH_AI_KEY_SECRET=...
```

Behavior:
- there is no shared system fallback
- users must save their own provider key to run model-backed generation
If WRITING_COACH_AI_KEY_SECRET is missing, personal provider storage is unavailable.
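The decision table above can be sketched as a tiny resolver; the `Mode` struct and `resolveMode` are illustrative names, not the app's types:

```go
package main

import "fmt"

// Mode summarizes which AI paths are usable for a deployment.
// A sketch of the documented behavior, not the real implementation.
type Mode struct {
	SharedFallback  bool // a system OPENAI_API_KEY may be used
	PersonalStorage bool // users can save their own provider keys
}

// resolveMode derives the operating mode from the two env values.
func resolveMode(openaiKey, aiKeySecret string) Mode {
	return Mode{
		SharedFallback:  openaiKey != "",
		PersonalStorage: aiKeySecret != "",
	}
}

func main() {
	fmt.Printf("%+v\n", resolveMode("sk-...", "s3cret")) // mode 1
	fmt.Printf("%+v\n", resolveMode("", "s3cret"))       // mode 2
	fmt.Printf("%+v\n", resolveMode("", ""))             // deterministic only
}
```

With both values empty, neither path is available and the app runs on the deterministic layer alone.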
- `OPENAI_API_KEY`: optional shared OpenAI fallback.
- `OPENAI_BASE_URL`: optional custom base URL for the shared OpenAI-compatible provider.
- `WRITING_COACH_AI_KEY_SECRET`: required for storing users' personal provider keys.
- `WRITING_COACH_PROMPT_MODEL`: default shared prompt model.
- `WRITING_COACH_REVIEW_MODEL`: default shared review model.
- `WRITING_COACH_PROMPT_GENERATION_TIMEOUT`: maximum time to wait on prompt-generation provider calls before falling back.
- `WRITING_COACH_AI_VALIDATE_LIMIT_PER_MINUTE`: per-user cap for provider validation attempts.
- `WRITING_COACH_AI_VALIDATE_GLOBAL_LIMIT_PER_MINUTE`: app-wide cap for provider validation attempts.
- `WRITING_COACH_AI_PROVIDER_EVENT_RETENTION_DAYS`: retention window for admin-visible provider activity events.
- `WRITING_COACH_CALIBRATION_MAINTENANCE_ENABLED`: enables scheduled deterministic calibration maintenance runs.
- `WRITING_COACH_CALIBRATION_MAINTENANCE_INTERVAL`: interval between automatic calibration runs (default `720h`).
- `WRITING_COACH_CALIBRATION_MIN_SAMPLES`: minimum sampled submissions target used for calibration warnings.
- `WRITING_COACH_CALIBRATION_LIMIT_PER_TRACK`: maximum submissions sampled per track during each run.
- `WRITING_COACH_DB_MAX_OPEN_CONNS`: maximum number of SQLite connections in the app pool (default `4`).
- `WRITING_COACH_DB_MAX_IDLE_CONNS`: maximum number of idle SQLite connections retained by the app (default `4`).
- `WRITING_COACH_DB_CONN_MAX_LIFETIME`: maximum lifetime of pooled SQLite connections before recycle (default `30m`).
- `WRITING_COACH_NLP_ANALYZER_URL`: optional internal spaCy/TextDescriptives analyzer service URL.

Other settings: `WRITING_COACH_WRITER_NAME`, `WRITING_COACH_DEFAULT_USER_SLUG`, `WRITING_COACH_DEFAULT_TREE_SLUG`, `WRITING_COACH_API_TOKEN`, `WRITING_COACH_ADMIN_EMAILS`, `COACH_PUBLIC_URL`, `WEB_PORT_BIND`, `KRATOS_SMTP_CONNECTION_URI`.
Production checklist:

- keep the published upstream bound to localhost
- let host nginx terminate TLS
- let host nginx serve a static maintenance page for 502, 503, and 504
- keep Compose volumes persistent
- replace all default Kratos secrets
- decide whether `OPENAI_API_KEY` stays as a transition fallback or is removed
State is stored in Docker volumes:
- `writing-coach-data` for app data and SQLite
- `kratos-data` for Kratos identity storage
- "Validate connection" and "Save provider" use the same validation budget
- the default per-user cap is 6 checks per minute
- the default global cap is 60 checks per minute
- repeated bad-key retries eventually return `429` without hitting the upstream provider
- provider activity events are retained for 30 days by default
- admin users can inspect provider activity in the admin workspace
- calibration maintenance runs can be scheduled and manually triggered from the admin workspace
- calibration runs include confidence/adequacy guardrails, hybrid conflict telemetry, and deterministic objective-eval gating
- objective-eval enforces global, per-track, and per-family policy checks from `internal/review/testdata/objective_eval_corpus.json`
- admin approval is blocked when calibration data/objective gates fail unless explicit override notes are provided (`override:` prefix, minimum detail length)
- admins can mark calibration runs approved/rejected before acting on rubric changes
Every review runs built-in heuristic analysis. In the Compose deployment, the stack also includes:
- Vale bundled into the app image
- LanguageTool as an internal Docker service
- spaCy plus TextDescriptives as an internal Docker service
These findings are stored as review artifacts for later reporting and UI use.
If an external analyzer is unavailable, the app continues with the remaining analyzers.
The initial Vale rules live under `styles/WritingCoach`.
The nlp-analyzer sidecar now emits additional deterministic signals used for rubric and annotation work:
- claim/evidence coverage metrics
- referent/coreference ambiguity counts
- semantic repetition ratio
- topic drift score across sections
Optional CoreNLP augmentation is available for coreference chains.
- enable it by setting `NLP_CORENLP_URL` (for example `http://corenlp:9000`)
- start CoreNLP with `docker compose --profile phase4-nlp up -d corenlp`
If CoreNLP is unavailable, the sidecar falls back to local deterministic heuristics.
Run the sidecar unit suite in the analyzer container:
```
docker compose run --rm -v "$PWD/docker/nlp-analyzer:/app" nlp-analyzer python -m unittest test_app.py -v
```

The browser UI is built around the assignment loop:
- current assignment workspace as the default home
- background review and revision queue states with progress loaders
- full assignment timelines showing prompt, draft, feedback, and revision steps
- archive browsing for older assignments
- an About page that explains the coaching loop in plain language
Core browser-facing and app endpoints include:
- `GET /api/health`
- `GET /api/ready`
- `GET /api/auth/session`
- `GET /api/dashboard`
- `GET /api/exercises`
- `GET /api/exercises/{id}`
- `POST /api/prompts/next`
- `POST /api/prompts/revise`
- `GET /api/submissions`
- `POST /api/submissions`
- `GET /api/submissions/{id}`
- `GET /api/reviews`
- `POST /api/reviews`
- `GET /api/reviews/{id}`
- `GET /api/compare?submission_id=<id>[&against=<id>]`
- `GET /api/ai/settings`
- `PUT /api/ai/settings`
- `DELETE /api/ai/settings`
- `POST /api/ai/settings/validate`
There are also admin, tree, user, onboarding, and enrollment endpoints.
Optional per-request context:
- query params: `user`, `tree`, `user_name`
- headers: `X-Writing-Coach-User`, `X-Writing-Coach-Tree`
Optional API auth:
- set `WRITING_COACH_API_TOKEN`
- send `Authorization: Bearer <token>` or `X-API-Token: <token>`
Useful docs:
- docs/architecture.md
- docs/ai-provider-rollout-plan.md
- docs/licensing.md
- docs/release-process.md
- docs/scoring-backtest.md
- docs/web-foundation-plan.md
- docs/web-app-plan.md
- docs/tree-library.md
The repository currently includes:
- a Go API server
- a Next.js web app
- questionnaire-driven onboarding
- assignment timeline and archive browsing
- per-user AI provider settings
- admin-visible AI provider activity reporting
- structured review annotations and comparison artifacts
- SQLite migrations and bootstrap support
- DB-backed trees and enrollment-scoped progress