An educational, expandable coding‑agent harness built in Python.
It started as a single‑file agent built from notes and experiments, and grew into a full learning playground inspired by Codex, Open Code, Claude Code, and Pi. The goal here is simple: learn how these systems work by building one and then expand it with more bells and whistles over time. It's not the best, but it's mine.
I read a couple of posts on building agents that made me want to own the full harness—not just use a product. I wanted something I could understand end‑to‑end, tweak freely, and grow as I learned. This project is the result: a readable, hackable agent loop with modular pieces I can keep expanding.
This is also a unique project in that it is self‑bootstrapped: I built the initial agent using Codex, then added the tools and loop needed for it to help build itself. Most new features I add now are developed with the agent itself.
- Provider‑agnostic LLM support (OpenAI, Anthropic, and OpenAI‑compatible APIs like OpenRouter and Ollama)
- Streaming text + tool calls + thinking blocks
- Session persistence (JSONL) with forking and resuming
- Context compaction to stay inside token limits
- Skills system (Markdown + YAML frontmatter)
- Prompt templates with slash commands and argument substitution
- Extensions API (events, custom tools, custom commands)
- Interactive TUI (Textual) + headless CLI mode
- Config layering (global, project, env vars)
- Built‑in tool suite: read/write/edit/bash/grep/find/ls
- Input intake & preprocessing
  - Slash commands and templates (`/something`), skills (`$skill`), and input extensions are resolved before anything hits the model.
- Session + context guardrails
  - The user message is persisted to the JSONL session; context is compacted if needed.
- Prompt construction & model stream
  - The system prompt is built from tools + skills + context files; the model response streams back events.
- Tool execution cycle
  - Tool calls are parsed, validated, and executed; tool results are appended back into the conversation.
- Turn finalization
  - Events are emitted, messages are persisted, and token stats are updated.
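As a rough sketch, the whole turn reduces to a small loop. The names below are illustrative, not the project's actual API:

```python
import json
from dataclasses import dataclass, field


@dataclass
class Session:
    """Minimal JSONL-backed session: every message is appended to disk."""
    path: str
    messages: list = field(default_factory=list)

    def append(self, message: dict) -> None:
        self.messages.append(message)
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(message) + "\n")


def run_turn(session, user_input, resolve_input, compact, stream_model, run_tool):
    """One agent turn: intake -> guardrails -> model stream -> tool cycle."""
    # 1. Input intake: expand slash commands, $skills, and input extensions.
    prompt = resolve_input(user_input)
    # 2. Guardrails: persist the message, compact context if near token limits.
    session.append({"role": "user", "content": prompt})
    compact(session)
    # 3-4. Stream the model and execute tool calls until none remain.
    while True:
        reply = stream_model(session.messages)
        session.append(reply)
        if not reply.get("tool_calls"):
            # 5. Finalization (event emission, token stats) would happen here.
            return reply
        for call in reply["tool_calls"]:
            session.append({"role": "tool", "content": run_tool(call)})
```

The inner `while` is what makes it an agent rather than a chatbot: the model keeps getting tool results back until it decides it is done.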
```
core/        Agent loop, sessions, context compaction, prompts
llm/         Provider adapters + streaming events
config/      Runtime config loading
tools/       Built-in tool registry + implementations
skills/      Skill discovery + validation
prompts/     Prompt templates + argument expansion
extensions/  Hooks + custom tools/commands
tui/         Textual UI (interactive mode)
cli.py       Headless CLI entry point
```
Requirements: Python 3.14+ and uv.
```
make deps
make run
```

Headless (single prompt):

```
make run-headless PROMPT="List all Python files"
```

Dev workflow:

```
make test
make lint
make format
```

Release check (lint + tests):

```
make can-release
```

Built-in tools:

- `read` – read file contents with line numbers
- `write` – create/overwrite files
- `edit` – find/replace edits
- `bash` – run shell commands
- `grep` – regex search across files
- `find` – glob-based file discovery
- `ls` – directory listings
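A registry along these lines could back the tool suite above. This is a hypothetical sketch, not the actual `tools/` implementation (which also carries full parameter schemas):

```python
import os

# Hypothetical registry sketch; the real one lives in tools/.
TOOLS: dict[str, dict] = {}


def tool(name: str, description: str):
    """Decorator that registers a plain function as an agent tool."""
    def decorate(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return decorate


@tool("ls", "directory listings")
def ls(path: str = ".") -> str:
    return "\n".join(sorted(os.listdir(path)))


def dispatch(name: str, **kwargs) -> str:
    """Validate the tool name and run it; failures go back to the model as text."""
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    try:
        return TOOLS[name]["fn"](**kwargs)
    except Exception as exc:  # tool errors are surfaced to the model, not raised
        return f"error: {exc}"
```

Returning errors as strings instead of raising lets the model see what went wrong and retry.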
Skills are Markdown files with YAML frontmatter. Invoke them with $skill-name to inject curated instructions into the prompt. Skill discovery respects user, project, and custom directories (see docs/skills.md).
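Concretely, a skill file splits into frontmatter and body. The parser below is an illustrative sketch for flat `key: value` frontmatter only, and the `review` skill is a made-up example; the real format and validation rules are in docs/skills.md:

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split a skill file into frontmatter metadata and a Markdown body.

    Minimal sketch: handles only flat `key: value` pairs between `---` fences.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no frontmatter: the whole file is the body
    meta, i = {}, 1
    while i < len(lines) and lines[i].strip() != "---":
        key, _, value = lines[i].partition(":")
        meta[key.strip()] = value.strip()
        i += 1
    return meta, "\n".join(lines[i + 1:])


example = """---
name: review
description: Code-review checklist
---
Check error handling and tests before approving."""

meta, body = parse_skill(example)
```

Invoking `$review` would then inject the body into the prompt, with the metadata used for discovery and validation.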
Prompt templates are Markdown files invoked with /template-name args... and support $1, $@, ${@:2} style substitution (see docs/prompts.md).
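The substitution could be implemented roughly as below, assuming shell-style semantics (`$1` is the first argument, `$@` is all of them, `${@:2}` is everything from the second on); this is a sketch, not the project's actual expansion code:

```python
import re


def expand_args(template: str, args: list[str]) -> str:
    """Expand shell-style placeholders: $1..$N, $@, and ${@:N} slices."""
    def repl(m: re.Match) -> str:
        token = m.group(1) or m.group(2)  # braced form or bare form
        if token == "@":
            return " ".join(args)
        if token.startswith("@:"):
            start = int(token[2:]) - 1  # 1-based, like the shell
            return " ".join(args[start:])
        n = int(token)
        return args[n - 1] if n <= len(args) else ""  # missing args expand empty

    return re.sub(r"\$\{([^}]+)\}|\$(@|\d+)", repl, template)
```

For example, `expand_args("Fix $1 using ${@:2}", ["bug", "pytest", "mypy"])` yields `"Fix bug using pytest mypy"`.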
Extensions can:
- Block or transform input
- Modify context before the LLM call
- Intercept tool calls/results
- Register new tools and slash commands
See docs/extensions.md for the API shape.
- docs/README.md — index of all docs
- docs/architecture.md — system overview and module responsibilities
- docs/agent-loop.md — detailed step-by-step loop walkthrough
- docs/tools.md — tool schemas, registry, and built-ins
- docs/skills.md — skill format, validation rules, search paths
- docs/prompts.md — template format and argument expansion
- docs/extensions.md — extension API and lifecycle hooks
- docs/llm.md — provider adapters and streaming events
- docs/tui.md — Textual UI behavior and commands
- docs/configuration.md — config files, env vars, context files
- docs/sessions.md — JSONL sessions, forking, compaction
- Skills:
- Prompt templates:
- Extensions:
- Usage:
- Lisp interpreter — builds a tiny Lisp interpreter in TypeScript from a problem spec
MIT

