Zero-dependency single-file AI agent loop for learning.
A ~1400-line TypeScript file that implements a complete AI agent loop with 7 core subsystems. No external dependencies. Works with any OpenAI-compatible API. Helps explain the core principles of OpenClaw.
```
┌────────────────────────────────────────────────────────────────┐
│                             main()                             │
│       ENV → config → runEmbeddedAgent() → printSummary()       │
└───────────────────────────────┬────────────────────────────────┘
                                │
┌───────────────────────────────▼────────────────────────────────┐
│             runEmbeddedAgent (Outer Orchestration)             │
│   Lock → Memory → Tools → Skills → SystemPrompt → AgentLoop    │
│   → Chunker → Persist                                          │
└───────────────────────────────┬────────────────────────────────┘
                                │
┌───────────────────────────────▼────────────────────────────────┐
│                   runAgentLoop (Inner Loop)                    │
│   for each iteration:                                          │
│     Auth → Format messages → streamFromAPI() → handle events   │
│     → if toolUse: execute tools → append results → continue    │
│     → if endTurn: return result                                │
└───────────────────────────────┬────────────────────────────────┘
                                │
┌───────────────────────────────▼────────────────────────────────┐
│                   streamFromAPI (SSE Parser)                   │
│  HTTP fetch → ReadableStream → SSE lines → StreamEvent yields  │
│  Accumulates chunked tool_calls arguments                      │
└────────────────────────────────────────────────────────────────┘
```
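The SSE-parsing stage can be sketched as an async generator. The event shape and parsing details below are illustrative, not the file's exact types; the point is the core idea: parse `data:` lines as raw chunks arrive and accumulate fragmented `tool_calls` arguments across deltas.

```typescript
// Illustrative sketch of the streamFromAPI idea (names are not the
// file's actual API). Raw SSE text chunks come in; discrete events
// come out, with partial tool_calls arguments accumulated.
type StreamEvent =
  | { type: "text"; text: string }
  | { type: "toolUse"; name: string; args: string }
  | { type: "done" };

async function* parseSSE(
  chunks: AsyncIterable<string>
): AsyncGenerator<StreamEvent> {
  let buffer = "";
  let toolName = "";
  let toolArgs = ""; // arguments arrive fragmented across many deltas

  for await (const chunk of chunks) {
    buffer += chunk;
    let nl: number;
    while ((nl = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, nl).trim();
      buffer = buffer.slice(nl + 1);
      if (!line.startsWith("data:")) continue;
      const payload = line.slice(5).trim();
      if (payload === "[DONE]") {
        if (toolName) yield { type: "toolUse", name: toolName, args: toolArgs };
        yield { type: "done" };
        return;
      }
      const delta = JSON.parse(payload).choices?.[0]?.delta ?? {};
      if (delta.content) yield { type: "text", text: delta.content };
      for (const tc of delta.tool_calls ?? []) {
        if (tc.function?.name) toolName = tc.function.name;
        if (tc.function?.arguments) toolArgs += tc.function.arguments;
      }
    }
  }
}
```

Yielding events instead of invoking callbacks is what lets the inner loop consume the stream with a plain `for await` block.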
| # | Subsystem | Purpose | Lines |
|---|---|---|---|
| 1 | SessionManager | JSONL persistence + LRU cache + file locks | ~160-226 |
| 2 | MemoryStore | Long-term + short-term dual-layer memory | ~228-268 |
| 3 | AuthFailover | API key pool with cooldown rotation by failure type | ~270-318 |
| 4 | BlockChunker | 5-level breakpoint intelligent chunking | ~320-370 |
| 5 | Tool System | Unified tool interface (JSON Schema + execute) | ~540-615 |
| 6 | Skill System | Skill loading, filtering, injection (Prompt Engineering) | ~617-836 |
| 7 | Agent Loop | Inner loop (stream + tool execution) + outer orchestration | ~910-1250 |
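The unified tool interface in row 5 can be sketched roughly as follows. Names and types here are illustrative, not the file's exact definitions: a JSON Schema declaration the API sees, plus an `execute` function the loop calls.

```typescript
// Illustrative sketch of a unified tool interface: declaration
// (sent to the API) and execution (run locally) live together.
interface Tool {
  name: string;
  description: string;
  parameters: object; // JSON Schema advertised via the API tools field
  execute(args: Record<string, unknown>): Promise<string>;
}

const weatherTool: Tool = {
  name: "weather",
  description: "Get current weather for a city",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  // Stub result; a real implementation would call a weather API.
  async execute(args) {
    return `Weather in ${args.city}: sunny, 22°C`;
  },
};

// Dispatch by name, as the loop would on finish_reason: "tool_calls".
async function runTool(tools: Tool[], name: string, rawArgs: string) {
  const tool = tools.find((t) => t.name === name);
  if (!tool) return `Unknown tool: ${name}`;
  return tool.execute(JSON.parse(rawArgs));
}
```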
```bash
git clone https://github.com/anthropics/agent-loop-mini.git
cd agent-loop-mini
cp .env.example .env   # Edit .env with your API key
bun run agent-loop-mini.ts
```

Set via environment variables or `.env` file. Example:

```bash
LLM_API_KEY=sk-xxx LLM_BASE_URL=https://api.deepseek.com/v1 LLM_MODEL=deepseek-v4 bun run agent-loop-mini.ts
```

Skills are Markdown-driven workflows injected into the system prompt. Unlike Tools (JSON Schema + code execution), Skills guide the model through multi-step procedures using natural language.
| | Tool | Skill |
|---|---|---|
| Format | JSON Schema | Markdown (SKILL.md) |
| Delivery | API `tools` field | System prompt text |
| Invocation | `finish_reason: tool_calls` | Model reads + follows steps |
| Granularity | Single function | Multi-step workflow |
| Example | `weather({city: "Beijing"})` | "Follow this checklist to review a PR" |
```
skills/
├── weather-report/   # Pure guide — uses tools to query + compare
│   └── SKILL.md
├── batch-weather/    # Guide + bundled script
│   ├── SKILL.md
│   └── scripts/
│       └── format_report.sh
└── code-review/      # Guide with dependency check (requires: git)
    └── SKILL.md
```
Create `skills/your-skill/SKILL.md`:

```markdown
---
name: your-skill
description: "What this skill does"
metadata:
  openclaw:
    emoji: "sparkles"
requires:
  bins: ["git"]   # Optional: required binaries
---
# Your Skill Name

## Steps
1. First step...
2. Second step...
```

Trigger with `/your-skill` in the chat.
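A loader for this frontmatter can stay zero-dependency. The sketch below is a hypothetical simplification: it handles only flat `key: value` fields and a quoted `bins: [...]` list, not full YAML.

```typescript
// Hypothetical sketch of frontmatter extraction for SKILL.md files.
// Handles only the flat fields shown above; a real loader would need
// a proper YAML parser for arbitrary nesting.
interface SkillMeta {
  name: string;
  description: string;
  bins: string[]; // required binaries, if declared
}

function parseSkillMeta(md: string): SkillMeta | null {
  // Frontmatter is the block between the leading pair of --- lines.
  const m = md.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return null;
  const meta: SkillMeta = { name: "", description: "", bins: [] };
  for (const line of m[1].split("\n")) {
    const kv = line.match(/^\s*([\w-]+):\s*(.*)$/);
    if (!kv) continue;
    const [, key, raw] = kv;
    const value = raw.replace(/^["']|["']$/g, ""); // strip surrounding quotes
    if (key === "name") meta.name = value;
    if (key === "description") meta.description = value;
    if (key === "bins") {
      meta.bins = [...raw.matchAll(/"([^"]+)"/g)].map((x) => x[1]);
    }
  }
  return meta;
}
```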
- **Why JSONL instead of JSON for sessions?** Append-only writes. A crash only loses the last message, not the entire file.
- **Why XML for skill injection?** LLMs parse XML tags more reliably than plain text boundaries. Training data includes abundant XML.
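A hypothetical sketch of what XML-tagged injection can look like (the tag names are illustrative, not the file's actual ones): each skill's Markdown body is wrapped in explicit tags so the model sees unambiguous boundaries.

```typescript
// Illustrative skill injection: wrap each skill in XML tags and
// append the block to the base system prompt.
interface Skill {
  name: string;
  description: string;
  body: string; // the SKILL.md content after frontmatter
}

function injectSkills(basePrompt: string, skills: Skill[]): string {
  if (skills.length === 0) return basePrompt;
  const blocks = skills
    .map(
      (s) =>
        `<skill name="${s.name}">\n` +
        `<description>${s.description}</description>\n` +
        `${s.body}\n</skill>`
    )
    .join("\n");
  return `${basePrompt}\n\n<skills>\n${blocks}\n</skills>`;
}
```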
- **Why AsyncGenerator for streaming?** Decouples SSE parsing from event handling. Each consumer gets a clean `for await` interface.
- **Why 5-level breakpoint chunking?** Messaging platforms have character limits. Splitting at code blocks > paragraphs > sentences > clauses > words preserves readability.
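The fallback order can be sketched like this. The separators below are simplified stand-ins for the real chunker's breakpoints, and rejoining with a single space discards the original separators, which a real chunker would preserve:

```typescript
// Illustrative 5-level breakpoint chunking: try each level in order
// and keep the first whose greedy packing fits every chunk.
const BREAKPOINTS: RegExp[] = [
  /\n```/,         // 1. code-block fences
  /\n\n/,          // 2. paragraph breaks
  /(?<=[.!?])\s/,  // 3. sentence ends
  /(?<=[,;:])\s/,  // 4. clause boundaries
  /\s+/,           // 5. single words
];

function chunkText(text: string, limit: number): string[] {
  if (text.length <= limit) return [text];
  for (const sep of BREAKPOINTS) {
    const parts = text.split(sep).filter((p) => p.length > 0);
    if (parts.length < 2) continue; // no usable breakpoint at this level
    // Greedily pack parts into chunks that respect the limit.
    const chunks: string[] = [];
    let current = "";
    for (const part of parts) {
      const candidate = current ? current + " " + part : part;
      if (candidate.length > limit && current) {
        chunks.push(current); // close this chunk, start the next
        current = part;
      } else {
        current = candidate;
      }
    }
    if (current) chunks.push(current);
    // Accept this level only if every chunk fits; otherwise fall
    // through to a finer-grained breakpoint.
    if (chunks.every((c) => c.length <= limit)) return chunks;
  }
  // Last resort: hard slice at the limit.
  const out: string[] = [];
  for (let i = 0; i < text.length; i += limit) out.push(text.slice(i, i + limit));
  return out;
}
```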
- **Why separate internal and API message formats?** Internal format has `timestamp` for ordering and `toolResults[]` for persistence. API format has `tool_call_id` for protocol compliance. Decoupling allows switching LLM providers without touching persistence.
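A sketch of that decoupling, with field names taken from the bullet above (the converter itself is illustrative, not the file's exact code):

```typescript
// Illustrative internal vs. wire-format messages and the one-way
// converter between them.
interface InternalMessage {
  role: "user" | "assistant" | "tool";
  content: string;
  timestamp: number; // for ordering and persistence
  toolResults?: { id: string; output: string }[];
}

interface ApiMessage {
  role: string;
  content: string;
  tool_call_id?: string; // protocol compliance
}

function toApiMessages(internal: InternalMessage[]): ApiMessage[] {
  const out: ApiMessage[] = [];
  // Order by the internal timestamp, which the API format lacks.
  for (const msg of [...internal].sort((a, b) => a.timestamp - b.timestamp)) {
    if (msg.toolResults?.length) {
      // Each persisted tool result becomes its own wire-format message.
      for (const r of msg.toolResults) {
        out.push({ role: "tool", content: r.output, tool_call_id: r.id });
      }
    } else {
      out.push({ role: msg.role, content: msg.content });
    }
  }
  return out;
}
```

Swapping providers then only means replacing this converter; the persisted internal format never changes.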
If you don't have Bun installed:

```bash
npx tsx agent-loop-mini.ts
```

The `exec` and `calculator` tools are included for educational demonstration only. The `exec` tool runs arbitrary shell commands and `calculator` uses `new Function()`. Do not use them in production. In a real system, these would have sandboxing, allowlists, and proper input validation.
MIT