Your personal butler, resident in Telegram. Text, photos, or voice — Jeeves attends to the matter and replies. He never forgets a conversation and picks up new skills as the situation demands.
Inspired by OpenClaw, I wanted to see what was under the hood and build my own from scratch. Jeeves makes deliberate tradeoffs for simplicity: Anthropic-only (no provider abstraction), Telegram-only (no multi-channel), one user at a time (a mutex instead of concurrency). The result is a small TypeScript codebase that's easy to read and easy to modify.
Jeeves can read his own source, understand how he's built, and modify himself. Very meta.
Docker (recommended):
docker run -d --restart unless-stopped \
-e TELEGRAM_BOT_TOKEN=... \
-e TELEGRAM_CHAT_ID=... \
-e ANTHROPIC_API_KEY=... \
-v jeeves-workspace:/app/workspace \
  ghcr.io/eddmann/jeeves:latest

Local:
git clone https://github.com/eddmann/jeeves.git
cd jeeves
make deps # install dependencies
make login # OAuth via Claude Pro/Max
# or: make login/key   # API key login

Set TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID in your environment or workspace .env, then:
make dev

Message your bot on Telegram. It responds. See docs/DOCKER.md for full container setup.
You (Telegram) → grammY → Agent Loop → Claude + Tools → Reply
The agent loop calls Claude with conversation history and tools (bash, read, write, edit, web_fetch, web_search, cron, memory_search). Claude calls tools, results feed back — up to 25 main iterations per message, with timeout retries and graceful fallback if retries are exhausted. A heartbeat system checks in periodically, and a cron scheduler handles timed jobs.
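The shape of that loop can be sketched in a few lines of TypeScript. This is an illustrative sketch, not the actual Jeeves implementation: the `CallModel` and `RunTool` signatures are stand-ins for the real Claude client and tool registry.

```typescript
type ToolCall = { name: string; input: unknown };
type ModelTurn = { text: string; toolCalls: ToolCall[] };

// Hypothetical signatures standing in for the real Claude client and tools.
type CallModel = (history: string[]) => ModelTurn;
type RunTool = (call: ToolCall) => string;

const MAX_ITERATIONS = 25; // the per-message cap described above

function agentLoop(message: string, callModel: CallModel, runTool: RunTool): string {
  const history: string[] = [`user: ${message}`];
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const turn = callModel(history);
    if (turn.toolCalls.length === 0) {
      return turn.text; // no more tool use: this is the reply
    }
    history.push(`assistant: ${turn.text}`);
    for (const call of turn.toolCalls) {
      history.push(`tool(${call.name}): ${runTool(call)}`); // results feed back in
    }
  }
  return 'Iteration limit reached.'; // graceful fallback
}
```

The key property is that tool results are appended to the history before the next model call, so each iteration sees everything that happened so far.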
Long-term memory is backed by a SQLite index with hybrid search (FTS5 keyword + optional OpenAI vector embeddings). MEMORY.md acts as semantic memory (durable facts/preferences), while memory/YYYY-MM-DD.md files capture episodic daily memory. Past conversation transcripts are also treated as episodic memory and indexed for recall. When context approaches the limit, the agent runs an out-of-band flush+compact helper: it asks the model to persist durable memory, then immediately compacts old messages via LLM summarization. Past conversations and memory files are searchable across sessions via the memory_search tool.
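One plausible way to combine the two ranked lists (FTS5 keyword hits and vector hits) is reciprocal rank fusion, which merges rankings without having to normalise the two scoring scales. The actual fusion used by Jeeves may differ; this is a sketch of the general technique.

```typescript
// Hypothetical result shape; the real index is SQLite FTS5 + embeddings.
type Hit = { id: string; score: number };

// Reciprocal rank fusion: each list contributes 1 / (k + rank) per document,
// so items appearing high in both lists float to the top.
function hybridMerge(keyword: Hit[], vector: Hit[], k = 60): Hit[] {
  const fused = new Map<string, number>();
  const addRanks = (hits: Hit[]) =>
    hits.forEach((hit, rank) => {
      fused.set(hit.id, (fused.get(hit.id) ?? 0) + 1 / (k + rank + 1));
    });
  addRanks(keyword);
  addRanks(vector);
  return [...fused.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}
```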
For the full system design, see docs/ARCHITECTURE.md.
On first run, Jeeves creates a workspace/ directory. Convention files (SOUL.md, MEMORY.md, HEARTBEAT.md, etc.) are injected into every system prompt, plus the two most recent memory/YYYY-MM-DD.md episodic files. Edit them to shape how Jeeves behaves; the agent reloads them each run and can update them itself.
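Stitching those files into a system prompt might look like the sketch below. The file names match the convention above, but the assembly order and section formatting are assumptions, not the real Jeeves code.

```typescript
// Builds a system prompt from convention files plus recent episodic memory.
// `files` maps file name to contents; missing files are simply skipped.
function buildSystemPrompt(files: Map<string, string>, episodicDays: string[]): string {
  const sections: string[] = [];
  for (const name of ['SOUL.md', 'MEMORY.md', 'HEARTBEAT.md']) {
    const body = files.get(name);
    if (body) sections.push(`## ${name}\n${body}`);
  }
  for (const day of episodicDays.slice(-2)) {
    sections.push(`## Recent memory\n${day}`); // two most recent daily files
  }
  return sections.join('\n\n');
}
```

Because the prompt is rebuilt each run, any edit to a workspace file (by you or by the agent) takes effect on the next message.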
Skills are SKILL.md files with YAML frontmatter. Ask the agent to create new ones, or drop them in workspace/skills/.
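A skill file might look like this. The exact frontmatter fields are an assumption; check an existing skill in workspace/skills/ for the real schema.

```markdown
---
name: weather
description: Fetch the current forecast when asked about the weather
---

Use the web_fetch tool against a forecast API, then summarise
the result in one or two sentences.
```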
| Variable | Default | Description |
|---|---|---|
| `TELEGRAM_BOT_TOKEN` | — | Telegram bot token (required) |
| `TELEGRAM_CHAT_ID` | — | Chat ID for cron/heartbeat output |
| `ANTHROPIC_API_KEY` | — | API key (alternative to OAuth login) |
| `WORKSPACE_DIR` | `./workspace` | Workspace root |
| `HEARTBEAT_INTERVAL_MINUTES` | `30` | Minutes between heartbeat checks |
| `HEARTBEAT_ACTIVE_START` / `_END` | `08:00` / `23:00` | Active hours window |
| `OPENAI_API_KEY` | — | Semantic memory search + Whisper voice transcription |
| `LOG_LEVEL` | `info` | `debug` / `info` / `warn` / `error` |
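The heartbeat's active-hours window reduces to a simple time comparison. This is an illustrative sketch, not the actual Jeeves code; it assumes the window does not cross midnight, as in the 08:00–23:00 default.

```typescript
// Returns true when `now` (an "HH:MM" string) falls inside the active window.
// "HH:MM" strings compare correctly lexicographically, so no parsing is needed.
function withinActiveHours(now: string, start = '08:00', end = '23:00'): boolean {
  return now >= start && now < end;
}
```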
The recommended way to run Jeeves. Pre-built multi-arch images (linux/amd64, linux/arm64) are published to GHCR on every push to main. Or build locally:
make docker/run # build + run production
make docker/dev   # build + run dev (bind-mounts repo)

See docs/DOCKER.md for volumes, auth, env vars, and logs. For Raspberry Pi deployment with auto-updates, see deploy/rpi/.
make test

Classical school, real objects over mocks. See docs/TESTING.md.
Run make help for the full list. Key targets:
make dev Run the bot
make login OAuth login (Claude Pro/Max)
make login/key API key login
make logout Clear saved credentials
make status Show auth, workspace, skills info
make test Run all tests
make lint Run ESLint
make fmt Format code
make docker/run Build + run production container
make docker/dev Build + run dev container
MIT — Edd Mann, 2026
