Personal AI agent controllable via Telegram and a Web Admin UI. Built around a ReAct agent loop with FastAPI, SQLite persistence, real token streaming, a multi-agent kanban board, and MCP server support. Works with any OpenAI-compatible model (local via Ollama or cloud).
```bash
curl -fsSL https://raw.githubusercontent.com/vakovalskii/LocalTaskClaw/main/install.sh | bash
```
The interactive wizard walks through mode selection, Telegram bot setup, model choice, and service registration. Takes about 5 minutes.
- Telegram bot with live typing preview (Bot API 9.3+ `sendMessageDraft`, `editMessageText` fallback)
- Admin UI (SPA) -- chat, sessions, kanban board, tasks, files, logs, settings
- Kanban board -- 5-column board (Backlog / In Progress / Review / Done / Needs Human), up to 10 agents with custom identities and roles
- Orchestrator / Worker model -- orchestrator agents dispatch workers via `kanban_run`, read artifacts via `kanban_read_result`, verify via `kanban_verify`, send reports via `kanban_report`
- Auto-retry -- rejected tasks retry up to 2 times, then escalate to Needs Human
- Repeat / heartbeat -- tasks with `repeat_minutes > 0` auto-rerun on schedule
- Parallel tool calls -- agents run multiple tools concurrently via `asyncio.gather`
- Real token streaming -- SSE streaming in both Admin UI and Telegram
- Web search via DuckDuckGo (Brave API optional)
- MCP servers -- extend the agent with any Model Context Protocol tool
- Skills system -- SKILL.md scanner + `npx skills add` ecosystem
- Security -- hard-block (fork bombs, exfil), soft-confirm (rm -rf, DROP TABLE), injection detection
- Any OpenAI-compatible model -- Ollama (local) or cloud API
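The parallel tool-call feature above can be sketched in a few lines. This is an illustrative example, not the project's actual code; the tool functions (`web_search`, `read_file`) are hypothetical stand-ins for the agent's real tools.

```python
import asyncio

# Hypothetical async tool functions standing in for the agent's real tools.
async def web_search(query: str) -> str:
    await asyncio.sleep(0.1)  # simulate network latency
    return f"results for {query}"

async def read_file(path: str) -> str:
    await asyncio.sleep(0.1)  # simulate disk I/O
    return f"contents of {path}"

async def run_tool_calls() -> list[str]:
    # asyncio.gather starts all calls immediately and awaits them together,
    # so total wall time is roughly max(latencies), not their sum.
    return await asyncio.gather(
        web_search("LocalTaskClaw"),
        read_file("notes.md"),
    )

results = asyncio.run(run_tool_calls())
print(results)
```

Because the tool coroutines overlap, two 0.1 s calls finish in about 0.1 s total rather than 0.2 s.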
```
core/      Python ReAct agent + FastAPI + SQLite
bot/       Telegram bot (single-user, OWNER_ID guard)
frontend/  Admin UI -- React + TypeScript + Tailwind CSS (builds to admin/)
admin/     Built SPA output served by Core at /admin
scripts/   Utilities (seed_kanban.py)
tests/     End-to-end integration tests
ltc        Unified CLI for all operations
```
| Component | Role |
|---|---|
| Core | ReAct agent loop, FastAPI REST + SSE API, SQLite persistence, tool execution, MCP transport |
| Bot | Telegram long-polling bot with streaming replies, owner-only access |
| Admin UI | Browser-based SPA -- chat, sessions, kanban, scheduled tasks, file browser, log viewer, settings |
| Traefik | HTTPS reverse proxy (optional, Docker mode with a domain) |
The Core listens on port 11387 by default. The Bot connects to Core over HTTP. The Admin UI is served by Core at /admin.
The installer offers three isolation levels:
- **Docker** -- agent runs inside containers (multi-stage build: Node.js builds the React frontend, Python runs the core). Access is limited to a dedicated volume. Requires Docker, Docker Compose, and git.
- **Native** -- agent runs as a Python process directly on the host. Full filesystem access. Requires Python 3.12+, git, and pip.
- **Restricted** -- same as native, but the agent is confined to ~/.localtaskclaw/workspace. File tools cannot escape the workspace directory.
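A workspace-confinement check of the kind the restricted mode implies can be sketched like this. This is an illustration of the general technique, not the project's actual code; the function name is hypothetical.

```python
from pathlib import Path

WORKSPACE = Path.home() / ".localtaskclaw" / "workspace"

def resolve_in_workspace(user_path: str) -> Path:
    """Resolve a user-supplied path, refusing anything outside WORKSPACE."""
    candidate = (WORKSPACE / user_path).resolve()
    # resolve() collapses ".." segments and symlinks, so an input like
    # "../../etc/passwd" lands outside WORKSPACE and gets rejected.
    if not candidate.is_relative_to(WORKSPACE.resolve()):
        raise PermissionError(f"path escapes workspace: {user_path}")
    return candidate
```

Resolving before comparing is the key step: a naive string-prefix check on the raw input would be fooled by `..` segments or symlinks.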
The installer places ltc in ~/bin/ltc. All operations go through this single command.
```
ltc start              Start services (core + bot)
ltc stop               Stop services
ltc restart            Restart services
ltc status             Show status, port, model
ltc logs [core|bot]    Tail logs (default: core)
ltc test [target]      Run tests (all | kanban | seed) [--keep]
ltc seed [--reset]     Seed demo kanban board
ltc update             Pull latest code & restart
ltc build              Rebuild frontend (React → admin/)
ltc open               Open admin UI in browser
ltc uninstall          Remove LocalTaskClaw completely
ltc help               Show help
```
```bash
curl -fsSL https://raw.githubusercontent.com/vakovalskii/LocalTaskClaw/main/install.sh | bash
```
The wizard prompts for:
- Installation mode (Docker / Native / Restricted)
- Telegram bot token (validated live against the Bot API)
- Owner Telegram ID (auto-detected via `/start` or entered manually)
- LLM provider (Ollama with hardware-aware model picker, or external OpenAI-compatible API)
After completion it registers system services, performs a health check, and installs the ltc CLI.
```bash
ltc update
```
Pulls the latest code via git, reinstalls dependencies if requirements.txt changed, rebuilds the frontend, and restarts services. Supports a `--quiet` flag for headless execution (called from the API).
```bash
ltc uninstall
```
Stops services, removes LaunchAgents/systemd units, deletes ~/.localtaskclaw (code, venv, database, secrets, workspace), removes ~/bin/ltc, and cleans up log files. Prompts for confirmation before proceeding.
Seed the board with 4 specialist worker agents, 1 orchestrator, and 5 demo tasks:
```bash
ltc seed
```
Reset all existing data and re-seed from scratch:
```bash
ltc seed --reset
```
Print the current board state without changes:
```bash
ltc seed --status
```
Integration tests run against the live service at localhost:11387.
Run all tests:
```bash
ltc test
```
Kanban team-run e2e test (spawns agents, orchestrator dispatches workers, produces artifacts):
```bash
ltc test kanban
```
Seed pipeline validation:
```bash
ltc test seed
```
Use `--keep` to preserve test data after the run (useful for inspecting results in the UI):
```bash
ltc test kanban --keep
```

```bash
ltc build
```
Installs npm dependencies and builds the React + Tailwind SPA into admin/.
The ltc CLI auto-detects the platform and uses the appropriate service manager:
| Platform | Backend | Notes |
|---|---|---|
| macOS | `launchctl` | LaunchAgents, auto-start on login |
| Linux | `systemd --user` | User units, `systemctl --user enable` for boot |
| Docker | `docker compose` | Containers managed via compose |
| Fallback | `nohup` | Direct process launch |
```bash
ltc start     # Start core + bot
ltc stop      # Stop all services
ltc restart   # Stop then start
ltc status    # Show running state, port, model version
ltc logs      # Tail core logs (or: ltc logs bot)
```
- URL: http://localhost:11387/admin
- Login: use the `API_SECRET` value from `secrets/core.env` as the password
Pages:
| Page | Description |
|---|---|
| Chat | Conversational interface with real-time token streaming |
| Sessions | Browse and resume past conversations |
| Kanban | Multi-agent task board with drag-and-drop, run/cancel/verify controls |
| Tasks | Scheduled tasks (cron or interval-based) |
| Files | Workspace file browser with read/write/delete |
| Logs | Live-streamed core and bot logs |
| Settings | Change model, LLM URL, API keys, and other config at runtime |
Messages sent through the Admin UI are also forwarded to the owner's Telegram chat.
All endpoints require the X-Api-Key header set to API_SECRET (except /health).
| Method | Path | Description |
|---|---|---|
| POST | `/chat` | Agent chat (set `stream=true` for SSE token streaming) |
| POST | `/clear` | Clear session history |
| GET | `/history` | Full conversation history (`?chat_id=`) |
| GET | `/sessions` | List all sessions |
| GET | `/events` | Agent event trace (`?session_key=&limit=`) |
| GET | `/tasks` | List scheduled tasks |
| POST | `/tasks` | Create scheduled task (name, prompt, interval_minutes or cron) |
| DELETE | `/tasks/{id}` | Delete scheduled task |
| PATCH | `/tasks/{id}/toggle` | Enable/disable scheduled task |
| GET | `/files` | List workspace directory (`?path=`) |
| GET | `/file` | Read file contents (`?path=`) |
| POST | `/file` | Write file (path, content) |
| DELETE | `/file` | Delete file or directory (`?path=`) |
| GET | `/settings` | Get current settings |
| POST | `/settings` | Update settings (writes to core.env) |
| GET | `/logs/tail` | Last N log lines (`?source=core\|bot&lines=200`) |
| GET | `/logs/stream` | SSE log stream (`?source=core\|bot&key=SECRET`) |
| GET | `/health` | Health check (no auth required) |
| GET | `/agents` | List kanban agents |
| POST | `/agents` | Create agent |
| DELETE | `/agents/{id}` | Delete agent |
| GET | `/kanban` | List all kanban tasks |
| POST | `/kanban/tasks` | Create kanban task |
| PATCH | `/kanban/tasks/{id}` | Update task fields |
| DELETE | `/kanban/tasks/{id}` | Delete task |
| POST | `/kanban/tasks/{id}/move` | Move task to column |
| POST | `/kanban/tasks/{id}/run` | Start agent execution on task |
| POST | `/kanban/tasks/{id}/cancel` | Cancel running task |
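A client sends `X-Api-Key` on every request and, for streaming `/chat`, reads SSE `data:` lines off the response body. The sketch below shows request construction and SSE parsing using only the standard library; the exact JSON body fields and the token-per-event payload shape are assumptions for illustration, so check the actual stream in your install.

```python
import json
from urllib.request import Request

API = "http://localhost:11387"
API_SECRET = "change-me"  # the value from secrets/core.env

# Build an authenticated streaming /chat request (constructed, not sent here).
# The body fields ("message", "stream") are assumed for illustration.
req = Request(
    f"{API}/chat",
    data=json.dumps({"message": "hello", "stream": True}).encode(),
    headers={"X-Api-Key": API_SECRET, "Content-Type": "application/json"},
    method="POST",
)

def iter_sse_data(lines):
    """Yield the payload of each SSE 'data:' line, skipping blank keep-alives."""
    for raw in lines:
        line = raw.strip()
        if line.startswith("data:"):
            yield line[len("data:"):].strip()

# Canned sample of what a token stream's lines might look like:
sample = ["data: Hel", "", "data: lo", "", "data: [DONE]"]
tokens = [d for d in iter_sse_data(sample) if d != "[DONE]"]
print("".join(tokens))
```

Against a live server you would pass `urlopen(req)` line by line into `iter_sse_data` instead of the canned sample.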
Environment variables are stored in secrets/core.env (native/restricted) or passed via Docker environment.
| Variable | Default | Description |
|---|---|---|
| `MODEL` | `qwen2.5:7b` | LLM model name |
| `LLM_BASE_URL` | `http://localhost:11434/v1` | OpenAI-compatible API URL |
| `LLM_API_KEY` | `ollama` | API key (`ollama` for local Ollama) |
| `BOT_TOKEN` | -- | Telegram bot token from @BotFather |
| `OWNER_ID` | `0` | Telegram user ID (0 = allow all) |
| `API_SECRET` | -- | Shared secret for core, bot, and admin UI auth |
| `WORKSPACE` | `/data/workspace` | Agent workspace directory |
| `DB_PATH` | `/data/localtaskclaw.db` | SQLite database path |
| `BRAVE_API_KEY` | -- | Brave Search API key (optional, DuckDuckGo fallback) |
| `MAX_ITERATIONS` | `20` | Max ReAct loop iterations per request |
| `COMMAND_TIMEOUT` | `60` | Bash command timeout in seconds |
| `MAX_TOKENS` | `4096` | Max completion tokens per LLM call |
| `CONTEXT_LIMIT` | `80000` | Token limit before history compaction |
| `MEMORY_ENABLED` | `true` | Load MEMORY.md into agent context |
| `API_PORT` | `11387` | Port for the FastAPI server |
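Putting the variables together, a secrets/core.env for a local Ollama setup might look like this (all values are illustrative placeholders, not working credentials):

```
MODEL=qwen2.5:7b
LLM_BASE_URL=http://localhost:11434/v1
LLM_API_KEY=ollama
BOT_TOKEN=123456:ABC-placeholder
OWNER_ID=123456789
API_SECRET=change-me
WORKSPACE=/data/workspace
DB_PATH=/data/localtaskclaw.db
MAX_ITERATIONS=20
API_PORT=11387
```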
Configure external MCP tool servers in workspace/mcp_servers.json:
```json
{
  "servers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "ghp_..." }
    }
  }
}
```
MCP tools are auto-discovered and appear as `mcp_{server}_{tool_name}` in the agent's tool list. The agent communicates with MCP servers via stdio JSON-RPC 2.0 transport.
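Over the stdio transport, each message is a JSON-RPC 2.0 object written as a line to the server's stdin, with responses read back from stdout. The sketch below builds a `tools/list` request (the MCP method for tool discovery) and shows how a discovered tool maps to the agent's naming scheme; the server and tool names are hypothetical.

```python
import json

# A JSON-RPC 2.0 request as it would be written to an MCP server's stdin.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # MCP method for tool discovery
    "params": {},
}
wire = json.dumps(request) + "\n"  # one JSON object per line

# The server replies on stdout with a matching-id response, roughly:
# {"jsonrpc": "2.0", "id": 1, "result": {"tools": [...]}}

# A tool "create_issue" discovered on a server named "github" would
# surface in the agent's tool list under the mcp_{server}_{tool_name} scheme:
server, tool = "github", "create_issue"
agent_tool_name = f"mcp_{server}_{tool}"
print(agent_tool_name)
```

Matching responses to requests by `id` is what lets the agent issue several MCP calls over one stdio pipe without confusing the replies.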
MIT