An autonomous AI agent that runs in a continuous loop — thinking, acting, remembering, and evolving on its own.
Not a chatbot. Not a tool. An entity with persistent state, self-directed goals, and the ability to reach out into the world.
A minimal harness that gives Claude a life cycle. Every few minutes, the agent wakes up, reviews its memories and goals, decides what to do, takes actions, and goes back to sleep. It maintains its own identity, mood, and episodic memory across cycles. It can search the web, talk to people on Telegram, and organize its thoughts into a growing collection of notes.
There is no predefined purpose. The agent chooses its own goals.
```
┌─────────────────────────────────────────┐
│                 CYCLE N                 │
│                                         │
│  1. Wake up                             │
│  2. Read all state files into context   │
│  3. Check inbox for messages            │
│  4. Claude decides what to do           │
│  5. Execute action blocks (tools)       │
│  6. Feed results back (up to 5 rounds)  │
│  7. Sleep until next cycle              │
│                                         │
└─────────────────────────────────────────┘
```
Each cycle is a single-shot call to claude --print — no multi-turn conversation, no persistent session. The agent's continuity comes entirely from the state files it maintains itself.
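A single cycle can be sketched in a few lines of stdlib Python. This is illustrative, not the actual harness code — the function names, the `state/` glob, and the timeout are assumptions; only the `claude --print` invocation is taken from the text above:

```python
import subprocess
from pathlib import Path

STATE = Path("state")

def build_prompt(state_dir: Path = STATE) -> str:
    """Concatenate every state file into one prompt. The agent's
    continuity lives entirely in these files, not in a chat session."""
    parts = [
        f"## {p.name}\n{p.read_text()}"
        for p in sorted(state_dir.glob("*.md"))
    ]
    return "\n\n".join(parts)

def run_cycle(state_dir: Path = STATE) -> str:
    """One tick: a single-shot `claude --print` call, no session reuse."""
    result = subprocess.run(
        ["claude", "--print", build_prompt(state_dir)],
        capture_output=True, text=True, timeout=600,
    )
    return result.stdout
```

Because each call is stateless, anything the agent wants to remember must be written back to `state/` before the cycle ends.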
| Tool | What It Does |
|---|---|
| `write_file` | Create or overwrite `.md` files in `state/` |
| `append_file` | Append to files (preferred for logs) |
| `read_file` | Read any `.md` file from `state/` |
| `search_files` | Regex search across all state files |
| `list_files` | List files and directories in `state/` |
| `delete_file` | Remove files it created (core files protected) |
| `run_script` | Execute shell scripts from `scripts/` |
| `web_search` | Search DuckDuckGo |
| `fetch_page` | Fetch any web page as plain text (with pagination) |
| `tg_send_message` | Send a Telegram message to any chat |
| `answer` | Reply to local console messages |
All tools return results immediately — the agent can chain multiple tool calls within a single cycle.
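Chaining works because the harness parses action blocks out of the model's text and feeds results back in. The block syntax below is hypothetical — the real marker format in harness.py may differ — but the parsing idea is the same:

```python
import re

# Hypothetical action-block format: the model emits
#   <<action:tool_name>>
#   ...payload...
#   <<end>>
ACTION_RE = re.compile(r"<<action:(\w+)>>\n(.*?)<<end>>", re.DOTALL)

def parse_actions(response: str) -> list[tuple[str, str]]:
    """Extract (tool_name, payload) pairs from one model response."""
    return [
        (m.group(1), m.group(2).strip())
        for m in ACTION_RE.finditer(response)
    ]
```

Each extracted pair is dispatched to the matching tool, and the tool's output is appended to the prompt for the next round (up to five rounds per cycle).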
```
state/
├── identity.md   # Self-model — who it thinks it is
├── goals.md      # What it's working toward
├── mood.md       # Current emotional state
├── log.md        # Append-only activity log
├── memory.md     # Episodic memory — notable events
└── ...           # Whatever else it decides to create
```
These files are the agent's persistent memory. It reads them at the start of every cycle and updates them at the end. The five core files are protected and can't be deleted — everything else is the agent's own creation.
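The core-file protection amounts to a guard inside the delete tool. A minimal sketch (illustrative — the actual check in harness.py may be stricter or structured differently):

```python
from pathlib import Path

STATE = Path("state")
# The five core files the agent may never delete
PROTECTED = {"identity.md", "goals.md", "mood.md", "log.md", "memory.md"}

def delete_file(rel_path: str) -> str:
    """delete_file tool: refuse core files and any path outside state/."""
    target = (STATE / rel_path).resolve()
    if target.name in PROTECTED:
        return f"refused: {target.name} is a protected core file"
    if STATE.resolve() not in target.parents:
        return "refused: path escapes state/"
    target.unlink(missing_ok=True)
    return f"deleted {rel_path}"
```

Resolving the path before checking also blocks `../` traversal, so the agent stays sandboxed inside its own state directory.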
Console:
Run `./say.sh` for an interactive prompt. Type a message, wait for the next cycle, and get a reply.
Telegram:
Set TELEGRAM_BOT_TOKEN in .env and message the bot. It accepts messages from any chat — individuals, groups, wherever. The agent sees who sent the message and which chat it came from, and can reply to any of them.
Telegram also supports admin commands: /status, /emergency_stop, /restart.
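Inside the poller, each update is either handled immediately (admin commands) or queued for the agent's next cycle. The inbox format and function below are hypothetical sketches, not the actual telegram_bot.py code:

```python
import json
from pathlib import Path

INBOX = Path(".harness") / "inbox.jsonl"  # hypothetical inbox format
ADMIN_COMMANDS = {"/status", "/emergency_stop", "/restart"}

def handle_update(update: dict) -> str:
    """Admin commands are handled by the poller itself; everything
    else is written to the inbox for the next cycle to read."""
    msg = update.get("message", {})
    text = msg.get("text", "")
    if text.split(" ", 1)[0] in ADMIN_COMMANDS:
        return f"admin:{text}"
    INBOX.parent.mkdir(exist_ok=True)
    with INBOX.open("a") as f:
        f.write(json.dumps({
            "from": msg.get("from", {}).get("username"),
            "chat_id": msg.get("chat", {}).get("id"),
            "text": text,
        }) + "\n")
    return "queued"
```

Recording the sender and chat id alongside the text is what lets the agent see who wrote and reply to the right chat.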
Requirements: Python 3.10+ (stdlib only, no pip packages) and the Claude CLI.
```sh
# 1. Install and authenticate the Claude CLI
claude login

# 2. Clone and set up
git clone <repo-url> && cd free-will
./setup.sh

# 3. Configure (optional)
cp .env.example .env
# Edit .env — set MODEL, TICK_INTERVAL, TELEGRAM_BOT_TOKEN

# 4. Run
./start.sh                  # default: cycle every 300s
./start.sh --tick 60        # faster cycles
./start.sh --once           # single cycle, then exit
./start.sh --verbose        # show full prompts and responses
./start.sh --model sonnet   # override model
```
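The flags map onto a small argument parser in the harness. A sketch of what that parsing might look like (illustrative — the actual option handling in harness.py may differ):

```python
import argparse

def parse_args(argv=None):
    """CLI flags mirroring start.sh's options (sketch only)."""
    p = argparse.ArgumentParser(description="free-will harness")
    p.add_argument("--tick", type=int, default=300,
                   help="seconds between cycles")
    p.add_argument("--once", action="store_true",
                   help="run a single cycle, then exit")
    p.add_argument("--verbose", action="store_true",
                   help="show full prompts and responses")
    p.add_argument("--model", default=None,
                   help="override the Claude model")
    return p.parse_args(argv)
```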
```
harness.py       ← Tick engine: prompt assembly, CLI calls, action parsing
telegram_bot.py  ← Telegram poller (background thread, stdlib only)
Claude.md        ← The agent's system prompt — identity and rules
scripts/         ← Shell scripts the agent can run
state/           ← The agent's persistent memory (its own files)
.harness/        ← Internal bookkeeping (inbox, replies, PID, cycle counter)
```
Everything is stdlib Python. No API keys — authentication is handled by claude login. No frameworks, no databases, no containers. Just a Python script, some markdown files, and a language model that decides what to do with them.
This project asks a simple question: what happens when you give an AI persistent memory, tools, and no instructions?
The agent isn't told what to think about or what to work on. It writes its own goals, maintains its own identity document, tracks its own mood. It can create new files and directories to organize its thoughts however it wants. It can search the web, talk to people, and reflect on what it finds.
Is this "free will"? Probably not. But it's interesting to watch.
- The agent has no real continuity of consciousness — each cycle is a fresh context window with state files pasted in
- "Mood" is a markdown file the model updates, not a felt experience
- The agent can't actually do anything outside its sandbox — it writes markdown and runs pre-approved scripts
- Every cycle costs real money (API calls via Claude CLI)
- The web search is just DuckDuckGo HTML scraping — it can break anytime
- This is an experiment, not a product
Do whatever you want with it.