Hooman is a Bun-powered local AI agent CLI built with TypeScript, Strands Agents SDK, and Ink.
It gives you:
- a one-shot `exec` command for single prompts
- a stateful `chat` interface for interactive sessions
- a `daemon` command for processing MCP channel notifications in the background
- an Ink-powered `configure` workflow for editing app config, `instructions.md`, MCP servers, and installed skills
- an `acp` command for running Hooman as an Agent Client Protocol (ACP) agent over stdio
- Multiple LLM providers: `ollama`, `openai`, `anthropic`, `google`, `bedrock`, `groq`, `moonshot`, `xai`
- Local configuration under `./.hooman` when that folder exists in the current working directory, otherwise `~/.hooman`
- MCP server support via `stdio`, `streamable-http`, and `sse`
- MCP server `instructions` support: server-provided instructions are appended to the agent system prompt
- MCP channel notification support through `hooman daemon --channels`
- Skill discovery / install / removal through the integrated configure flow
- Interactive terminal UI for chat and configuration
- Bun `>= 1.0.0`
- Node/npm available if you want to install skills from the public skills catalog
- Provider credentials or a local model runtime, depending on the LLM you choose
Fastest way to get started without cloning the repo:

```shell
npx hoomanjs configure
npx hoomanjs chat
```

Or with Bun:

```shell
bunx hoomanjs configure
bunx hoomanjs chat
```

Recommended first run:
- Run `hooman configure` to choose your LLM provider and model.
- Start chatting with `hooman chat`.
- Use `hooman exec "your prompt"` for one-off tasks.
Install dependencies:

```shell
bun install
```

Run locally:

```shell
bun run src/cli.ts --help
```

Or use the dev alias:

```shell
bun run dev -- --help
```

Link the CLI locally:

```shell
bun link
hooman --help
```

Run a single prompt once:
```shell
hooman exec "Summarize the current repository"
```

Use a specific session id:

```shell
hooman exec "What changed?" --session my-session
```

Skip interactive tool approval (allows every tool call; use only when you trust the prompt and environment):

```shell
hooman exec "Summarize this repo" --yolo
```

Start an interactive stateful chat session:
```shell
hooman chat
```

Optional initial prompt:

```shell
hooman chat "Help me plan the next task"
```

Resume or pin a session id:

```shell
hooman chat --session my-session
```

Skip the in-chat tool approval UI (same semantics as `exec --yolo`):

```shell
hooman chat --yolo
```

Run a long-lived daemon that subscribes to MCP servers advertising the fixed `hooman/channel` capability and feeds each received notification into the agent as a queued prompt.
```shell
hooman daemon --channels
```

Resume or pin a session id:

```shell
hooman daemon --session my-daemon --channels
```

Skip the remote channel permission relay and allow every tool call from daemon turns (same risk profile as `exec`/`chat` with `--yolo`):

```shell
hooman daemon --channels --yolo
```

Runtime tools and prompt sections are controlled from `config.json` under `features`:
- `features.fetch.enabled`
- `features.filesystem.enabled`
- `features.shell.enabled`
- `features.ltm.enabled`
- `features.wiki.enabled`

Both `ltm` and `wiki` include dedicated Chroma settings under:

- `features.ltm.chroma` (default collection: `memory`)
- `features.wiki.chroma` (default collection: `wiki`)
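For example, enabling long-term memory against a local Chroma instance looks like this in `config.json` (a sketch; the URL and collection name below are the defaults shown in the full configuration example later in this README):

```json
{
  "features": {
    "ltm": {
      "enabled": true,
      "chroma": {
        "url": "http://127.0.0.1:8000",
        "collection": {
          "memory": "memory"
        }
      }
    }
  }
}
```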
Open the Ink configuration workflow:

```shell
hooman configure
```

The configure UI currently lets you:
- edit app configuration values
- edit `instructions.md` in your `$VISUAL`/`$EDITOR` (cross-platform fallback included)
- add, edit, and delete MCP servers with confirmation
- search, install, refresh, and remove skills
Run Hooman as an Agent Client Protocol (ACP) agent over stdio:

```shell
hooman acp
```

ACP notes:

- ACP sessions are stored under the active Hooman data directory in `acp-sessions/`
- ACP loads MCP servers passed on `session/new` and `session/load`, in addition to Hooman's local `mcp.json`
- ACP `session/new` and `session/load` support `_meta.userId` and `_meta.systemPrompt`
- when `_meta.systemPrompt` is provided, it is appended to the agent system prompt with a section break
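As an illustration, a `session/new` request carrying both `_meta` fields might look like this (a hypothetical sketch: the surrounding params follow the ACP schema, and all values are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "session/new",
  "params": {
    "cwd": "/path/to/project",
    "mcpServers": [],
    "_meta": {
      "userId": "user-123",
      "systemPrompt": "Prefer concise answers."
    }
  }
}
```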
Hooman stores its data in:

```
./.hooman/   # when this folder exists in the current working directory
~/.hooman/   # otherwise
```

Important files and folders:

- `config.json` - app name, LLM provider/model, tool approvals, feature flags, LTM/wiki settings, compaction
- `instructions.md` - system instructions used to build the agent prompt
- `mcp.json` - MCP server definitions
- `skills/` - installed skills
- `sessions/` - persisted session data
- `acp-sessions/` - persisted ACP session metadata and message snapshots
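The directory resolution can be reproduced from the shell (a minimal sketch of the lookup rule above, not Hooman's actual code):

```shell
# Mirror Hooman's data-directory lookup: prefer ./.hooman when it
# exists in the current working directory, else fall back to ~/.hooman.
if [ -d ./.hooman ]; then
  hooman_dir="./.hooman"
else
  hooman_dir="$HOME/.hooman"
fi
echo "active data directory: $hooman_dir"
```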
This is the shape managed by `hooman configure`:

```json
{
  "name": "Hooman",
  "llm": {
    "provider": "ollama",
    "model": "gemma4:e4b",
    "params": {}
  },
  "tools": {
    "allowed": []
  },
  "features": {
    "fetch": {
      "enabled": true
    },
    "filesystem": {
      "enabled": true
    },
    "shell": {
      "enabled": true
    },
    "ltm": {
      "enabled": false,
      "chroma": {
        "url": "http://127.0.0.1:8000",
        "collection": {
          "memory": "memory"
        }
      }
    },
    "wiki": {
      "enabled": false,
      "chroma": {
        "url": "http://127.0.0.1:8000",
        "collection": {
          "wiki": "wiki"
        }
      }
    }
  },
  "compaction": {
    "ratio": 0.75,
    "keep": 5
  }
}
```

Supported `llm.provider` values:

- `ollama`
- `openai`
- `anthropic`
- `google`
- `bedrock`
- `groq`
- `moonshot`
- `xai`
Good default for local usage. Example:

```json
{
  "provider": "ollama",
  "model": "gemma4:e4b",
  "params": {}
}
```

Example:

```json
{
  "provider": "openai",
  "model": "gpt-5",
  "params": {
    "apiKey": "..."
  }
}
```

Provider-specific settings such as `apiKey`, `authToken`, `baseURL`, and `headers` are supported. Other values are forwarded into the model config.

```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}
```

Uses Strands `GoogleModel` on top of `@google/genai`. Top-level options like `apiKey`, `client`, `clientConfig`, and `builtInTools` are supported; other values go into Google generation params.

```json
{
  "provider": "google",
  "model": "gemini-2.5-flash",
  "params": {
    "apiKey": "...",
    "temperature": 0.7,
    "maxOutputTokens": 2048,
    "topP": 0.9,
    "topK": 40
  }
}
```

Supports `region`, `clientConfig`, and optional `apiKey`, with all other values forwarded as Bedrock model options.
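A minimal sketch (the model id below is a placeholder; Bedrock typically authenticates through the standard AWS credential chain, which is why `apiKey` is optional):

```json
{
  "provider": "bedrock",
  "model": "anthropic.claude-sonnet-4-20250514-v1:0",
  "params": {
    "region": "us-east-1"
  }
}
```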
Uses the Vercel AI SDK Groq provider (`@ai-sdk/groq`) on top of Strands `VercelModel`. Provider-specific settings `apiKey`, `baseURL`, and `headers` are picked up; other values are forwarded into the model config (`temperature`, `maxTokens`, etc.). Defaults to `GROQ_API_KEY` from the environment when no `apiKey` is supplied.

```json
{
  "provider": "groq",
  "model": "gemma2-9b-it",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}
```

Uses the Vercel AI SDK Moonshot provider (`@ai-sdk/moonshotai`) on top of Strands `VercelModel`. Provider-specific settings `apiKey`, `baseURL`, `headers`, and `fetch` are picked up; other values are forwarded into the model config (`temperature`, `maxTokens`, `providerOptions`, etc.). Defaults to `MOONSHOT_API_KEY` from the environment when no `apiKey` is supplied. Moonshot reasoning models such as `kimi-k2-thinking` can be configured through `params.providerOptions.moonshotai`.
```json
{
  "provider": "moonshot",
  "model": "kimi-k2.5",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}
```

Uses the Vercel AI SDK xAI provider (`@ai-sdk/xai`) on top of Strands `VercelModel`. Provider-specific settings `apiKey`, `baseURL`, and `headers` are picked up; other values are forwarded into the model config (`temperature`, `maxTokens`, etc.). Defaults to `XAI_API_KEY` from the environment when no `apiKey` is supplied.
```json
{
  "provider": "xai",
  "model": "grok-4.20-non-reasoning",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}
```

`mcp.json` is stored as:
```json
{
  "mcpServers": {}
}
```

Example `stdio` server:

```json
{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": {
        "EXAMPLE": "1"
      },
      "cwd": "/tmp"
    }
  }
}
```

Example `streamable-http` server:

```json
{
  "mcpServers": {
    "remote": {
      "type": "streamable-http",
      "url": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer token"
      }
    }
  }
}
```

Example `sse` server:

```json
{
  "mcpServers": {
    "legacy": {
      "type": "sse",
      "url": "https://example.com/sse",
      "headers": {
        "Authorization": "Bearer token"
      }
    }
  }
}
```

- MCP server `instructions` from the protocol `initialize` response are appended to Hooman's system prompt, after local `instructions.md` and session-specific prompt overrides.
- Hooman reads these instructions automatically from connected MCP servers when building the agent.
- `hooman daemon --channels` subscribes to MCP servers that advertise the experimental `hooman/channel` capability.
- Hooman also reads `hooman/user`, `hooman/session`, and `hooman/thread` capability paths so daemon turns preserve origin metadata from the source channel.
- When a matching notification is received, Hooman uses `params.content` as the prompt if it is a string; otherwise it JSON-stringifies the notification params and sends that to the agent.
- Daemon mode processes notifications sequentially and reuses the same agent session over time.
- Tool calls from daemon turns are no longer blanket auto-approved: if the originating MCP server supports `hooman/channel/permission`, Hooman relays a remote approval request back to that source; otherwise the tool call is denied.
- `exec`, `chat`, and `daemon` accept `--yolo` to bypass those approval paths and allow all tools without prompting or relay.
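For instance, a channel notification with a string `params.content` would be forwarded to the agent as-is (the method name and payload here are illustrative placeholders, not part of a published spec):

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/hooman/channel",
  "params": {
    "content": "New build failure on main; please summarize the log."
  }
}
```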
Skills are installed under:

```
./.hooman/skills   # when ./.hooman exists
~/.hooman/skills   # otherwise
```
The configure workflow can:
- search the public skills catalog
- install a skill from a source string, repo, URL, or local path
- refresh installed skills
- remove installed skills with confirmation
Install dependencies:

```shell
bun install
```

Run the CLI:

```shell
bun run src/cli.ts --help
```

Run typecheck:

```shell
bunx tsc --noEmit
```

MIT. See LICENSE.
