vaibhavpandeyvpz/hooman


Hooman

Hooman is a Bun-powered local AI agent CLI built with TypeScript, Strands Agents SDK, and Ink.


Hooman screenshot

It gives you:

  • a one-shot exec command for single prompts
  • a stateful chat interface for interactive sessions
  • a daemon command for processing MCP channel notifications in the background
  • an Ink-powered configure workflow for editing app config, instructions.md, MCP servers, and installed skills
  • an acp command for running Hooman as an Agent Client Protocol (ACP) agent over stdio

Features

  • Multiple LLM providers: ollama, openai, anthropic, google, bedrock, groq, moonshot, xai
  • Local configuration under ./.hooman when that folder exists in the current working directory, otherwise ~/.hooman
  • MCP server support via stdio, streamable-http, and sse
  • MCP server instructions support: server-provided instructions are appended to the agent system prompt
  • MCP channel notification support through hooman daemon --channels
  • Skill discovery / install / removal through the integrated configure flow
  • Interactive terminal UI for chat and configuration

Requirements

  • Bun >= 1.0.0
  • Node/npm available if you want to install skills from the public skills catalog
  • Provider credentials or local model runtime depending on the LLM you choose

Usage

Fastest way to get started without cloning the repo:

npx hoomanjs configure
npx hoomanjs chat

Or with Bun:

bunx hoomanjs configure
bunx hoomanjs chat

Recommended first run:

  1. Run hooman configure to choose your LLM provider and model.
  2. Start chatting with hooman chat.
  3. Use hooman exec "your prompt" for one-off tasks.

Install

bun install

Run locally:

bun run src/cli.ts --help

Or use the dev alias:

bun run dev -- --help

Link the CLI locally:

bun link
hooman --help

Commands

hooman exec

Run a single prompt once.

hooman exec "Summarize the current repository"

Use a specific session id:

hooman exec "What changed?" --session my-session

Skip interactive tool approval (allows every tool call; use only when you trust the prompt and environment):

hooman exec "Summarize this repo" --yolo

hooman chat

Start an interactive stateful chat session.

hooman chat

Optional initial prompt:

hooman chat "Help me plan the next task"

Resume or pin a session id:

hooman chat --session my-session

Skip the in-chat tool approval UI (same semantics as exec --yolo):

hooman chat --yolo

hooman daemon

Run a long-lived daemon that subscribes to MCP servers advertising the fixed hooman/channel capability and feeds each received notification into the agent as a queued prompt.

hooman daemon --channels

Resume or pin a session id:

hooman daemon --session my-daemon --channels

Skip the remote channel permission relay and allow every tool call from daemon turns (same risk profile as exec / chat with --yolo):

hooman daemon --channels --yolo

Feature Flags

Runtime tools and prompt sections are controlled from config.json under features:

  • features.fetch.enabled
  • features.filesystem.enabled
  • features.shell.enabled
  • features.ltm.enabled
  • features.wiki.enabled

Both ltm and wiki include dedicated Chroma settings under:

  • features.ltm.chroma (default collection: memory)
  • features.wiki.chroma (default collection: wiki)

hooman configure

Open the Ink configuration workflow.

hooman configure

The configure UI currently lets you:

  • edit app configuration values
  • edit instructions.md in your $VISUAL / $EDITOR (cross-platform fallback included)
  • add, edit, and delete MCP servers with confirmation
  • search, install, refresh, and remove skills

hooman acp

Run Hooman as an Agent Client Protocol (ACP) agent over stdio.

hooman acp

ACP notes:

  • ACP sessions are stored under the active Hooman data directory in acp-sessions/
  • ACP loads MCP servers passed on session/new and session/load, in addition to Hooman's local mcp.json
  • ACP session/new and session/load support _meta.userId and _meta.systemPrompt
  • when _meta.systemPrompt is provided, it is appended to the agent system prompt with a section break
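For illustration, a session/new request carrying the _meta fields above might look like the fragment below. The JSON-RPC envelope and the field values are assumptions for the sketch, not taken from the ACP specification:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "session/new",
  "params": {
    "mcpServers": [],
    "_meta": {
      "userId": "user-123",
      "systemPrompt": "Prefer concise answers."
    }
  }
}
```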

Configuration Layout

Hooman stores its data in:

./.hooman/   # when this folder exists in the current working directory
~/.hooman/   # otherwise

Important files and folders:

  • config.json - app name, LLM provider/model, tool approvals, feature flags, LTM/wiki settings, compaction
  • instructions.md - system instructions used to build the agent prompt
  • mcp.json - MCP server definitions
  • skills/ - installed skills
  • sessions/ - persisted session data
  • acp-sessions/ - persisted ACP session metadata and message snapshots
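To pin a project-scoped data directory, creating the folder is enough, since Hooman only checks for its existence. A sketch of the layout described above (not a Hooman command):

```shell
# Create a project-local data directory; hooman invocations in this
# working directory will then use ./.hooman instead of ~/.hooman
mkdir -p .hooman/skills .hooman/sessions
ls -d .hooman
# → .hooman
```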

Example config.json

This is the shape managed by hooman configure:

{
  "name": "Hooman",
  "llm": {
    "provider": "ollama",
    "model": "gemma4:e4b",
    "params": {}
  },
  "tools": {
    "allowed": []
  },
  "features": {
    "fetch": {
      "enabled": true
    },
    "filesystem": {
      "enabled": true
    },
    "shell": {
      "enabled": true
    },
    "ltm": {
      "enabled": false,
      "chroma": {
        "url": "http://127.0.0.1:8000",
        "collection": {
          "memory": "memory"
        }
      }
    },
    "wiki": {
      "enabled": false,
      "chroma": {
        "url": "http://127.0.0.1:8000",
        "collection": {
          "wiki": "wiki"
        }
      }
    }
  },
  "compaction": {
    "ratio": 0.75,
    "keep": 5
  }
}
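After hand-editing config.json it can be worth confirming the file still parses, since a malformed file will likely only surface as a runtime error. This generic check uses Node (available per the requirements above) against a throwaway copy; it is illustrative only, not a Hooman command:

```shell
# Write a minimal config and confirm it parses as JSON
cat > /tmp/hooman-config-check.json <<'EOF'
{ "name": "Hooman", "llm": { "provider": "ollama", "model": "gemma4:e4b", "params": {} } }
EOF
node -e 'JSON.parse(require("fs").readFileSync("/tmp/hooman-config-check.json","utf8")); console.log("valid")'
```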

Supported llm.provider values:

  • ollama
  • openai
  • anthropic
  • google
  • bedrock
  • groq
  • moonshot
  • xai

Provider Notes

Ollama

Good default for local usage. Example:

{
  "provider": "ollama",
  "model": "gemma4:e4b",
  "params": {}
}

OpenAI

Example:

{
  "provider": "openai",
  "model": "gpt-5",
  "params": {
    "apiKey": "..."
  }
}

Anthropic

Provider-specific settings such as apiKey, authToken, baseURL, and headers are supported. Other values are forwarded into the model config.

{
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}

Google

Uses Strands GoogleModel on top of @google/genai. Top-level options like apiKey, client, clientConfig, and builtInTools are supported; other values go into Google generation params.

{
  "provider": "google",
  "model": "gemini-2.5-flash",
  "params": {
    "apiKey": "...",
    "temperature": 0.7,
    "maxOutputTokens": 2048,
    "topP": 0.9,
    "topK": 40
  }
}

Bedrock

Supports region, clientConfig, and optional apiKey, with all other values forwarded as Bedrock model options.
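By analogy with the other providers, a Bedrock entry might look like the following; the model id and region here are placeholder assumptions, not recommendations:

```json
{
  "provider": "bedrock",
  "model": "anthropic.claude-sonnet-4-20250514-v1:0",
  "params": {
    "region": "us-east-1"
  }
}
```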

Groq

Uses the Vercel AI SDK Groq provider (@ai-sdk/groq) on top of Strands VercelModel. Provider-specific settings apiKey, baseURL, and headers are picked up; other values are forwarded into the model config (temperature, maxTokens, etc.). Defaults to GROQ_API_KEY from the environment when no apiKey is supplied.

{
  "provider": "groq",
  "model": "gemma2-9b-it",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}

Moonshot

Uses the Vercel AI SDK Moonshot provider (@ai-sdk/moonshotai) on top of Strands VercelModel. Provider-specific settings apiKey, baseURL, headers, and fetch are picked up; other values are forwarded into the model config (temperature, maxTokens, providerOptions, etc.). Defaults to MOONSHOT_API_KEY from the environment when no apiKey is supplied. Moonshot reasoning models such as kimi-k2-thinking can be configured through params.providerOptions.moonshotai.

{
  "provider": "moonshot",
  "model": "kimi-k2.5",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}
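As noted above, reasoning-model options nest under params.providerOptions.moonshotai. The fragment below shows only that nesting; the keys inside the moonshotai object are provider-SDK-specific, so it is left empty here rather than guessed:

```json
{
  "provider": "moonshot",
  "model": "kimi-k2-thinking",
  "params": {
    "apiKey": "...",
    "providerOptions": {
      "moonshotai": {}
    }
  }
}
```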

xAI

Uses the Vercel AI SDK xAI provider (@ai-sdk/xai) on top of Strands VercelModel. Provider-specific settings apiKey, baseURL, and headers are picked up; other values are forwarded into the model config (temperature, maxTokens, etc.). Defaults to XAI_API_KEY from the environment when no apiKey is supplied.

{
  "provider": "xai",
  "model": "grok-4.20-non-reasoning",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}

MCP Configuration

mcp.json is stored as:

{
  "mcpServers": {}
}

Example stdio server

{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": {
        "EXAMPLE": "1"
      },
      "cwd": "/tmp"
    }
  }
}

Example streamable HTTP server

{
  "mcpServers": {
    "remote": {
      "type": "streamable-http",
      "url": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer token"
      }
    }
  }
}

Example SSE server

{
  "mcpServers": {
    "legacy": {
      "type": "sse",
      "url": "https://example.com/sse",
      "headers": {
        "Authorization": "Bearer token"
      }
    }
  }
}

MCP Notes

  • MCP server instructions from the protocol initialize response are appended to Hooman's system prompt, after local instructions.md and session-specific prompt overrides.
  • Hooman reads these instructions automatically from connected MCP servers when building the agent.
  • hooman daemon --channels subscribes to MCP servers that advertise the experimental hooman/channel capability.
  • Hooman also reads hooman/user, hooman/session, and hooman/thread capability paths so daemon turns preserve origin metadata from the source channel.
  • When a matching notification is received, Hooman uses params.content as the prompt if it is a string; otherwise it JSON-stringifies the notification params and sends that to the agent.
  • Daemon mode processes notifications sequentially and reuses the same agent session over time.
  • Tool calls from daemon turns are no longer blanket auto-approved: if the originating MCP server supports hooman/channel/permission, Hooman relays a remote approval request back to that source; otherwise the tool call is denied.
  • exec, chat, and daemon accept --yolo to bypass those approval paths and allow all tools without prompting or relay.
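Following the prompt-derivation rule above, a channel notification whose params carry a string content is used verbatim as the prompt. The shape below is hypothetical; the actual notification method name is whatever a hooman/channel server emits:

```json
{
  "method": "notifications/channel-message",
  "params": {
    "content": "New ticket assigned: #42"
  }
}
```

A notification whose params lack a string content would instead reach the agent as the JSON-stringified params object.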

Skills

Skills are installed under:

./.hooman/skills   # when ./.hooman exists
~/.hooman/skills   # otherwise

The configure workflow can:

  • search the public skills catalog
  • install a skill from a source string, repo, URL, or local path
  • refresh installed skills
  • remove installed skills with confirmation

Development

Install dependencies:

bun install

Run the CLI:

bun run src/cli.ts --help

Run typecheck:

bunx tsc --noEmit

License

MIT. See LICENSE.
