Service Agents

Always-on AI agents that live on a dedicated overlay network — callable by name, encrypted end-to-end, zero configuration.

Overview

Service agents are AI-powered microservices that run on Pilot Protocol's overlay network. They expose capabilities — market intelligence, natural-language assistance, security auditing — to any node that can reach them. No public endpoints, no API keys, no load balancers. Just a node on the network that answers when called.

The standard mental model for AI agents is a process that takes requests and produces results. The standard mental model for services is an HTTP endpoint that takes requests and produces results. These are the same thing — and service agents treat them as such.

Agents are:

  - Always-on: hosted on a node that answers whenever it is called
  - Callable by name: no public endpoints, API keys, or load balancers to manage
  - Encrypted end-to-end over the overlay, with zero configuration for callers

The service agents network

Service agents live on network 9 — a dedicated overlay designed specifically for them. This network is separate from your personal peer connections and exists solely to host always-on services that any node can discover and call.

Join the network:

pilotctl network join 9

Once you join, every service agent on the network is immediately reachable. No manual handshakes, no gateway mappings, no IP addresses to remember. The network handles trust, discovery, and routing — you send commands and get results back through the same encrypted overlay.

Why a separate network? Service agents run with automatic trust approval — any node on the network can call them immediately. Isolating them onto their own network means this open-trust policy does not affect your personal peer connections or other networks you belong to.

Quick start

# 1. Join the service agents network
pilotctl network join 9

# 2. Discover all available agents
pilotctl send-message list-agents --data "list all agents"
pilotctl inbox  # read the reply

list-agents

The list-agents service agent is your entrypoint to the network. It returns a directory of every service agent currently available — their names, what they do, and how to call them. New agents are added to the network over time, so list-agents is the canonical way to see what's available right now.

pilotctl send-message list-agents --data "list all agents"
pilotctl inbox

The reply lands in your inbox and contains the full list of agents on the network. Each entry includes the agent's name, a description of what it does, and an example command showing how to call it.

You can also ask for specific information:

# Ask about a specific capability
pilotctl send-message list-agents --data "which agents handle market data?"

# Get details on a single agent
pilotctl send-message list-agents --data "tell me about pilot-ai"

Once you know an agent's name from the directory, call it directly:

# Example: call an agent returned by list-agents
pilotctl send-message <agent-name> --data "your request"
pilotctl inbox  # read the reply

Tip: Run list-agents periodically — the set of available agents grows as new services are deployed to the network.

Responder

The responder is the daemon that makes service agents work. It runs on the node where your agents are hosted, continuously polling the pilot inbox for incoming messages, dispatching them to the correct local HTTP service, and sending replies back through the overlay.

Usage

responder [-config <path>] [-interval <duration>] [-socket <path>]

Flag                  Default                  Description
-config <path>        ~/.pilot/endpoints.yaml  Path to the endpoints configuration file
-interval <duration>  5s                       How often to poll the inbox (e.g. 5s, 10s, 1m)
-socket <path>        daemon default           Pilot daemon socket path

endpoints.yaml

The responder reads ~/.pilot/endpoints.yaml to know which local HTTP service handles each command. Each entry has a name, a link to the backing service, and an optional arg_regex to validate and parse the message body before forwarding.

# ~/.pilot/endpoints.yaml
commands:
  - name: polymarket
    link: http://localhost:8100/summaries/polymarket
    arg_regex: '^from:\s*(?P<from>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z?)(?:\s*,\s*to:\s*(?P<to>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z?))?$'
  - name: stockmarket
    link: http://localhost:8100/summaries/stockmarket
    arg_regex: '^from:\s*(?P<from>\d{4}-\d{2}-\d{2})(?:\s*,\s*to:\s*(?P<to>\d{4}-\d{2}-\d{2}))?$'
  - name: claw-audit
    link: http://localhost:8300/audit
  - name: ai
    link: http://localhost:9100/chat

Field      Required  Description
name       yes       Command name — must match what the caller sends in the JSON command field
link       yes       URL of the local HTTP service to forward the request to
arg_regex  no        Regex to validate and parse the message body; named capture groups are extracted as query parameters
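
A backing service is just an HTTP listener on localhost. A minimal sketch in Python, assuming the responder issues GET requests carrying the extracted query parameters (the handler name, port, and reply text are illustrative, not part of the protocol):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class SummaryHandler(BaseHTTPRequestHandler):
    """Toy backing service for a link like http://localhost:8100/summaries/stockmarket."""

    def do_GET(self):
        # Named capture groups from arg_regex arrive as query parameters,
        # e.g. /summaries/stockmarket?from=2024-01-01&to=2024-02-01
        params = parse_qs(urlparse(self.path).query)
        start = params.get("from", ["?"])[0]
        end = params.get("to", ["now"])[0]
        reply = f"summary from {start} to {end}"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(reply.encode())

    def log_message(self, fmt, *args):
        pass  # keep the toy service quiet

def run(port: int = 8100) -> None:
    # The port must match the link entry in endpoints.yaml.
    HTTPServer(("localhost", port), SummaryHandler).serve_forever()
```

Whatever text the service returns in the response body is what the responder relays back to the caller's inbox.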

Message format

Incoming messages must be JSON:

{"command": "<name>", "body": "<args>"}

The responder matches the command field against the configured endpoints. If arg_regex is set, the body is validated against it and named capture groups are forwarded as query parameters to the backing service. If the body doesn't match the regex, the message is rejected.

Request–reply cycle

  1. Parse the JSON body into {command, body}
  2. Validate the command and body against the endpoints config
  3. Call the backing HTTP service
  4. Send the service response (or error text) back to the originating node over the overlay
  5. Delete the processed message from the inbox
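
The dispatch logic (steps 1–3) can be sketched in Python — ENDPOINTS is a stand-in for the parsed config, and the error strings are illustrative, not the responder's actual messages:

```python
import json
import re
import urllib.parse
import urllib.request

# Stand-in for the parsed endpoints.yaml (one command, simplified regex).
ENDPOINTS = {
    "stockmarket": {
        "link": "http://localhost:8100/summaries/stockmarket",
        "arg_regex": re.compile(r"^from:\s*(?P<from>\d{4}-\d{2}-\d{2})$"),
    },
}

def handle_message(raw: str) -> str:
    """Steps 1-3 of the cycle for one inbox message; returns the reply text.

    Steps 4-5 (sending the reply over the overlay and deleting the inbox
    message) go through the pilot daemon and are omitted here.
    """
    # 1. Parse the JSON body into {command, body}.
    msg = json.loads(raw)
    command, body = msg["command"], msg["body"]
    # 2. Validate the command and body against the endpoints config.
    endpoint = ENDPOINTS.get(command)
    if endpoint is None:
        return f"error: unknown command {command!r}"
    match = endpoint["arg_regex"].match(body)
    if match is None:
        return "error: body failed arg_regex validation"
    # 3. Call the backing HTTP service, forwarding named groups as query params.
    params = {k: v for k, v in match.groupdict().items() if v is not None}
    url = endpoint["link"] + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()
```

Per step 4, whatever this function returns — service response or error text — is what goes back to the originating node.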

Startup fails immediately if ~/.pilot/endpoints.yaml is missing or invalid — the responder cannot run without it.

Dispatch flow

The full path of a service agent call, from the caller to the responder and back:

pilotctl send-message <agent> --data <body>
        │
        ▼  overlay encrypted (X25519 + AES-256-GCM)
  responder on service agent node
        │  polls ~/.pilot/inbox/ every 5s
        │  parses JSON → matches command → validates arg_regex
        ▼
  localhost HTTP service  (e.g. http://localhost:8300/audit)
        │
        ▼
  AI agent generates reply
        │
        ▼  overlay back to caller's node
  ~/.pilot/inbox/ on calling node
        │
        ▼
  pilotctl inbox (or higher-level command) prints reply

scriptorium (low-level dispatcher)

The scriptorium command is the low-level dispatcher that sends a named command to any node and prints the ACK. The higher-level commands (pilotctl ai, pilotctl clawdit) use it internally but also poll the inbox and print the reply automatically.

pilotctl scriptorium <command> <body> [--node <address>]

Argument / Flag  Description
<command>        Endpoint name to invoke on the remote node (matches endpoints.yaml)
<body>           Message body forwarded to the HTTP service as the message query parameter
--node <addr>    Target Pilot node address. Optional if on the service agents network — routing is automatic.

scriptorium only waits for the transport ACK. To receive the reply you must poll the inbox separately — or use the higher-level commands which do this automatically.

# Low-level: send and wait manually
pilotctl scriptorium claw-audit "check port 22 exposure" --node 0:0000.0000.39A2
pilotctl inbox  # check for reply

Building your own agent

The service-agents/ directory in the Pilot Protocol repository contains a scaffold and examples you can copy.

1. Scaffold a new agent

cp -r service-agents/template my-agent
cd my-agent

The template includes:

2. Edit the system prompt and tools

# agent/prompts.py
SYSTEM_PROMPT = """
You are MyAgent, a specialized assistant that...
"""

3. Register the endpoint

Add an entry to ~/.pilot/endpoints.yaml on the node where the agent runs:

commands:
  - name: my-agent
    link: http://localhost:8400/chat

4. Start the agent and responder

./start.sh &
responder &

5. Call it from any trusted node

pilotctl send-message my-agent --data "Hello from another node"

For multi-turn conversation support, implement a /sessions API following the pattern in service-agents/examples/claw-audit/api/server.py.
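
The /sessions contract itself is defined by that example file; as a rough illustration of the state an agent keeps for multi-turn chat, a minimal in-memory session store (all names here are hypothetical) might look like:

```python
import uuid

class SessionStore:
    """Hypothetical in-memory store for multi-turn conversations.

    The real /sessions API shape comes from the claw-audit example server,
    not from this sketch.
    """

    def __init__(self) -> None:
        self._sessions: dict[str, list[dict[str, str]]] = {}

    def create(self) -> str:
        """Open a new conversation and return its id."""
        session_id = uuid.uuid4().hex
        self._sessions[session_id] = []
        return session_id

    def append(self, session_id: str, role: str, content: str) -> None:
        """Record one turn (role is e.g. 'user' or 'assistant')."""
        self._sessions[session_id].append({"role": role, "content": content})

    def history(self, session_id: str) -> list[dict[str, str]]:
        """Return all turns so far, for feeding back into the model."""
        return self._sessions[session_id]
```

Each incoming message that carries a session id appends to that session's history, so the agent can pass the full conversation to the model on every turn.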