
# Architecture

## System Overview

AgentLoop is a TypeScript runtime that implements a tool-using agentic loop on top of LangChain and Mistral AI. The agent receives a natural-language task, calls tools iteratively until the task is complete, and returns a final text response.

```mermaid
graph TD
    User([User / CLI]) -->|input| Main
    Main[src/index.ts<br>AgentExecutor] -->|initialize| Init[ensureInitialized]
    Init -->|load| ToolReg[ToolRegistry<br>src/tools/registry.ts]
    Init -->|connect| MCP[MCP Bridge<br>src/mcp/bridge.ts]
    Init -->|bindTools| LLM[LLM<br>src/llm.ts]
    Init -->|load| Skills[SkillRegistry]
    Init -->|load| Agents[AgentProfileRegistry]
    Main -->|explicit profileName?| ProfileCheck{Profile<br>provided?}
    ProfileCheck -->|yes| Activate[activateProfile<br>src/agents/activator.ts]
    ProfileCheck -->|no| AgentLoop
    Activate -->|AgentRuntimeConfig| AgentLoop
    Main -->|loop| AgentLoop[Agentic Loop]
    AgentLoop -->|invoke| LLM
    LLM -->|tool_calls| ToolExec[Tool Execution]
    ToolExec -->|checkPermission| PermMgr[ToolPermissionManager<br>src/security.ts]
    ToolExec -->|run| ToolReg
    ToolExec -->|ToolMessage| AgentLoop
    AgentLoop -->|no tool_calls| Main
    Main -->|output| User
```

## Agent Loop Flow

The main loop in `src/index.ts` follows this sequence on every invocation:

```mermaid
sequenceDiagram
    participant User
    participant AL as Agent Loop
    participant LLM
    participant Tool as Tool(s)

    User->>AL: executeWithTools(input)
    AL->>AL: ensureInitialized()
    AL->>AL: build SystemMessage + trim context
    loop Until no tool_calls or MAX_ITERATIONS
        AL->>LLM: invoke(messages)
        LLM-->>AL: AIMessage
        alt has tool_calls
            loop For each tool call
                AL->>Tool: checkPermission + invoke
                Tool-->>AL: ToolMessage
            end
        else no tool_calls
            AL-->>User: { output }
        end
    end
    AL-->>User: { output } (MAX_ITERATIONS warning)
```

Key behaviours:

- `ensureInitialized()` runs exactly once — it loads all tools from `src/tools/`, connects MCP servers, binds them to the LLM with `bindTools()`, and loads prompt templates, skills, and agent profiles.
- Each LLM call is wrapped in an exponential back-off retry (`src/retry.ts`) and an AbortController-based timeout.
- Tool calls are executed through `ToolPermissionManager` (blocklist / allowlist / permission level) and `ConcurrencyLimiter`.
- Tool results are re-injected as `ToolMessage` entries so the LLM can reason about them in the next iteration.
- The context window is trimmed to `MAX_CONTEXT_TOKENS` tokens before each LLM call (`src/context.ts`).
- Every invocation produces a structured trace via `Tracer` (`src/observability.ts`) when `TRACING_ENABLED=true`.
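The behaviours above can be condensed into a sketch of the loop's core. Everything here is illustrative — the message types, the `invoke` parameter, and the `MAX_ITERATIONS` value stand in for the real implementation in `src/index.ts`, which additionally handles retries, timeouts, tracing, and permission checks:

```typescript
// Hypothetical condensed sketch of the agentic loop; the real code in
// src/index.ts wraps the LLM call in withRetry() + an AbortController
// timeout and routes tool calls through ToolPermissionManager.
type ToolCall = { id: string; name: string; args: unknown };
type AIMessage = { role: "ai"; content: string; tool_calls?: ToolCall[] };
type Message =
  | { role: "system" | "user"; content: string }
  | AIMessage
  | { role: "tool"; content: string; tool_call_id: string };

const MAX_ITERATIONS = 10; // illustrative value

async function executeWithTools(
  invoke: (msgs: Message[]) => Promise<AIMessage>,
  tools: Map<string, (args: unknown) => Promise<string>>,
  messages: Message[],
): Promise<string> {
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const ai = await invoke(messages);
    messages.push(ai);
    const calls = ai.tool_calls ?? [];
    if (calls.length === 0) return ai.content; // no tool_calls → final answer
    for (const call of calls) {
      const tool = tools.get(call.name);
      const result = tool ? await tool(call.args) : `Unknown tool: ${call.name}`;
      // Results are re-injected as tool messages for the next iteration
      messages.push({ role: "tool", content: result, tool_call_id: call.id });
    }
  }
  return "Stopped after MAX_ITERATIONS";
}
```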

## Module Map

| Module | Responsibility |
| --- | --- |
| `src/config.ts` | Dotenv initialization; exports `appConfig` with all runtime settings |
| `src/index.ts` | Main agentic loop, streaming variant, CLI REPL, `agentExecutor` export |
| `src/llm.ts` | `createLLM()` factory; provider switch block |
| `src/tools/registry.ts` | `ToolRegistry` class; `loadFromDirectory()` for dynamic tool discovery |
| `src/tools/*.ts` | Individual tool definitions; each exports a `toolDefinition` constant |
| `src/security.ts` | `ToolPermissionManager`, `ConcurrencyLimiter`, `checkNetworkAccess` |
| `src/context.ts` | Token counting and context trimming |
| `src/retry.ts` | `withRetry()`, `invokeWithTimeout()` |
| `src/streaming.ts` | `streamWithTools()` — streaming agent loop with chunk assembly |
| `src/observability.ts` | `Tracer`, `FileTracer`, `NoopTracer`, per-invocation JSON traces |
| `src/mcp/client.ts` | `McpClient` — MCP SDK wrapper for stdio/SSE transports |
| `src/mcp/bridge.ts` | `registerMcpTools()` — translates MCP tools into `ToolDefinition` entries |
| `src/subagents/runner.ts` | `runSubagent()` — isolated agent loop for a single subagent |
| `src/langgraph/types.ts` | BlocksPlan v2.0 format, `CompiledPlan` DAG, `GraphState` |
| `src/langgraph/compiler.ts` | Validates BlocksPlan, flattens to DAG, handles parallel fork/join |
| `src/langgraph/scheduler.ts` | `selectRunnable()` — dependency-and-resource-aware step scheduling |
| `src/langgraph/step-runner.ts` | Executes a single graph node via `runSubagent()` + `activateProfile()` |
| `src/langgraph/graph.ts` | LangGraphJS `StateGraph` with plan → compile → select → execute → replan → finalize nodes |
| `src/langgraph/index.ts` | `graphExecutor.invoke()` — public entry point for the LangGraph engine |
| `src/prompts/system.ts` | `getSystemPrompt()` — assembles the runtime system prompt |
| `src/prompts/registry.ts` | `PromptRegistry` — versioned prompt template storage |
| `src/prompts/context.ts` | `getCachedPromptContext()` — TTL-cached runtime context injection |
| `src/skills/registry.ts` | `SkillRegistry` — loads and exposes skill definitions |
| `src/agents/registry.ts` | `AgentProfileRegistry` — loads agent profile JSON/YAML files |
| `src/agents/activator.ts` | `activateProfile()` — applies a profile's overrides to runtime config |
| `src/workspace.ts` | `analyzeWorkspace()` — detects language, framework, and lifecycle commands |
| `src/logger.ts` | Structured Pino logger; configured from `appConfig.logger` |
| `src/errors.ts` | `ToolExecutionError`, `ToolBlockedError` typed error classes |
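As the table notes, every file under `src/tools/` exports a `toolDefinition` constant. A minimal sketch of what such a module might look like follows — the `ToolDefinition` interface shown here is an assumption (the real project uses a Zod schema where this sketch uses a plain `validate` function), and the toy `calculate` tool is purely illustrative:

```typescript
// Hypothetical shape of a tool module under src/tools/; the actual
// ToolDefinition interface in the registry may differ.
type ToolDefinition = {
  name: string;
  description: string;
  // The real project attaches a Zod schema; a plain validator stands in here.
  validate: (args: unknown) => boolean;
  execute: (args: { expression: string }) => Promise<string>;
};

export const toolDefinition: ToolDefinition = {
  name: "calculate",
  description: "Evaluates a simple arithmetic expression",
  validate: (args) =>
    typeof args === "object" && args !== null && "expression" in args,
  execute: async ({ expression }) => {
    // Toy evaluator: only handles "a + b" style input
    const [a, b] = expression.split("+").map((s) => Number(s.trim()));
    return String(a + b);
  },
};
```

`loadFromDirectory()` can then discover such modules dynamically and register each exported `toolDefinition` by name.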

## Subagent Architecture

Subagents are isolated agent loops that run a focused task with a restricted tool set. They do not share message history with the parent and communicate only through their return value.

```mermaid
graph TD
    Parent[LangGraph Graph Node] -->|runSubagent| SubRunner[runSubagent<br>src/subagents/runner.ts]
    SubRunner -->|isolated loop| IsolatedLLM[LLM + filtered tools]
    IsolatedLLM -->|SubagentResult| Parent
```

runSubagent() is the shared primitive used by both the simple agentic loop and the LangGraph engine.
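A sketch of the contract this implies — note that the option and result field names below are assumptions, not the actual types in `src/subagents/runner.ts`, and the stub body only illustrates the isolation guarantee (fresh history, restricted tools, result returned by value):

```typescript
// Hypothetical signature for runSubagent(); the real options and
// SubagentResult shapes may differ.
interface SubagentOptions {
  task: string;            // focused natural-language task
  allowedTools: string[];  // restricted tool set for this subagent
  maxIterations?: number;
}

interface SubagentResult {
  output: string;
  iterations: number;
  toolCallCount: number;
}

async function runSubagent(opts: SubagentOptions): Promise<SubagentResult> {
  // The real implementation spins up an isolated agent loop with a fresh
  // message history and only the tools in opts.allowedTools; this stub
  // merely illustrates the call contract.
  return { output: `completed: ${opts.task}`, iterations: 1, toolCallCount: 0 };
}
```

The parent never sees the subagent's intermediate messages — only the `SubagentResult` crosses the boundary.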


## LangGraph Engine

When ORCHESTRATOR=langgraph the LangGraph-based engine handles planning, parallel execution, and automatic replanning. It replaces the old sequential orchestrator.

```mermaid
graph LR
    Goal[Goal string] --> PlanNode[plan node<br>LLM produces BlocksPlan v2.0]
    PlanNode --> CompileNode[compile node<br>compiler.ts → CompiledPlan DAG]
    CompileNode --> SelectNode[select node<br>scheduler.ts selectRunnable]
    SelectNode --> ExecNode[execute node<br>step-runner.ts runPlannedStep]
    ExecNode --> HandleNode{success?}
    HandleNode -->|yes| SelectNode
    HandleNode -->|all done| FinalNode[finalize]
    HandleNode -->|failure| ReplanNode[replan node]
    ReplanNode --> CompileNode
```

Key features: parallel branches with join:all / join:any semantics, resource-aware scheduling (file/network quotas), automatic replanning on failure, and per-step agent profile activation.
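To make the plan/schedule split concrete, here is a speculative sketch of the plan types and the runnable-step predicate. The field names (`dependsOn`, `resources`, `join`) are assumptions — the real definitions live in `src/langgraph/types.ts` and `src/langgraph/scheduler.ts` and may differ:

```typescript
// Hypothetical sketch of the BlocksPlan v2.0 shape and the core predicate
// behind selectRunnable(); actual types may differ.
type JoinMode = "all" | "any";

interface PlannedStep {
  id: string;
  goal: string;
  profile?: string;        // per-step agent profile, e.g. "coder"
  dependsOn: string[];     // step ids that must finish first
  resources?: { files?: number; network?: boolean }; // scheduler quotas
}

interface BlocksPlan {
  version: "2.0";
  steps: PlannedStep[];
  join?: JoinMode;         // how parallel branches merge
}

// A step is runnable once all of its dependencies are done; the real
// scheduler additionally enforces file/network resource quotas.
function selectRunnable(plan: BlocksPlan, done: Set<string>): PlannedStep[] {
  return plan.steps.filter(
    (s) => !done.has(s.id) && s.dependsOn.every((d) => done.has(d)),
  );
}
```

Steps with no unmet dependencies can be dispatched in parallel, which is what enables the fork/join branches in the graph above.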


## Agent Profiles

Agent profiles restrict which tools the LLM can call, set a custom temperature and model, and activate skills.

```mermaid
graph TD
    Invoke["agentExecutor.invoke(input, profileName?)"] --> HasProfile{Explicit<br>profile name?}
    HasProfile -->|yes| Registry[AgentProfileRegistry.get]
    HasProfile -->|no| DefaultLoop[Default loop<br>no profile]
    Registry --> Activate[activateProfile<br>activator.ts]
    Activate --> SkillReg[activate skills<br>SkillRegistry]
    Activate --> FilterTools[filter tool list<br>by profile.tools & blockedTools]
    Activate --> AgentRuntimeConfig[AgentRuntimeConfig<br>model · temperature · maxIterations<br>activeSkills · activeTools · constraints]
    AgentRuntimeConfig --> BoundLLM[LLM bound with<br>filtered tools]
    BoundLLM --> AgentLoop[Agentic Loop]
```

Built-in profiles:

| Profile | Temperature | Tools | Max Iterations | Skills |
| --- | --- | --- | --- | --- |
| planner | 0.7 | file-read, file-write, file-list, code-search | 10 | |
| coder | 0.2 | file-read/write/edit/delete, code-run, code-search, shell, calculate | 20 | typescript-expert |
| reviewer | 0.3 | file-read, file-list, code-search, git-diff, git-log, git-status | 15 | code-reviewer |
| devops | 0.2 | shell, file-read/write/edit, file-list, git-commit/diff/log/status | 30 | git-workflow |
| security-auditor | 0.1 | file-read, file-list, code-search, git-diff, shell | 25 | security-auditor |
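Since `AgentProfileRegistry` loads profiles from JSON/YAML files, a profile presumably deserializes into something like the following. The interface below is a guess at the schema — field names mirror the diagram and table above (`tools`, `blockedTools`, `skills`), but the real shape is defined by the project:

```typescript
// Hypothetical on-disk shape of an agent profile; field values follow the
// built-in "coder" row above, but the real schema may differ.
interface AgentProfile {
  name: string;
  model?: string;          // optional model override
  temperature: number;
  maxIterations: number;
  tools: string[];         // allowlist applied by activateProfile()
  blockedTools?: string[]; // explicit blocklist
  skills: string[];        // skill names activated via SkillRegistry
}

const coderProfile: AgentProfile = {
  name: "coder",
  temperature: 0.2,
  maxIterations: 20,
  tools: ["file-read", "file-write", "file-edit", "file-delete",
          "code-run", "code-search", "shell", "calculate"],
  skills: ["typescript-expert"],
};
```

`activateProfile()` turns such a record into an `AgentRuntimeConfig`, filtering the tool list and binding the LLM to only the surviving tools.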

## MCP Integration

The Model Context Protocol (MCP) bridge connects to external tool servers at startup and registers their tools in the local ToolRegistry so the agent loop treats them identically to built-in tools.

```mermaid
graph LR
    Config[MCP_SERVERS config] --> Bridge[registerMcpTools<br>src/mcp/bridge.ts]
    Bridge --> Client1[McpClient stdio]
    Bridge --> Client2[McpClient sse]
    Client1 -->|listTools| External1[External MCP Server]
    Client2 -->|listTools| External2[Remote MCP Endpoint]
    Bridge -->|register| ToolReg[ToolRegistry]
```

Each MCP tool's JSON Schema is converted to a Zod schema at registration time. The McpClient also supports sampling callbacks (MCP server requests an LLM completion) and resource/prompt discovery.
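The schema translation step can be sketched as follows. This is a deliberately minimal stand-in: the real bridge emits Zod schemas and handles nested objects, arrays, and enums, whereas this toy converter only checks flat required/typed properties:

```typescript
// Minimal sketch of the JSON Schema → validator translation performed at
// MCP tool registration time; the real bridge produces Zod schemas and
// supports far more of the JSON Schema vocabulary.
type JsonSchema = {
  type: "object";
  properties: Record<string, { type: "string" | "number" | "boolean" }>;
  required?: string[];
};

function compileSchema(schema: JsonSchema): (args: Record<string, unknown>) => boolean {
  return (args) => {
    for (const key of schema.required ?? []) {
      if (!(key in args)) return false;          // missing required property
    }
    for (const [key, value] of Object.entries(args)) {
      const prop = schema.properties[key];
      if (!prop || typeof value !== prop.type) return false; // unknown key or wrong type
    }
    return true;
  };
}
```

Once converted, an MCP tool is indistinguishable from a built-in one as far as the agent loop is concerned.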


## Streaming Mode

When STREAMING_ENABLED=true, the agent loop switches to streamWithTools() in src/streaming.ts. The LLM is called via .stream(), text chunks are yielded immediately, and ToolCallChunk fragments are accumulated until a complete tool call is assembled before execution.

```mermaid
sequenceDiagram
    participant CLI
    participant Stream as streamWithTools
    participant LLM
    participant Tool

    CLI->>Stream: executeWithToolsStream(input)
    loop Until done
        Stream->>LLM: stream(messages)
        loop chunks
            alt text chunk
                LLM-->>CLI: yield text
            else tool call chunk
                LLM-->>Stream: accumulate
            end
        end
        Stream->>Tool: execute accumulated tool calls
        Tool-->>Stream: ToolMessage
    end
    Stream-->>CLI: (generator exhausted)
```
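The accumulation step can be illustrated with a small sketch. The chunk shape below is a simplified assumption modeled on LangChain's tool-call chunks, where one tool call's JSON arguments arrive split across several fragments sharing an `index`:

```typescript
// Sketch of ToolCallChunk accumulation as described above; the actual chunk
// handling lives in src/streaming.ts and follows LangChain's chunk types.
interface ToolCallChunk {
  index: number;   // which tool call this fragment belongs to
  name?: string;   // present on the first fragment only
  args?: string;   // partial JSON argument string
}

function accumulateToolCalls(chunks: ToolCallChunk[]): { name: string; args: unknown }[] {
  const partial = new Map<number, { name: string; args: string }>();
  for (const c of chunks) {
    const entry = partial.get(c.index) ?? { name: "", args: "" };
    if (c.name) entry.name = c.name;
    if (c.args) entry.args += c.args;  // concatenate argument fragments
    partial.set(c.index, entry);
  }
  // Parse only once every fragment for a call has arrived
  return [...partial.values()].map((e) => ({ name: e.name, args: JSON.parse(e.args) }));
}
```

Execution is deferred until the stream ends for the current LLM turn, since a tool call's arguments are not valid JSON until all of its fragments have been joined.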