Developed and maintained by Quantlix
AnyCode is a lightweight yet powerful orchestration engine written entirely in TypeScript. It enables you to compose autonomous AI agents into collaborative teams that communicate, share context, resolve task dependencies, and operate concurrently — all from a single runtime. Whether you're deploying on bare metal, inside containers, across serverless functions, or within CI/CD pipelines, AnyCode adapts to your infrastructure without friction.
Instead of managing individual agents in silos, AnyCode introduces a team-oriented paradigm: agents exchange messages through a built-in event bus, persist shared knowledge in memory stores, and execute work items according to a topologically sorted task graph. The result is a cohesive system where every agent understands its role and collaborates toward a unified objective.
# AnyCode

**Scalable Multi-Agent AI Orchestration Framework for TypeScript**

## Table of Contents

- [Key Capabilities](#key-capabilities)
- [Quick Start](#quick-start)
- [Building Agent Teams](#building-agent-teams)
- [Defining Task Pipelines](#defining-task-pipelines)
- [Creating Custom Tools](#creating-custom-tools)
- [Cross-Provider Model Mixing](#cross-provider-model-mixing)
- [Live Streaming Output](#live-streaming-output)
- [Architecture Overview](#architecture-overview)
- [Built-In Tool Reference](#built-in-tool-reference)
- [Core Concepts at a Glance](#core-concepts-at-a-glance)
- [Contributing](#contributing)
- [License](#license)
## Key Capabilities

Traditional agent libraries focus on running a single LLM in a loop. AnyCode takes a fundamentally different approach: it gives you an entire coordinated team.
| Feature | Description |
|---|---|
| Inter-agent communication | Agents relay information through `MessageBus`, share persistent state via `SharedMemory`, and synchronize through managed task queues |
| Dependency-driven execution | Express task relationships with `dependsOn` and let `TaskQueue` resolve ordering through topological sorting — no manual sequencing needed |
| Automatic goal decomposition | Provide a high-level objective and the orchestrator intelligently partitions it into targeted subtasks assigned to the right agents |
| Provider-agnostic design | Seamlessly use Anthropic Claude, OpenAI GPT, or integrate any custom backend through the `LLMAdapter` interface |
| Schema-validated tooling | Every tool is declared with a Zod schema for input validation, plus five practical tools are included out of the box |
| Bounded parallelism | Independent work items execute simultaneously, governed by a configurable concurrency semaphore |
| Flexible scheduling strategies | Choose between round-robin, least-busy, capability-match, or dependency-first assignment policies |
| Incremental streaming | Receive real-time text deltas from any agent as an `AsyncGenerator<StreamEvent>` |
| Full type safety | Strict TypeScript types enforced at every layer, with Zod validation at all external boundaries |
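The bounded-parallelism capability above can be pictured with a small promise-based counting semaphore. This is a simplified sketch of the idea, not AnyCode's actual internal implementation; the names `Semaphore` and `runParallel` merely echo the ones the framework exposes:

```typescript
// Minimal promise-based counting semaphore: at most `limit` holders at once.
class Semaphore {
  private available: number
  private waiters: Array<() => void> = []

  constructor(limit: number) {
    this.available = limit
  }

  async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--
      return
    }
    // No slot free: park until a release hands us one.
    await new Promise<void>((resolve) => this.waiters.push(resolve))
  }

  release(): void {
    const next = this.waiters.shift()
    if (next) next() // hand the slot directly to the next waiter
    else this.available++
  }
}

// Run a batch of async jobs with a bounded number in flight at once.
async function runParallel<T>(jobs: Array<() => Promise<T>>, limit: number): Promise<T[]> {
  const sem = new Semaphore(limit)
  return Promise.all(
    jobs.map(async (job) => {
      await sem.acquire()
      try {
        return await job()
      } finally {
        sem.release()
      }
    }),
  )
}
```

Because `Promise.all` preserves input order, results line up with the original job list even though completion order varies.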
## Quick Start

Install the package from npm:

```bash
npm install anycode
```

The simplest way to get started — spin up one agent and hand it a task:

```ts
import { AnyCode } from 'anycode'

const engine = new AnyCode({ defaultModel: 'claude-sonnet-4-6' })

const result = await engine.runAgent(
  {
    name: 'engineer',
    model: 'claude-sonnet-4-6',
    tools: ['bash', 'file_write'],
  },
  'Create a TypeScript utility that checks whether a given string is a palindrome, save it to /tmp/palindrome.ts, and execute it.',
)

console.log(result.output)
```

> **Note:** Export `ANTHROPIC_API_KEY` and/or `OPENAI_API_KEY` as environment variables before running any example.
## Building Agent Teams

Real-world workflows benefit from specialization. AnyCode lets you define distinct agents — each with its own system prompt, model, and tool access — and unify them into a collaborative team:
```ts
import { AnyCode } from 'anycode'
import type { AgentConfig } from 'anycode'

const planner: AgentConfig = {
  name: 'planner',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You draft module interfaces, folder layouts, and endpoint schemas.',
  tools: ['file_write'],
}

const builder: AgentConfig = {
  name: 'builder',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You translate specifications into production-ready code.',
  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
}

const auditor: AgentConfig = {
  name: 'auditor',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You inspect code for bugs, edge cases, and readability concerns.',
  tools: ['file_read', 'grep'],
}

const engine = new AnyCode({
  defaultModel: 'claude-sonnet-4-6',
  onProgress: (ev) => console.log(ev.type, ev.agent ?? ev.task ?? ''),
})

const team = engine.createTeam('backend-crew', {
  name: 'backend-crew',
  agents: [planner, builder, auditor],
  sharedMemory: true,
})

const result = await engine.runTeam(team, 'Scaffold a CRUD API for a notes app in /tmp/notes-api/')
console.log(`Completed: ${result.success}`)
console.log(`Tokens used: ${result.totalTokenUsage.output_tokens}`)
```

Each agent contributes its expertise while maintaining access to a shared knowledge base. The orchestrator handles assignment, sequencing, and concurrency behind the scenes.
## Defining Task Pipelines

For workflows that demand precise control over the execution graph, you can manually specify tasks along with their dependencies:
```ts
const result = await engine.runTasks(team, [
  {
    title: 'Draft schema definitions',
    description: 'Produce TypeScript type declarations and save them to /tmp/types.md',
    assignee: 'planner',
  },
  {
    title: 'Implement core logic',
    description: 'Read /tmp/types.md and build the service layer in /tmp/lib/',
    assignee: 'builder',
    dependsOn: ['Draft schema definitions'],
  },
  {
    title: 'Write unit tests',
    description: 'Author Vitest test suites covering all service methods.',
    assignee: 'builder',
    dependsOn: ['Implement core logic'],
  },
  {
    title: 'Audit implementation',
    description: 'Examine /tmp/lib/ and generate a detailed review report.',
    assignee: 'auditor',
    dependsOn: ['Implement core logic'],
  },
])
```

The `TaskQueue` resolves the dependency graph using topological sorting. Tasks with no unmet dependencies are dispatched in parallel, while dependent tasks wait until their predecessors complete successfully.
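That scheduling rule can be sketched as repeated "ready set" extraction — essentially Kahn's algorithm over the `dependsOn` edges. This is an illustration of the idea, not AnyCode's internal code (the real `TaskQueue` also cascades failures to dependents):

```typescript
interface TaskSpec {
  title: string
  dependsOn?: string[]
}

// Repeatedly collect every task whose dependencies are all complete.
// Each returned "wave" could be dispatched in parallel; if no task is
// ready while some remain, the graph contains a cycle.
function scheduleWaves(tasks: TaskSpec[]): string[][] {
  const done = new Set<string>()
  const pending = [...tasks]
  const waves: string[][] = []
  while (pending.length > 0) {
    const ready = pending.filter((t) => (t.dependsOn ?? []).every((d) => done.has(d)))
    if (ready.length === 0) throw new Error('dependency cycle detected')
    waves.push(ready.map((t) => t.title))
    for (const t of ready) {
      done.add(t.title)
      pending.splice(pending.indexOf(t), 1)
    }
  }
  return waves
}
```

Applied to the pipeline above, this yields three waves: the schema draft alone, then the implementation, then the tests and the audit side by side.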
## Creating Custom Tools

Extend agent capabilities by registering your own tools. Each tool is defined with a Zod input schema for automatic validation:
```ts
import { z } from 'zod'
import { defineTool, Agent, ToolRegistry, ToolExecutor, registerBuiltInTools } from 'anycode'

const fetchArticles = defineTool({
  name: 'fetch_articles',
  description: 'Retrieves relevant articles from the knowledge base.',
  inputSchema: z.object({
    topic: z.string().describe('Subject to search for.'),
    limit: z.number().optional().describe('Maximum articles to return (default 5).'),
  }),
  execute: async ({ topic, limit = 5 }) => {
    // myKnowledgeBase is your own lookup function — substitute any data source.
    const articles = await myKnowledgeBase(topic, limit)
    return { data: JSON.stringify(articles), isError: false }
  },
})

const registry = new ToolRegistry()
registerBuiltInTools(registry)
registry.register(fetchArticles)
const executor = new ToolExecutor(registry)

const agent = new Agent(
  { name: 'analyst', model: 'claude-sonnet-4-6', tools: ['fetch_articles'] },
  registry,
  executor,
)

const result = await agent.run('Summarize the latest changes in the ECMAScript specification.')
```

You can register as many custom tools as needed. Each one benefits from the same schema-validated, type-safe pipeline that the built-in tools use.
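The validate-then-execute pipeline behind `defineTool()` can be sketched in a few lines. A hand-rolled `parse` function stands in for a Zod schema so the sketch stays dependency-free; the `ToolDef`, `dispatch`, and `echo` names are illustrative, not AnyCode's real internals:

```typescript
interface ToolDef<I> {
  name: string
  // Stand-in for a Zod schema: return the parsed input or throw on bad data.
  parse: (raw: unknown) => I
  execute: (input: I) => Promise<{ data: string; isError: boolean }>
}

// Validate the raw model-supplied arguments before the tool body runs, so
// malformed input becomes a structured error result rather than a crash.
async function dispatch<I>(tool: ToolDef<I>, raw: unknown) {
  try {
    const input = tool.parse(raw)
    return await tool.execute(input)
  } catch (err) {
    return { data: `tool ${tool.name} failed: ${String(err)}`, isError: true }
  }
}

const echo: ToolDef<{ text: string }> = {
  name: 'echo',
  parse: (raw) => {
    const obj = raw as { text?: unknown }
    if (typeof obj?.text !== 'string') throw new Error('text must be a string')
    return { text: obj.text }
  },
  execute: async ({ text }) => ({ data: text.toUpperCase(), isError: false }),
}
```

With a real Zod schema, `parse` would simply be `schema.parse`; the dispatch shape stays the same.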
## Cross-Provider Model Mixing

One of AnyCode's most practical features is the ability to combine different LLM providers within a single team. Assign a reasoning-heavy model to your strategist and a fast coding model to your implementer — the framework handles routing transparently:
```ts
const thinker: AgentConfig = {
  name: 'thinker',
  model: 'claude-opus-4-6',
  provider: 'anthropic',
  systemPrompt: 'You devise architectural blueprints and technical strategies.',
  tools: ['file_write'],
}

const implementer: AgentConfig = {
  name: 'implementer',
  model: 'gpt-5.4',
  provider: 'openai',
  systemPrompt: 'You transform plans into functional, tested code.',
  tools: ['bash', 'file_read', 'file_write'],
}

const team = engine.createTeam('cross-provider', {
  name: 'cross-provider',
  agents: [thinker, implementer],
  sharedMemory: true,
})

await engine.runTeam(team, 'Create a CLI utility that transforms YAML files into JSON format.')
```

This approach lets you optimize for both capability and cost — use premium models where deep reasoning matters and lighter models where speed is the priority.
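The per-agent routing can be pictured as a lookup over an `LLMAdapter`-style interface. This is a sketch under assumed signatures — the adapters here just echo their inputs, and AnyCode's real `LLMAdapter` methods may differ:

```typescript
interface LLMAdapter {
  complete(model: string, prompt: string): Promise<string>
}

// One adapter per provider; real adapters would call the Anthropic or
// OpenAI API here. These stubs just tag the prompt with the model name.
const adapters: Record<string, LLMAdapter> = {
  anthropic: { complete: async (model, prompt) => `[${model}] ${prompt}` },
  openai: { complete: async (model, prompt) => `[${model}] ${prompt}` },
}

// The router consults the agent's `provider` field and fails loudly for
// providers no adapter has been registered for.
function resolveAdapter(provider: string): LLMAdapter {
  const adapter = adapters[provider]
  if (!adapter) throw new Error(`no adapter registered for provider "${provider}"`)
  return adapter
}
```

Registering a custom backend then amounts to adding one more entry to the adapter map.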
## Live Streaming Output

For interactive applications or real-time feedback, stream agent output token-by-token as it's generated:
```ts
import { Agent, ToolRegistry, ToolExecutor, registerBuiltInTools } from 'anycode'

const registry = new ToolRegistry()
registerBuiltInTools(registry)
const executor = new ToolExecutor(registry)

const narrator = new Agent(
  { name: 'narrator', model: 'claude-sonnet-4-6', maxTurns: 3 },
  registry,
  executor,
)

for await (const ev of narrator.stream('Describe the observer pattern in three sentences.')) {
  if (ev.type === 'text' && typeof ev.data === 'string') {
    process.stdout.write(ev.data)
  }
}
```

The streaming interface returns an `AsyncGenerator<StreamEvent>`, making it straightforward to pipe output into CLIs, WebSockets, or server-sent event streams.
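The same consumption pattern can be exercised against a stub stream, which is handy for testing UI code without an API key. The event shape below is a minimal assumption; the real `StreamEvent` type may carry additional fields:

```typescript
interface StreamEvent {
  type: 'text' | 'done'
  data: string
}

// Stub stream: one text delta per word, then a terminal event. A real
// agent.stream() call is consumed with exactly the same for-await loop.
async function* fakeStream(answer: string): AsyncGenerator<StreamEvent> {
  for (const word of answer.split(' ')) {
    yield { type: 'text', data: word + ' ' }
  }
  yield { type: 'done', data: '' }
}

// Accumulate the text deltas into a single string.
async function collect(stream: AsyncGenerator<StreamEvent>): Promise<string> {
  let out = ''
  for await (const ev of stream) {
    if (ev.type === 'text') out += ev.data
  }
  return out.trimEnd()
}
```

Swapping `fakeStream(...)` for `narrator.stream(...)` is the only change needed to go live.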
## Architecture Overview

The diagram below illustrates how AnyCode's components connect from top-level orchestration down to individual LLM calls:
```
+---------------------------------------------------------+
|                  AnyCode (orchestrator)                 |
|                                                         |
|   createTeam() · runTeam() · runTasks() · runAgent()    |
+--------------------------+------------------------------+
                           |
                +----------v----------+
                |        Team         |
                |    AgentConfig[]    |
                |     MessageBus      |
                |      TaskQueue      |
                |    SharedMemory     |
                +----------+----------+
                           |
             +-------------+-------------+
             |                           |
   +---------v--------+     +------------v-----------+
   |    AgentPool     |     |       TaskQueue        |
   |    Semaphore     |     |    dependency graph    |
   |  runParallel()   |     |    cascade failure     |
   +---------+--------+     +------------------------+
             |
   +---------v--------+
   |      Agent       |     +------------------------+
   | run / prompt /   |---->|       LLMAdapter       |
   |     stream       |     |   Anthropic · OpenAI   |
   +---------+--------+     +------------------------+
             |
   +---------v--------+
   |   AgentRunner    |     +------------------------+
   |  conversation    |---->|      ToolRegistry      |
   | loop + dispatch  |     |  defineTool() + five   |
   +------------------+     |    built-in tools      |
                            +------------------------+
```
Data flow summary:
- The orchestrator receives a goal or an explicit task list
- A Team manages the agent roster, message bus, and shared memory
- The AgentPool dispatches work using a bounded concurrency semaphore
- The TaskQueue resolves dependencies via topological sort and cascades failures
- Each Agent runs a conversation loop through AgentRunner, invoking tools from the ToolRegistry as needed
- LLM calls are routed through the LLMAdapter abstraction, supporting any provider
## Built-In Tool Reference

AnyCode ships with five practical tools that cover the most common agent operations:
| Tool | What It Does |
|---|---|
| `bash` | Executes shell commands with stdout/stderr capture, configurable timeout, and working-directory support |
| `file_read` | Reads file contents from an absolute path, with optional offset and line limit for handling large files |
| `file_write` | Creates or overwrites a file at the specified path — parent directories are generated automatically |
| `file_edit` | Performs targeted substring replacement within a file, with an option to replace all occurrences |
| `grep` | Runs regex-based searches across files, leveraging ripgrep when available or falling back to a pure Node.js implementation |
All tools follow the same `defineTool()` pattern, so extending or replacing them works identically to registering custom tools.
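The targeted-replacement behavior described for `file_edit` can be approximated in a few lines. This sketch operates on strings for clarity; the actual tool reads and writes the file on disk and reports failures through the tool-result channel:

```typescript
// Replace the first occurrence of `oldText` (or every occurrence when
// `replaceAll` is set) and fail loudly when the target substring is absent —
// a silent no-op would let the agent believe the edit succeeded.
function applyEdit(
  content: string,
  oldText: string,
  newText: string,
  replaceAll = false,
): string {
  if (!content.includes(oldText)) {
    throw new Error('oldText not found in file content')
  }
  return replaceAll ? content.split(oldText).join(newText) : content.replace(oldText, newText)
}
```

The split/join form avoids the special `$`-pattern handling that `String.prototype.replace` applies to its replacement argument.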
## Core Concepts at a Glance

| Concept | Component | Responsibility |
|---|---|---|
| Conversation loop | `AgentRunner` | Manages the model ↔ tool turn cycle until the task completes |
| Typed tool declaration | `defineTool()` | Defines tools with Zod-validated input schemas |
| Orchestration | `AnyCode` | Decomposes goals, assigns work, and manages concurrency |
| Team coordination | `Team` + `MessageBus` | Enables inter-agent messaging and shared knowledge state |
| Task scheduling | `TaskQueue` | Resolves execution order through topological dependency sorting |
## Contributing

Contributions, suggestions, and issue reports are welcome. Please open an issue or submit a pull request on the GitHub repository.
## License

Released under the MIT License — see [LICENSE](LICENSE) for details.
Built with purpose by Quantlix