One YAML = Any Agent.
A Rust framework for building AI agents from a single YAML specification. No code required for common use cases.
ai-agents.rs - Documentation, guides, and examples
- Declarative behavior - everything in YAML, not code
- Language-agnostic semantics - intent, extraction, validation via LLM (no regex)
- Layered overrides - global → agent → state → skill → turn
- Safety by default - tool policies, HITL approvals, error recovery
- Extensible - custom LLMs, tools, memory, storage, hooks
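The layering idea above can be sketched in YAML. This is an illustrative sketch only — the key names below are hypothetical and chosen to convey the global → agent → state → skill → turn precedence; consult the YAML Reference for the actual schema:

```yaml
# Hypothetical keys, for illustration of override layering only.
system_prompt: "You are a support agent."            # agent-level default

states:
  billing:
    system_prompt: "You handle billing questions."   # state-level override
    skills:
      refund:
        system_prompt: "Walk the user through a refund."  # skill-level override
```

The most specific layer that defines a value wins, so a skill can narrow an agent's behavior without redefining everything above it.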
Status: 1.0.0-rc.9 — Under active development. APIs and YAML schema may change between minor versions.
- Multi-LLM with fallback - 12 providers (OpenAI, Anthropic, Google, Ollama, DeepSeek, Groq, Mistral, Cohere, xAI, Phind, OpenRouter, any OpenAI-compatible); named aliases (default, router); auto-fallback on failure
- Hierarchical state machine - nested sub-states, LLM-evaluated transitions, guard-based short-circuiting, intent-based routing, entry/exit actions
- Skill system - reusable tool + prompt workflows with LLM-based intent routing
- Built-in tools + MCP - datetime, JSON, HTTP, file, text, template, math, calculator, random, echo; connect any MCP server for hundreds more
- Tool scoping & conditions - 3-level filtering (state → spec → registry), context/state/time/semantic conditions, multi-language aliases, parallel execution
- Input/output processing pipeline - normalize, detect, extract, sanitize, validate, transform, format - all LLM-based, works across languages
- CompactingMemory - LLM-based rolling summarization, token budgeting, SQLite/Redis/file persistence
- Dynamic context - runtime, file, HTTP, env, and callback sources with Jinja2 templates in prompts
- Reasoning & reflection - chain-of-thought, ReAct, plan-and-execute, auto mode; LLM self-evaluation with criteria and retry
- Intent disambiguation - LLM-based ambiguity detection, clarification generation, multi-turn resolution
- Safety & control - error recovery with backoff, tool security (rate limits, domain restrictions), human-in-the-loop approvals with multi-language messages
- Dynamic agent spawning - runtime agent creation from YAML/templates, agent registry, inter-agent messaging
- Extensible via traits - `LLMProvider`, `Memory`, `Tool`, `ApprovalHandler`, `Summarizer`, `AgentHooks`, `ToolProvider`
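As one concrete example of the multi-LLM feature above, named aliases such as `default` and `router` can each point at a different provider. This is a sketch assembled from the pieces shown elsewhere in this README; see the YAML Reference for the exact schema:

```yaml
# Sketch: two named LLM aliases backed by different providers.
llms:
  default:
    provider: openai
    model: gpt-4.1-nano
  router:
    provider: ollama
    model: qwen3:8b
```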
See Concepts for architecture details and Providers for per-provider setup.
```toml
[dependencies]
ai-agents = "1.0.0-rc.9"
```

Create `agent.yaml`:
```yaml
# agent.yaml
name: MyAgent
system_prompt: "You are a helpful assistant."

llm:
  provider: openai
  model: gpt-4.1-nano

# For any OpenAI-compatible server:
# llm:
#   provider: openai-compatible
#   model: qwen3:8b
#   base_url: http://localhost:11434/v1

# Provider-specific extra params are also allowed.
# Example for OpenAI reasoning-capable models:
# llms:
#   default:
#     provider: openai
#     model: gpt-5.4-mini
#     reasoning_effort: low
# llm:
#   default: default
```

Run it:
```shell
cargo run -p ai-agents-cli -- run agent.yaml
```

Or embed the agent in your own Rust application:

```rust
use ai_agents::{Agent, AgentBuilder};

#[tokio::main]
async fn main() -> ai_agents::Result<()> {
    let agent = AgentBuilder::from_yaml_file("agent.yaml")?
        .auto_configure_llms()?
        .auto_configure_features()?
        .build()?;

    let response = agent.chat("Hello!").await?;
    println!("{}", response.content);
    Ok(())
}
```

To construct the provider in code instead of from YAML:

```rust
use std::sync::Arc;

use ai_agents::{AgentBuilder, ProviderType, UnifiedLLMProvider};

#[tokio::main]
async fn main() -> ai_agents::Result<()> {
    let llm = UnifiedLLMProvider::from_env(ProviderType::OpenAI, "gpt-4.1-nano")?;
    let agent = AgentBuilder::new()
        .system_prompt("You are a helpful assistant.")
        .llm(Arc::new(llm))
        .build()?;

    let response = agent.chat("Hello!").await?;
    println!("{}", response.content);
    Ok(())
}
```

See the examples/ directory for more.
```shell
# Install from crates.io
cargo install ai-agents-cli --version 1.0.0-rc.9

# Or run directly from source
cargo run -p ai-agents-cli -- run agent.yaml
```

```shell
ai-agents-cli run agent.yaml                             # interactive REPL
ai-agents-cli run agent.yaml --stream --show-tools       # stream tokens, show tool calls
ai-agents-cli run agent.yaml --show-state --show-timing  # show state transitions and timing
ai-agents-cli validate agent.yaml                        # check YAML without starting the agent
```

See the CLI Guide for REPL commands, metadata configuration, and the full reference.
See the full roadmap for what's shipped, what's next, and the complete feature catalog.
| Resource | Description |
|---|---|
| Getting Started | Install and run your first agent in under a minute |
| YAML Reference | Complete spec for agent definition files |
| CLI Guide | All commands, flags, and REPL features |
| Rust API | Embedding agents in your Rust application |
| Providers | Setup for all 12 LLM providers |
| Concepts | Architecture, lifecycle, and core ideas |
| Examples | YAML and Rust examples for every feature |
| API Docs | Auto-generated Rust API reference |
| Crate | Role |
|---|---|
| llm | Unified LLM provider interface (OpenAI, Anthropic, Google, Ollama, and more) |
| rmcp | Official Rust SDK for Model Context Protocol (MCP) |
| tokio | Async runtime |
| minijinja | Jinja2-compatible template engine for system prompts and spawner templates |
| sqlx | SQLite storage backend (optional, sqlite feature) |
| redis | Redis storage backend (optional, redis-storage feature) |
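Per the table above, the SQLite and Redis backends sit behind the optional `sqlite` and `redis-storage` features. Assuming these are feature flags on the `ai-agents` crate itself, enabling them would look like:

```toml
[dependencies]
ai-agents = { version = "1.0.0-rc.9", features = ["sqlite", "redis-storage"] }
```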
This repository is an independent open-source project maintained by the author in a personal capacity.
It is not an official product or offering of any employer, and no employer owns or governs this project.
See INDEPENDENCE.md for details.
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT)