A Python execution engine for JSON-defined AI workflows - Recipe Executor runs structured "recipes" that combine file operations, LLM interactions, and control flow into automated workflows. Perfect for AI-powered content generation, file processing, and complex automation tasks.
Recipe Executor is a pure execution engine that runs JSON "recipes" - structured workflow definitions that describe automated tasks. Think of it as a workflow engine specifically designed for AI-powered automation.
Key Features:
- 🤖 Multi-LLM Support - OpenAI, Anthropic, Azure OpenAI, Ollama
- 📁 File Operations - Read/write files with JSON/YAML parsing
- 🔄 Control Flow - Conditionals, loops, parallel execution
- 🛠️ Tool Integration - MCP (Model Context Protocol) server support
- 🎯 Context Management - Shared state across workflow steps
- ⚡ Concurrent Execution - Built-in parallelization and resource management
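Concurrent execution here means running independent work items at once while keeping the number of in-flight tasks bounded. As a rough asyncio illustration of that idea (a conceptual sketch, not the library's actual code; `run_bounded` and `limit` are hypothetical names):

```python
import asyncio


async def run_bounded(factories, limit=3):
    """Run coroutine factories concurrently, at most `limit` at a time."""
    sem = asyncio.Semaphore(limit)

    async def guarded(factory):
        async with sem:  # wait for a free slot before starting
            return await factory()

    # gather preserves input order in its result list
    return await asyncio.gather(*(guarded(f) for f in factories))


async def main():
    async def step(i):
        await asyncio.sleep(0.01)  # stand-in for an LLM call or file I/O
        return f"step-{i} done"

    return await run_bounded([lambda i=i: step(i) for i in range(5)], limit=3)


print(asyncio.run(main()))
```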
Install with pip:

```bash
pip install recipe-executor
```

Create a recipe (JSON file):

```json
{
  "name": "summarize_file",
  "steps": [
    {
      "step_type": "read_files",
      "paths": ["{{ input_file }}"]
    },
    {
      "step_type": "llm_generate",
      "prompt": "Summarize this content:\n\n{{ file_contents[0] }}"
    },
    {
      "step_type": "write_files",
      "files": [
        {
          "path": "summary.md",
          "content": "{{ llm_output }}"
        }
      ]
    }
  ]
}
```

Execute the recipe:

```bash
recipe-executor recipe.json --context input_file=document.txt
```

Configure your LLM providers via environment variables:
```bash
# OpenAI
export OPENAI_API_KEY="your-api-key"

# Anthropic
export ANTHROPIC_API_KEY="your-api-key"

# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_BASE_URL="https://your-resource.openai.azure.com/"
```

Recipe Executor provides 9 built-in step types:

- `read_files` - Read file content (supports JSON/YAML parsing, glob patterns)
- `write_files` - Write files to disk with automatic directory creation
- `llm_generate` - Generate content using various LLM providers
  - Supports structured output (JSON schemas, file specifications)
  - MCP server integration for tool access
  - Built-in web search capabilities
- `conditional` - Branch execution based on boolean conditions
- `loop` - Iterate over collections with optional concurrency
- `parallel` - Execute multiple steps concurrently
- `execute_recipe` - Execute nested recipes (composition)
- `set_context` - Set context variables and configuration
- `mcp` - Direct MCP server interactions
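Conceptually, the engine walks the step list, dispatches on `step_type`, and threads shared context through each handler, substituting `{{ ... }}` placeholders from that context along the way. A simplified stdlib sketch of that flow (illustrative only; the real implementation and template syntax are richer, and `render`/`run_steps` are hypothetical names):

```python
import re


def render(template, context):
    """Replace {{ key }} placeholders with values from the shared context."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(context[m.group(1)]), template)


def run_steps(steps, context, handlers):
    """Dispatch each step to a handler keyed by step_type; handlers mutate context."""
    for step in steps:
        handlers[step["step_type"]](step, context)
    return context


# Toy handler standing in for the real set_context step.
def set_context(step, context):
    context[step["key"]] = render(step["value"], context)


context = run_steps(
    [{"step_type": "set_context", "key": "greeting", "value": "Hello, {{ name }}!"}],
    {"name": "World"},
    {"set_context": set_context},
)
print(context["greeting"])  # Hello, World!
```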
Generate structured output with a JSON schema, then write it to disk:

```json
{
  "name": "generate_python_class",
  "steps": [
    {
      "step_type": "llm_generate",
      "prompt": "Create a Python class for {{ class_description }}",
      "response_format": {
        "type": "json_schema",
        "json_schema": {
          "name": "code_generation",
          "schema": {
            "type": "object",
            "properties": {
              "code": {"type": "string"},
              "explanation": {"type": "string"}
            }
          }
        }
      }
    },
    {
      "step_type": "write_files",
      "files": [
        {
          "path": "{{ class_name }}.py",
          "content": "{{ llm_output.code }}"
        }
      ]
    }
  ]
}
```

Process a batch of documents with a concurrent loop:

```json
{
  "name": "process_documents",
  "steps": [
    {
      "step_type": "read_files",
      "paths": ["docs/*.txt"],
      "use_glob": true
    },
    {
      "step_type": "loop",
      "items": "{{ file_contents }}",
      "concurrency": 3,
      "steps": [
        {
          "step_type": "llm_generate",
          "prompt": "Extract key points from: {{ item }}"
        },
        {
          "step_type": "write_files",
          "files": [
            {
              "path": "summaries/summary_{{ loop_index }}.md",
              "content": "{{ llm_output }}"
            }
          ]
        }
      ]
    }
  ]
}
```

Select the model, provider, and sampling parameters on an `llm_generate` step:

```json
{
  "step_type": "llm_generate",
  "model": "gpt-4o",
  "provider": "openai",
  "max_tokens": 1000,
  "temperature": 0.7,
  "prompt": "Your prompt here"
}
```

Give an `llm_generate` step access to tools via MCP servers:

```json
{
  "step_type": "llm_generate",
  "mcp_servers": [
    {
      "server_name": "web_search",
      "command": "mcp-server-web-search",
      "args": []
    }
  ],
  "tools": ["web_search"],
  "prompt": "Search for information about {{ topic }}"
}
```

Branch execution with a `conditional` step:

```json
{
  "step_type": "conditional",
  "condition": "file_exists('config.json')",
  "then_steps": [...],
  "else_steps": [...]
}
```

Command-line usage:

```
recipe-executor RECIPE_FILE [OPTIONS]

Options:
  --context KEY=VALUE   Context variables (can be used multiple times)
  --config KEY=VALUE    Configuration overrides (can be used multiple times)
  --log-dir DIR         Directory for log files (default: logs)
```

Examples:
```bash
# Basic execution
recipe-executor workflow.json

# With context variables
recipe-executor workflow.json --context input=data.txt output=results/

# With configuration overrides
recipe-executor workflow.json --config model=gpt-4o --config temperature=0.3

# Custom log directory
recipe-executor workflow.json --log-dir ./execution-logs
```

You can also use Recipe Executor programmatically:
```python
import asyncio

from recipe_executor.executor import Executor
from recipe_executor.models import Recipe
from recipe_executor.context import Context
from recipe_executor.logger import init_logger


async def run_recipe():
    # Load recipe
    with open("recipe.json") as f:
        recipe = Recipe.model_validate_json(f.read())

    # Create context
    context = Context(
        artifacts={"input": "Hello World"},
        config={"model": "gpt-4o"}
    )

    # Execute
    logger = init_logger("logs")
    executor = Executor(logger)
    await executor.execute(recipe, context)


asyncio.run(run_recipe())
```

Recipe Executor provides comprehensive error handling:
- Step-level isolation - Errors in one step don't break the entire workflow
- Detailed logging - Structured logs with step-by-step execution details
- Graceful failures - Clear error messages with context information
- Resource cleanup - Automatic cleanup of temporary resources
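Step-level isolation can be pictured as each step running inside its own try/except, so a failure is logged and recorded while the remaining steps still run. A conceptual stdlib sketch (not the library's actual error API; `run_isolated` is a hypothetical name):

```python
import logging


def run_isolated(steps, context):
    """Run each step; log failures and keep going instead of aborting the workflow."""
    errors = []
    for i, step in enumerate(steps):
        try:
            step(context)
        except Exception as exc:
            logging.error("step %d failed: %s", i, exc)
            errors.append((i, exc))
    return context, errors


def ok(ctx):
    ctx["ran"] = ctx.get("ran", 0) + 1


def boom(ctx):
    raise ValueError("bad step")


context, errors = run_isolated([ok, boom, ok], {})
print(context["ran"], len(errors))  # 2 1
```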
Recipe Executor is the core execution engine of the larger Recipe Tool ecosystem:
- recipe-tool - CLI for creating and executing recipes from natural language
- recipe-executor - This package; the pure execution engine
- Document Generator App - Web UI for document workflows
- MCP Servers - Integration with AI assistants like Claude
For more examples and advanced usage patterns, visit the Recipe Tool repository.
This project is licensed under the MIT License - see the LICENSE file for details.
This is an experimental project from Microsoft. For issues and examples: