Minimal, practical snippets that demonstrate Alloy’s core philosophy: Python for logic. English for intelligence.
- Install the package (editable install suggested): `pip install -e '.[dev]'`
- Load the environment: `cp .env.example .env` (if available) and set `OPENAI_API_KEY`
- Optional: run offline with a fake backend: `export ALLOY_BACKEND=fake`
- Each example is a standalone script: `python examples/<category>/<file>.py`
- Default model is `gpt-5-mini`; you can override with `configure(...)` or `ALLOY_MODEL`.
- Streaming policy: Commands → Streaming constraints: https://docs.alloy.fyi/guide/streaming/
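Taken together, the steps above make a short shell session; a sketch of a fully offline run (the fake backend needs no API key):

```shell
# Editable install with dev extras
pip install -e '.[dev]'

# Deterministic offline backend — no provider key required
export ALLOY_BACKEND=fake

# Every example is a standalone script
python examples/<category>/<file>.py
```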
Set one of the following before running examples. Full configuration knobs: https://docs.alloy.fyi/guide/configuration/
- OpenAI: `export OPENAI_API_KEY=...` and `export ALLOY_MODEL=gpt-5-mini`
- Anthropic (Claude): `export ANTHROPIC_API_KEY=...` and `export ALLOY_MODEL=claude-sonnet-4-20250514`; if required, also `export ALLOY_MAX_TOKENS=512`
- Google Gemini: `export GOOGLE_API_KEY=...` and `export ALLOY_MODEL=gemini-2.5-flash`
- Ollama (local): ensure a model is running via `ollama run <model>`, then `export ALLOY_MODEL=ollama:<model>`
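As a worked instance of the Ollama entry, the local flow chains the two commands with one of the example scripts (the provider-switch script path is taken from the table below; any example works):

```shell
# In one terminal: make sure the model is loaded and serving
ollama run <model>

# In another terminal: point Alloy at the local model and run an example
export ALLOY_MODEL=ollama:<model>
python examples/70-providers/00_switch_providers.py
```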
| Pattern | Example | When to Use |
|---|---|---|
| First command | 10-commands/01_first_command.py | Reusable text functions |
| Typed output | 20-typed/02_dataclass_output.py | Need structured data |
| With tools | 30-tools/01_simple_tool.py | External capabilities |
| DBC workflow | 40-contracts/02_workflow_contracts.py | Enforce order/validation |
| Streaming | 80-patterns/04_streaming_updates.py | Text-only streaming |
| Providers | 70-providers/00_switch_providers.py | Provider portability |
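For example, the first row of the table can be tried without any provider key by combining it with the fake backend:

```shell
export ALLOY_BACKEND=fake
python examples/10-commands/01_first_command.py
```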
- 00-explore/ — Explore with `ask()` (no structure needed)
- 10-commands/ — First commands, sync and async
- 20-typed/ — Provider-enforced typed outputs (primitives, dataclasses, lists)
- 30-tools/ — `@tool` basics, tools with commands, parallel calls, recipes (HTTP/file/SQL)
- 40-contracts/ — Design by Contract (`@require`/`@ensure`) and workflows
- 50-composition/ — Compose commands (routing, recursive analysis, translator network)
- 60-integration/ — Pandas, Flask endpoint, batch processing, pytest generator
- 70-providers/ — Same task across OpenAI / Anthropic / Gemini / Ollama + setup
- 80-patterns/ — RAG, self-refine, PII guardrail, streaming, retry, memory, conversation history
- 90-advanced/ — Deep-agents (dynamic + minimal), OCR via tool, observability, evals
- For quick demos without provider keys: `ALLOY_BACKEND=fake` will return deterministic canned responses for typed outputs.
- Set the provider API key and `ALLOY_MODEL`:
  - OpenAI: `export OPENAI_API_KEY=...; export ALLOY_MODEL=gpt-5-mini`
  - Anthropic (Claude): `export ANTHROPIC_API_KEY=...; export ALLOY_MODEL=claude-sonnet-4-20250514`; if required by the API, also set `export ALLOY_MAX_TOKENS=512`
  - Google Gemini: `export GOOGLE_API_KEY=...; export ALLOY_MODEL=gemini-2.5-flash`
  - Ollama (local, optional): `export ALLOY_MODEL=ollama:<model>` (ensure the model is running via `ollama run <model>`)
- Most examples only set `temperature=0.2`. The model is read from `ALLOY_MODEL`, so examples run across providers without code changes.
- Examples aim for 50–150 lines each, minimal dependencies, and deterministic behavior (e.g., `temperature=0.2`).
- Use `from alloy import ...` for all imports (no submodules).
- Provider: set `ALLOY_MODEL` and the relevant API key (see 70-providers/README.md).
- Tool loops: default `max_tool_turns=10` (override via env or `configure`).
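The knobs in the notes above can also be set in code via `configure(...)`; a hypothetical sketch, assuming `configure` accepts these keyword names (the configuration guide linked earlier is authoritative):

```python
from alloy import configure

# Keyword names below are assumptions based on this README's notes;
# check https://docs.alloy.fyi/guide/configuration/ for the real signature.
configure(
    model="gpt-5-mini",  # in place of ALLOY_MODEL
    temperature=0.2,     # the deterministic setting most examples use
    max_tool_turns=10,   # default tool-loop limit, overridable here or via env
)
```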