xargs for AI agents — fan out prompts over inputs in parallel.
Pipe it. Prompt it. Parallelize it.
Stateless CLI that takes a prompt, fans it out across inputs (files, stdin, globs), runs them through an AI backend (pi, claude, codex), and writes templated markdown output. No sessions, no config files — just Unix pipes and flags.
- Pipe anything — `find`, `ls -l`, `grep`, or any command output becomes AI input
- Parallel execution — `-j 8` runs 8 concurrent AI jobs via goroutines
- Batch mode — group N inputs into a single prompt with `-b 10`
- Multiple backends — `pi` (default), `claude`, `codex` — swap with `--backend`
- Templated output — Go `text/template` markdown files shape every output
- Custom skills — pass skill files through to the backend with `-s`
- Dry run — preview every command before spending tokens
Requires Go 1.21+ and at least one AI CLI backend (pi, claude, or codex).
```sh
git clone https://github.com/shadowfax/colang
cd colang
make install
```

```sh
# Summarize every Go file, 8 at a time
find . -name "*.go" | colang -p "Summarize this file in 2 sentences" -o summaries/ -j 8

# Preview what would run (no tokens spent)
find . -name "*.ts" | colang -p "Review for bugs" --dry-run -j 4

# Check the results
cat summaries/_summary.md
```

colang accepts input three ways, in priority order:
```sh
# 1. File arguments (after --)
colang -p "Explain this" -o docs/ -- src/main.go src/lib.go

# 2. Glob pattern
colang -p "Explain this" -o docs/ --glob "src/**/*.go"

# 3. Stdin (piped)
find src -name "*.go" | colang -p "Explain this" -o docs/
```

When stdin lines are valid file paths, colang auto-detects them and passes them as files to the backend. Otherwise, the line content is used directly as input.
```sh
# Null-delimited (like xargs -0)
find . -name "*.go" -print0 | colang -p "Review" -d '\0' -o out/

# Custom record separator for multi-line inputs
cat data.txt | colang -p "Analyze this record" --record-sep '---' -o out/
```
```sh
# pi (default)
colang -p "Summarize" -o out/ -- *.go

# claude
colang --backend claude -p "Summarize" -o out/ -- *.go

# codex
colang --backend codex -p "Summarize" -o out/ -- *.go

# Specific model
colang --model sonnet -p "Summarize" -o out/ -- *.go

# Thinking level (pi only)
colang --thinking high -p "Find bugs in this code" -o out/ -- *.go
```
```sh
# Sequential (default)
colang -p "Review" -o out/ -- *.go

# 8 concurrent jobs
colang -p "Review" -o out/ -j 8 -- *.go

# Unlimited (one goroutine per input)
colang -p "Review" -o out/ -j 0 -- *.go

# Stop on first failure
colang -p "Review" -o out/ -j 8 --fail-fast -- *.go
```

Progress reports on stderr:
```
Running 12 items with pi (concurrency: 8)
[1/12] parser
[2/12] batcher
...
Done: 12 succeeded, 0 failed (34.2s)
```
Group inputs into batches sent as a single prompt:
```sh
# Process 5 files per prompt
find . -name "*.go" | colang -p "Review these files together for consistency" -b 5 -o reviews/ -j 4
```

```
Batched 20 inputs into 4 batches (size 5)
Running 4 items with pi (concurrency: 4)
```
Without a template, raw AI output is written as-is. With `-t`, a Go `text/template` shapes each output file.
```sh
colang -p "Summarize" -t templates/summary.md -o out/ -- *.go
```

templates/summary.md:

```markdown
# {{.Name}}

> Generated {{.Timestamp}} ({{.Duration}})

## Summary

{{.Output}}

---

*Source: {{.Input}}*
```
| Variable | Description |
|---|---|
| `{{.Input}}` | Original input (file path or content) |
| `{{.Output}}` | AI response |
| `{{.Name}}` | Input name (filename sans extension, or `input_001`) |
| `{{.Index}}` | Zero-based input index |
| `{{.Timestamp}}` | ISO 8601 timestamp |
| `{{.Duration}}` | Processing time |
```sh
# Load a skill file (passed through to backend)
colang -p "Refactor this" -s skills/refactor.md -o out/ -- *.go

# Multiple skills
colang -p "Review" -s skills/style.md -s skills/security.md -o out/ -- *.go

# Additional context files
colang -p "Refactor to match style guide" --context style-guide.md -o out/ -- *.go
```

All outputs go to the `-o` directory (default `./output/`):
```
output/
├── parser.md     # one file per input, named by input
├── batcher.md
├── runner.md
└── _summary.md   # run metadata
```

_summary.md:

```markdown
# Run Summary

- **Total inputs:** 3
- **Successes:** 3
- **Failures:** 0
- **Duration:** 12.4s
```

Preview everything without running backends or spending tokens:
```sh
find . -name "*.go" | colang -p "Review" --dry-run -j 8 --model sonnet -s skill.md
```

```
Dry run: 4 items, backend: pi
model: sonnet
skills: skill.md
[0] parser
  prompt: Review
  file: /Users/you/project/parser.go
[1] batcher
  prompt: Review
  file: /Users/you/project/batcher.go
...
```
```sh
# Generate docs for every Python file
find src -name "*.py" | colang -p "Write docstring documentation for this module" \
  -t templates/docs.md -o docs/ -j 8

# Code review with a style guide
colang -p "Review against our style guide" \
  --context .style-guide.md \
  -s skills/reviewer.md \
  -o reviews/ -j 4 -- src/*.ts

# Analyze ls -l output line by line
ls -l src/ | colang -p "What kind of file is this based on the listing?" -o analysis/

# Batch translate markdown files, 3 at a time
find docs -name "*.md" | colang -p "Translate to Spanish" -b 3 -o translated/ -j 2

# Use claude for a different perspective
find . -name "*.rs" | colang --backend claude --model opus \
  -p "Identify potential memory safety issues" -o safety/ -j 4

# Prompt from a file
colang -f prompts/review.md -o reviews/ -j 8 -- src/*.go
```
```
colang [flags] [-- files...]

Flags:
  -p, --prompt <text>        Prompt text (required unless -f)
  -f, --prompt-file <path>   Load prompt from file
  -t, --template <path>      Output template file (Go text/template)
  -o, --output <dir>         Output directory (default: ./output)
  -j, --jobs <n>             Parallel jobs (default: 1, 0 = unlimited)
  -b, --batch <n>            Batch size (0 = no batching)
  -s, --skill <path>         Skill file (repeatable)
      --context <path>       Context file (repeatable)
      --backend <name>       Backend: pi, claude, codex (default: pi)
      --model <pattern>      Model to use
      --thinking <level>     Thinking level (pi only)
      --ext <ext>            Output file extension (default: .md)
      --glob <pattern>       Input from glob pattern (supports brace expansion)
  -d, --delimiter <char>     Input delimiter (default: newline)
      --record-sep <string>  Multi-line record separator
      --dry-run              Preview commands without running
      --verbose              Show backend CLI commands
      --fail-fast            Stop on first error
  -h, --help                 Show help
```
Personal tool built for my own workflow. Feel free to fork and adapt.