Quickstart
From zero to a running workflow in under five minutes.
1. Install
# Linux/macOS
curl -fsSL https://raw.githubusercontent.com/dagucloud/dagu/main/scripts/installer.sh | bash

# Windows (PowerShell)
irm https://raw.githubusercontent.com/dagucloud/dagu/main/scripts/installer.ps1 | iex

# Docker
docker pull ghcr.io/dagucloud/dagu:latest

# Homebrew
brew install dagu

# npm
npm install -g --ignore-scripts=false @dagucloud/dagu

The script installers run a guided wizard for PATH setup, background service setup, and the first admin account. Homebrew, npm, and Docker install the binary only.
Full options (specific versions, custom directories, service scope, uninstall, CI/non-interactive): Installation Guide.
Verify:
dagu version

2. Write your first workflow
A DAG is a YAML file. Save the following as hello.yaml:
steps:
- echo "Hello from Dagu!"
- echo "Running step 2"

Steps run sequentially by default. Each step is a shell command.
3. Run it
dagu start hello.yaml

Output:
Succeeded - 2026-04-24T15:23:07+09:00
dag: hello (0s)
├─log: .../logs/hello/.../dag-run....log
│
├─cmd_1 (0s) [succeeded]
│ ├─echo "Hello from Dagu!"
│ │
│ └─stdout: .../cmd_1....out
│ Hello from Dagu!
│
└─cmd_2 (0s) [succeeded]
  ├─echo "Running step 2"
  │
  └─stdout: .../cmd_2....out
      Running step 2

Result: Succeeded

Timestamp, duration, and log paths vary by run.
Other useful commands:
dagu validate hello.yaml # Check syntax without running
dagu dry hello.yaml # Show execution plan
dagu status hello # Last run status
dagu history hello # Recent runs

Run with Docker instead:
mkdir -p ~/.dagu/dags && cp hello.yaml ~/.dagu/dags/
docker run --rm -v ~/.dagu:/var/lib/dagu ghcr.io/dagucloud/dagu:latest \
dagu start hello

4. Open the web UI
dagu start-all

Visit http://localhost:8080. The UI shows live run status, logs per step, execution history, and a YAML editor.
On first launch against an empty DAGs directory (~/.config/dagu/dags/), Dagu creates a set of example workflows (example-01-basic-sequential.yaml through example-06-container-workflow.yaml). Set DAGU_SKIP_EXAMPLES=true or skip_examples: true in config.yaml to disable.
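As a sketch of the file-based switch mentioned above, the option would sit in config.yaml like this (the key name comes from the note above; exact file location is covered in the Installation Guide):

```yaml
# config.yaml — suppress the generated example workflows on first launch.
# Equivalent to exporting DAGU_SKIP_EXAMPLES=true before `dagu start-all`.
skip_examples: true
```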
Core pieces
Dependencies
type: graph
steps:
  - id: extract
    command: ./extract.sh
  - id: transform_a
    command: ./transform_a.sh
    depends: extract
  - id: transform_b
    command: ./transform_b.sh
    depends: extract
  - id: load
    command: ./load.sh
    depends: [transform_a, transform_b]

type: graph enables parallel execution via depends. type: chain (the default) runs steps in the order they appear.
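For comparison, a minimal sketch of the same pipeline under the default type: chain, where depends is unnecessary because steps run strictly in the order listed (the two transforms no longer run in parallel):

```yaml
# Default chain mode: steps run top to bottom, one at a time.
steps:
  - id: extract
    command: ./extract.sh
  - id: transform_a
    command: ./transform_a.sh
  - id: transform_b
    command: ./transform_b.sh
  - id: load
    command: ./load.sh
```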
Parameters
params:
  - SOURCE: /data
  - DEST: /backup
steps:
  - command: tar -czf ${DEST}/backup.tar.gz ${SOURCE}

dagu start backup.yaml -- SOURCE=/important DEST=/backups

Retries and error handling
steps:
  - command: curl -f https://example.com/data.zip -o data.zip
    retry_policy:
      limit: 3
      interval_sec: 30
  - command: ./process.sh data.zip
    continue_on:
      failure: true

handler_on:
  failure:
    command: echo "run failed" | mail -s "alert" [email protected]
  exit:
    command: rm -f data.zip

Containers
Run every step in the same container:
container:
  image: python:3.11
  volumes:
    - ./data:/data

steps:
  - command: python -c "open('/data/out.txt','w').write('hi')"
  - command: python -c "print(open('/data/out.txt').read())"

Or run a single step in its own container:
steps:
  - name: build
    container:
      image: node:20-alpine
    command: npm run build

Scheduling
schedule: "0 2 * * *" # 2 AM daily
overlap_policy: skip # drop new runs while one is active
timeout_sec: 3600
steps:
  - command: ./nightly.sh

Working directory
DAGs execute in the directory of the YAML file by default. Override with working_dir:
working_dir: /app/project
dotenv: .env # resolved from working_dir
steps:
  - command: ls -la

Next steps
- Core Concepts — steps, dependencies, execution model
- Deployment Models — local, self-hosted, managed, and hybrid options
- Writing Workflows — full YAML surface
- Step Types — all 18 executors (docker, ssh, http, sql, s3, sub-DAG, …)
- Examples — ready-to-adapt patterns
- CLI Reference — every command and flag
