Lightweight to run
Dagu is a single binary with no required database, message broker, or control-plane stack. Start on one machine and add workers only when you need them.
Dagu turns scripts, cron jobs, containers, HTTP tasks, SQL jobs, and approvals into one visible workflow system without forcing a rewrite.
Guided installer: adds Dagu to your PATH, sets up a background service, and creates the first admin so you can start running workflows.
No SDK required. Your business logic stays untouched.
Single binary, no required database or broker
Run scripts, containers, SSH tasks, and HTTP calls
Runs fully offline. No external services needed.
Self-hosted by default. Existing automation stays intact. Dagu adds oversight and operator controls around it.
Run schedules, dependencies, retries, queues, parameters, secrets, notifications, SSH steps, container steps, SQL jobs, and distributed execution in readable YAML.
Your scripts, services, SQL, containers, and operational commands stay as they are. Dagu orchestrates around them instead of forcing a framework or SDK into your codebase.
Every run gets status, logs, history, timing, and a visual workflow view, so jobs stop disappearing into crontabs and server log files.
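As a sketch of what that looks like in practice, here is a minimal scheduled workflow with a dependency and a retry policy. Field names follow Dagu's YAML schema; the schedule, script paths, and retry values are illustrative, not prescriptive.

```yaml
# daily-report.yaml (hypothetical): two existing scripts, orchestrated.
schedule: "0 6 * * *"          # run every day at 06:00

steps:
  - name: extract
    command: ./scripts/extract.sh   # existing script, unchanged
    retryPolicy:
      limit: 3                 # retry up to 3 times on failure
      intervalSec: 60          # wait 60s between attempts
  - name: report
    command: ./scripts/report.sh
    depends:
      - extract                # runs only after extract succeeds
```

The scripts themselves stay untouched; only the scheduling, ordering, and retry behavior move into the YAML.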
Deployment models
Run Dagu locally, self-host it, use the managed server, or combine cloud operations with private execution.
Local
Run `dagu start-all` on one machine with local file-backed state. No database, broker, or platform stack required.
Self-hosted
Keep the Dagu server, workers, secrets, logs, and execution inside your own environment.
Dagu Cloud
Use a dedicated Dagu server operated by Dagu Cloud in an isolated gVisor instance on GKE.
Hybrid
Let Dagu Cloud operate the server while private workers run Docker, private-network, or data-local steps.
Hybrid execution
Hybrid keeps the Dagu server managed while execution that needs your network, runtime, or data stays under your control.
Teams look at throughput, queues, recovery, governance, API access, and worker execution before they move scattered jobs into one control plane.
Handle thousands of workflow runs per day on a single machine; actual throughput depends on hardware, workflow shape, step duration, and queue settings.
Use queues, concurrency limits, and distributed workers to control load and spread jobs across machines.
Cron schedules, catchup, durable automatic retries, timeouts, reruns, event handler scripts, and email notifications keep failures manageable.
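A hedged sketch of how those recovery controls can combine in one workflow. The field names (`mailOn`, `retryPolicy`, `handlerOn`) follow Dagu's YAML schema; the schedule and script paths are placeholders.

```yaml
# nightly-cleanup.yaml (hypothetical): retries, email, and handler scripts.
schedule: "30 2 * * *"
mailOn:
  failure: true               # send the configured failure email

steps:
  - name: cleanup
    command: ./scripts/cleanup.sh
    retryPolicy:
      limit: 2
      intervalSec: 300        # wait 5 minutes between attempts

handlerOn:
  failure:
    command: ./scripts/alert-oncall.sh   # event handler on failure
  exit:
    command: ./scripts/teardown.sh       # always runs when the run exits
```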
Use user management, RBAC, workspaces, approvals, secrets, REST API, CLI, and webhooks for shared production workflows.
Use Cases
A practical index of operations work that starts as scripts, cron jobs, and ad hoc tasks, then needs a workflow system people can run and track.
Dagu fits teams that already have operational work spread across commands, scripts, containers, SQL jobs, HTTP tasks, and remote servers, and need a clearer way to schedule, retry, observe, and manage it.
Keep the existing commands. Add visibility, retries, approvals, and run history around them.
Hidden cron work
Bring existing shell scripts, Python scripts, HTTP calls, and scheduled jobs into Dagu without rewriting them.
Hidden cron estates become visible, retryable workflows with logs, dependencies, history, and operator controls.
The workflow stays concrete enough for engineers and visible enough for operators.
Daily jobs people can maintain
Run PostgreSQL or SQLite queries, S3 transfers, jq transforms, validation steps, and reusable sub-workflows.
Daily data workflows stay declarative, observable, and easy to retry when one step fails.
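One way such a daily data workflow might be written, assuming the SQL runs through `psql` and the reusable sub-workflow is a separate DAG file. The connection string, queries, bucket, and the `validate-orders` sub-workflow name are all hypothetical.

```yaml
# daily-data.yaml (hypothetical): query, transform, validate, upload.
schedule: "0 4 * * *"

steps:
  - name: export
    command: psql "$DB_URL" -t -A -c "select json_agg(t) from orders t" > /tmp/orders.json
  - name: transform
    command: jq '[.[] | select(.total > 0)]' /tmp/orders.json > /tmp/clean.json
    depends:
      - export
  - name: validate
    run: validate-orders      # reusable sub-workflow defined elsewhere
    depends:
      - transform
  - name: upload
    command: aws s3 cp /tmp/clean.json s3://my-bucket/daily/
    depends:
      - validate
```

If the `jq` filter rejects everything or the sub-workflow fails, the run stops before upload, and the failed step can be retried from the UI.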
Distributed media work
Run ffmpeg, thumbnail extraction, audio normalization, image processing, and other compute-heavy jobs across workers.
Conversion work can run across distributed workers while status, history, logs, and artifacts stay in one persistence layer for monitoring, debugging, and retries.
Scheduled remote jobs
Coordinate SSH backups, cleanup jobs, deploy scripts, patch windows, precondition checks, and lifecycle hooks.
Remote operations get schedules, retries, notifications, and per-step logs without requiring operators to SSH into servers for every recovery.
Container-native pipelines
Compose workflows where each step can run a Docker image, a Kubernetes Job, a shell command, or a validation check.
Image-based tasks can be routed to the right workers without building a custom control plane around containers.
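A sketch of a container step routed to a labeled worker. The `executor` and `workerSelector` fields follow Dagu's YAML schema, but the image name, media paths, and the `gpu` label are illustrative assumptions.

```yaml
# transcode.yaml (hypothetical): a Docker step pinned to matching workers.
steps:
  - name: transcode
    executor:
      type: docker
      config:
        image: linuxserver/ffmpeg:latest   # illustrative image
        autoRemove: true                   # clean up the container afterwards
    command: ffmpeg -i /data/in.mp4 /data/out.webm
    workerSelector:
      gpu: "true"             # only workers started with this label pick it up
```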
Non-engineer operations
Run diagnostics, account repair jobs, data checks, and approval-gated support actions from a simple Web UI.
Non-engineers can operate reviewed workflows while engineers keep commands, logs, and results traceable.
Small devices, visible runs
Run sensor polling, local cleanup, offline sync, health checks, and device maintenance jobs on small devices.
The single binary and file-backed state work well on edge devices while still providing visibility through the Web UI.
Optional AI-assisted operations
Run AI coding agents, agent CLIs, agent-authored YAML workflows, log analysis, repair steps, and human-reviewed automation when model assistance is useful.
AI stays a secondary capability inside the workflow instead of becoming the thing that runs everything.
Common thread
Bring scripts, scheduled jobs, server tasks, and controlled automation into one workflow engine.
Turn existing shell scripts, Docker commands, SSH tasks, and HTTP calls into reliable workflows.
steps:
  - name: health-check
    command: curl -sf http://app:8080/health
  - name: backup
    type: ssh
    config:
      host: db-server
      user: admin
      command: pg_dump mydb > /backups/daily.sql
  - name: notify
    type: http
    config:
      url: "https://hooks.slack.com/..."
      method: POST
      body: '{"text": "Backup complete"}'
Persistent AI operator for Slack and Telegram.
Debug failures, approve actions, and recover incidents without leaving the conversation.
Dagu focuses on the production layer around your existing work: schedules, dependencies, retries, logs, queues, and controlled execution.
Install Dagu with the guided wizard, then continue in the full installation guide or quickstart docs.
The script installers are the recommended path. Homebrew, npm, and Docker remain available for binary-only or container installs.
The guided installer can finish the first-run setup for you.
Discuss usage, report issues, and follow development.