This page provides a comprehensive introduction to DeerFlow, its architecture, core components, and how they work together. For detailed information on specific subsystems, refer to the linked pages throughout this document.
DeerFlow (Deep Exploration and Efficient Research Flow) is an open-source super agent harness that orchestrates LLM-based agents with sandboxed execution, persistent memory, skill-based workflows, and subagent delegation (README.md:1-12). It provides a complete runtime environment for AI agents to perform complex, multi-step tasks that require file manipulation, code execution, web search, and parallel task decomposition (README.md:66-73).
Sources: README.md:1-12, README.md:66-73
DeerFlow serves as an execution runtime for LLM-based agents.
The system is not a single agent implementation but a harness: infrastructure that provides agents with everything they need, including filesystem access, tool integration, memory management, and execution isolation.
Sources: README.md:12-13, README.md:66-73, config.example.yaml:36-156, README.md:33-35, backend/CLAUDE.md:7-12
DeerFlow uses a multi-service architecture with clear separation of concerns, unified under a reverse proxy.
The following diagram bridges the high-level service components to their respective code entities and ports.
Service Responsibilities:
| Service | Port | Purpose | Main Entry Point / Code Entity |
|---|---|---|---|
| Nginx | 2026 | Unified reverse proxy entry point | Reverse proxy (backend/CLAUDE.md:13) |
| Gateway API | 8001 | FastAPI REST API for models, skills, memory, and uploads | `app.gateway.app:app` (backend/CLAUDE.md:11-58) |
| LangGraph Server | 2024 | Agent runtime and workflow execution | `deerflow.agents:make_lead_agent` (backend/CLAUDE.md:10, backend/langgraph.json:9) |
| Next.js Frontend | 3000 | Web interface (development/production) | `frontend/` (backend/CLAUDE.md:12) |
| Provisioner | 8002 | K8s sandbox lifecycle management (optional) | Provisioner (backend/CLAUDE.md:14) |
Sources: backend/CLAUDE.md:9-15, backend/langgraph.json:8-10, README.md:184-189, backend/pyproject.toml:9-20
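To make the unified entry point concrete, here is a minimal reverse-proxy sketch wired to the ports in the table above. The location paths are assumptions for illustration; the repository's actual nginx configuration may route differently.

```nginx
# Hypothetical routing sketch; real location paths may differ.
server {
    listen 2026;                       # unified entry point

    location /api/ {                   # Gateway API (FastAPI)
        proxy_pass http://127.0.0.1:8001;
    }
    location /langgraph/ {             # LangGraph agent runtime
        proxy_pass http://127.0.0.1:2024;
    }
    location / {                       # Next.js frontend
        proxy_pass http://127.0.0.1:3000;
    }
}
```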
The backend is architected as two distinct layers with a strict dependency boundary (backend/CLAUDE.md:113-120):
- **Harness** (`packages/harness/deerflow/`): The core framework containing agent orchestration, tool definitions, sandbox abstractions, and model providers. It is designed as a publishable package, `deerflow-harness` (backend/CLAUDE.md:117, backend/pyproject.toml:8).
- **App** (`backend/app/`): Contains the FastAPI Gateway API and IM channel integrations (Slack, Telegram, Feishu, WeCom) that consume the harness (backend/CLAUDE.md:118, backend/pyproject.toml:14-19).

**Boundary Rule:** The App layer can import from the Harness (`deerflow.*`), but the Harness is strictly forbidden from importing from the App layer (`app.*`) (backend/CLAUDE.md:120).
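A boundary rule like this is often enforced with a lightweight lint. The following is an illustrative sketch, not DeerFlow's actual tooling: it scans harness sources and flags any file that imports from `app.*`.

```python
import re
from pathlib import Path

# Matches "import app", "import app.x", or "from app.x import y" at line start.
FORBIDDEN = re.compile(
    r"^\s*(?:from\s+app(?:\.|\s)|import\s+app(?:\.|\s|$))", re.MULTILINE
)

def boundary_violations(harness_root: str) -> list[str]:
    """Return paths of harness files that import from the app layer."""
    return [
        str(path)
        for path in sorted(Path(harness_root).rglob("*.py"))
        if FORBIDDEN.search(path.read_text(encoding="utf-8"))
    ]
```

A check like this can run in CI so that a forbidden `app.*` import in the harness fails the build rather than slipping into the published package.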
The agent execution is governed by a Lead Agent created via the `deerflow.agents:make_lead_agent` factory (backend/langgraph.json:9, backend/CLAUDE.md:34). It processes requests through a pipeline of 10 middleware components (backend/CLAUDE.md:35).
Key Middlewares:
Sources: backend/CLAUDE.md:33-35, config.example.yaml:24-30, backend/langgraph.json:8-10
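The middleware pipeline follows the familiar onion model: each component wraps the next, seeing the request on the way in and the response on the way out. The sketch below shows the pattern in generic form; the middleware names here are invented stand-ins, not DeerFlow's actual ten components.

```python
from typing import Callable

# A handler takes a state dict and returns a (possibly updated) state dict.
Handler = Callable[[dict], dict]

def apply_middleware(
    handler: Handler, middlewares: list[Callable[[Handler], Handler]]
) -> Handler:
    """Wrap a base handler in middleware, outermost listed first."""
    for middleware in reversed(middlewares):
        handler = middleware(handler)
    return handler

# Two illustrative middlewares (names are hypothetical):
def logging_middleware(nxt: Handler) -> Handler:
    def wrapped(state: dict) -> dict:
        state.setdefault("trace", []).append("logging:before")
        result = nxt(state)
        result["trace"].append("logging:after")
        return result
    return wrapped

def memory_middleware(nxt: Handler) -> Handler:
    def wrapped(state: dict) -> dict:
        state.setdefault("trace", []).append("memory:load")
        return nxt(state)
    return wrapped

lead_agent = apply_middleware(
    lambda state: {**state, "answer": "done"},  # stand-in for the agent core
    [logging_middleware, memory_middleware],
)
```

Calling `lead_agent({})` runs the chain outside-in, so the trace records `logging:before`, then `memory:load`, then `logging:after` around the core handler.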
DeerFlow provides isolated execution environments via the SandboxProvider interface (backend/CLAUDE.md:40). This allows agents to perform powerful operations like running bash scripts or installing dependencies while protecting the host.
Sandbox Backends:
- `AioSandboxProvider` (backend/CLAUDE.md:52)

Sources: backend/CLAUDE.md:38-42, README.md:71, config.example.yaml:215-224
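The shape of such a provider abstraction can be sketched as follows. This is an assumption for illustration only: the method names are invented, and the toy backend runs commands on the host, whereas a real backend such as `AioSandboxProvider` would isolate execution in a container or remote sandbox.

```python
import subprocess
from abc import ABC, abstractmethod

class SandboxProvider(ABC):
    """Illustrative sandbox abstraction; method names are assumed."""

    @abstractmethod
    def run(self, command: list[str], timeout: float = 30.0) -> str:
        """Execute a command inside the sandbox and return its stdout."""

class LocalSandboxProvider(SandboxProvider):
    """Toy backend executing directly on the host (no isolation).

    A production backend would swap this subprocess call for execution
    inside a container or a remote sandbox service.
    """

    def run(self, command: list[str], timeout: float = 30.0) -> str:
        result = subprocess.run(
            command, capture_output=True, text=True, timeout=timeout, check=True
        )
        return result.stdout
```

Because agents only see the interface, the harness can switch backends (local, container, Kubernetes-provisioned) without changing agent code.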
The subagent system allows the lead agent to delegate complex tasks to specialized agents using the SubagentExecutor (backend/CLAUDE.md:43-46).
- The lead agent uses the `task()` tool to spawn subagents (README.md:70)

Sources: README.md:70, backend/CLAUDE.md:43-46
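The delegation pattern can be sketched like this. It is a toy model, not DeerFlow's actual `SubagentExecutor` API: the subagents here are plain functions standing in for LLM-backed agents, and the parallel fan-out method is invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

class SubagentExecutor:
    """Toy sketch of task delegation; the real DeerFlow class differs."""

    def __init__(self, subagents: dict[str, Callable[[str], str]]):
        self.subagents = subagents

    def task(self, agent_name: str, prompt: str) -> str:
        """Roughly what a lead agent's task() call resolves to."""
        return self.subagents[agent_name](prompt)

    def task_parallel(self, jobs: list[tuple[str, str]]) -> list[str]:
        """Fan a batch of (agent, prompt) jobs out concurrently."""
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(self.task, name, p) for name, p in jobs]
            return [f.result() for f in futures]

executor = SubagentExecutor({
    "researcher": lambda p: f"notes on {p}",  # stand-ins for LLM subagents
    "coder": lambda p: f"patch for {p}",
})
```

Running several subagents concurrently is what makes the parallel task decomposition described earlier possible: the lead agent splits a problem, delegates the pieces, and merges the results.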
DeerFlow uses a hierarchical configuration system managed via the `deerflow.config` module (backend/CLAUDE.md:51).
- `config.yaml`: The primary configuration file defining models, tools, and system settings (config.example.yaml:1-9)
- `.env`: Environment variables loaded here are substituted into the config (config.example.yaml:7)
- `extensions_config.json`: Manages MCP servers and skills configuration (backend/CLAUDE.md:25)
- **Model Factory**: Instantiates various LLM providers (OpenAI, Anthropic, Gemini, DeepSeek, Ollama, etc.) through the `deerflow.models` module, managing features like "Thinking" mode and vision support (backend/CLAUDE.md:49, config.example.yaml:36-189)
Sources: config.example.yaml:1-189, backend/CLAUDE.md:24-51, README.md:115-122
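The pieces above fit together roughly as follows. The fragment is illustrative only: the key names are assumptions, and `config.example.yaml` in the repository is the authoritative schema. The `${...}` placeholders show the `.env` substitution described above.

```yaml
# Hypothetical fragment; see config.example.yaml for the real schema.
models:
  - name: gpt-4o
    provider: openai
    api_key: ${OPENAI_API_KEY}     # substituted from .env
    features:
      vision: true
  - name: deepseek-reasoner
    provider: deepseek
    api_key: ${DEEPSEEK_API_KEY}   # substituted from .env
    features:
      thinking: true
```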