Turn one-line briefs into production-ready deliverables.
Swarm Organization is an AI-powered delivery pipeline that takes a single-line project brief and orchestrates it through a complete production loop:
brief -> spec -> plan -> generate -> preview -> verify -> repair -> package
You submit a request. The platform handles the rest: structured specs, code generation, quality verification, and final packaging.
Quick Start | API Reference | Architecture | Model Routing
Most AI demos stop at "generate some text" or "chat with a model."
Swarm Organization explores a different product shape: an order-style delivery system where the user submits a request and the platform orchestrates a full production pipeline around it. No prompt engineering required; just describe what you want.
Use it to validate:
- AI-assisted internal delivery tooling: automate repetitive project scaffolding
- Brief-to-project automation flows: convert requirements to runnable code
- Multi-stage LLM orchestration: coordinate specialized models per pipeline stage
- Verification and repair loops: auto-detect and fix quality issues
- Product direction: test ideas before investing in a heavier backend stack
- 7-Stage Pipeline: spec, plan, generate, runtime, verify, repair, package
- Web Console: control dashboard with workflow visualization, module status, and artifact previews
- HTTP API: RESTful endpoints for task creation, status polling, and artifact retrieval
- Model Routing: per-stage provider/model/fallback configuration via LiteLLM or direct providers
- Quality Verification: automated checks for runtime health, file integrity, and content quality
- Self-Healing: repair loop rebuilds and re-verifies when output fails checks
- Portable Output: generates runnable project starters, previews, reports, and zip packages
```mermaid
flowchart LR
  A["User Brief<br/>Web UI / API"] --> B["Spec Builder"]
  B --> C["Planner"]
  C --> D["Generator"]
  D --> E["Runtime"]
  E --> F{"Verifier"}
  F -->|pass| G["Packager"]
  F -->|fail| H["Repairer"]
  H --> D
  G --> I["Preview / Report / Zip"]
```
Each stage is a discrete engine with clean boundaries, making the system easy to understand, test, and eventually migrate to the planned Python stack.
| Stage | Engine | Responsibility |
|---|---|---|
| 1. Spec Builder | spec-builder.js | Turns the raw brief into structured project requirements |
| 2. Planner | planner-engine.js | Produces execution steps, file targets, and verification expectations |
| 3. Generator | generator-engine.js | Writes the runnable project starter and supporting files |
| 4. Runtime | runtime-engine.js | Loads generated output and produces preview assets |
| 5. Verifier | verifier-engine.js | Checks runtime health, required files, sections, and content quality |
| 6. Repairer | repairer-engine.js | Rebuilds and re-verifies when output fails verification |
| 7. Packager | packager-engine.js | Emits the final report, summary, and downloadable zip package |
- Node.js 18+
```bash
npm start
```

Then open:

http://127.0.0.1:3000
Copy .env.example to .env to enable LiteLLM gateway or direct provider routing:

```bash
cp .env.example .env
```

If no gateway or provider keys are configured, the system falls back to deterministic local behavior. The MVP remains fully runnable without any external API keys.
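As a shape-only illustration, gateway-mode configuration in .env might look like the following. The variable names below are hypothetical placeholders, not the project's real keys; consult .env.example for the actual ones.

```shell
# Hypothetical variable names -- see .env.example for the real keys.
# Gateway mode: route all stages through a local LiteLLM proxy.
LITELLM_BASE_URL=http://127.0.0.1:4000
LITELLM_API_KEY=sk-local-example

# Leave everything unset to use the deterministic local fallback.
```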
The Web UI is designed as a control console rather than a chat surface. It shows:
- Task intake: submit briefs with delivery type, framework, style, and platform options
- Workflow visualization: real-time pipeline progress through all 7 stages
- Module status: health and state of each engine
- Artifact previews: generated previews, reports, and downloadable packages
- Task history: track all past deliveries
- Model routing state: active providers and fallback chains
- Enter a short project brief
- Pick delivery type, framework, style, and target platform
- Submit the task
- Watch the pipeline progress through spec -> plan -> generate -> runtime -> verify -> repair -> package
- Open the generated preview, report, summary, or zip artifact
Create a task:
```bash
curl -X POST http://127.0.0.1:3000/api/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Build a dark tech AI tools directory for university students",
    "outputType": "web_project",
    "framework": "nextjs",
    "style": "dark_tech",
    "targetPlatform": "web"
  }'
```

Check task status:
```bash
curl http://127.0.0.1:3000/api/tasks
curl http://127.0.0.1:3000/api/tasks/<task-id>
```

| Endpoint | Method | Description |
|---|---|---|
| /api/health | GET | Health check |
| /api/model-status | GET | Model routing status |
| /api/tasks | GET | List all tasks |
| /api/tasks | POST | Create a new task |
| /api/tasks/:id | GET | Get task status and details |
| /api/metrics | GET | System metrics |
| /api/events | GET | Event stream |
| /artifacts/... | GET | Download generated artifacts |
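A client typically creates a task and then polls GET /api/tasks/:id until it settles. The sketch below shows that polling pattern; the response shape (a task object with a `status` field, and `'done'`/`'failed'` as terminal values) is an assumption, not the documented schema. The fetch function is injected so the loop can be exercised without a running server.

```javascript
// Poll the task-status endpoint until the task reaches a terminal state.
// fetchJson(path) is expected to return a parsed JSON task object;
// the 'done'/'failed' status values are assumptions for illustration.
async function waitForTask(taskId, fetchJson, { intervalMs = 1000, maxPolls = 60 } = {}) {
  for (let i = 0; i < maxPolls; i++) {
    const task = await fetchJson(`/api/tasks/${taskId}`);
    if (task.status === 'done' || task.status === 'failed') return task;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`task ${taskId} did not finish within ${maxPolls} polls`);
}
```

In a real client, `fetchJson` would wrap `fetch('http://127.0.0.1:3000' + path)`; subscribing to /api/events avoids polling entirely.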
Each successful task writes artifacts under deliveries/<task-id>/:
| Path | Description |
|---|---|
| project/ | Generated runnable project starter |
| preview/home.svg | Visual preview asset |
| project.zip | Downloadable project package |
| delivery_report.json | Structured delivery report |
| delivery_summary.md | Human-readable delivery summary |
The backend supports staged model routing for each pipeline stage:
- Spec Builder: structured requirement extraction
- Planner: execution plan generation
- Generator: code and file generation
- Verifier: quality assessment
- Repairer: fix and rebuild
- Finalizer: report and packaging
Configure the provider, model, and fallback chain for each stage in .env (see .env.example for the available keys).
Supported modes:
- LiteLLM gateway: unified proxy for multiple providers
- Direct provider: native API integration
- Deterministic fallback: no external keys required for local development
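The three routing modes above compose naturally as a fallback chain: try each configured candidate for a stage in order, and fall through to the deterministic local mode when nothing is reachable. The config shape and model names below are illustrative, not the project's real routing schema.

```javascript
// Sketch of per-stage model routing with fallback chains.
// The routing-table shape and candidate objects are hypothetical.
function resolveModel(routing, stage, isAvailable) {
  const chain = routing[stage] || routing.default || [];
  for (const candidate of chain) {
    if (isAvailable(candidate)) return candidate;
  }
  // Deterministic local fallback: no external keys required.
  return { provider: 'local', model: 'deterministic' };
}
```

Because the fallback terminates in the local deterministic mode, every stage always resolves to something runnable, which is what keeps the MVP usable without API keys.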
src/
core/ Delivery engine, task store, event hub, knowledge base, cost manager, resource monitor
engines/ 7 pipeline engines + model router
llm/ LiteLLM client and provider abstraction
utils/ Shared helpers (env, hash, id, json, zip)
web/ Local Web Console (HTML/CSS/JS)
scripts/ Smoke tests and regression checks
docs/ Architecture notes and assets
- Delivery Engine: orchestrates the 7-stage pipeline with state management
- Task Store: file-based task persistence and status tracking
- Event Hub: real-time event streaming for UI and API consumers
- Knowledge Base: domain knowledge for spec and planning stages
- Cost Manager: token usage tracking and cost estimation
- Resource Monitor: system resource usage and health checks
- Model Router: per-stage provider selection with fallback chains
See docs/architecture.md for detailed architecture documentation and migration direction.
Run the local smoke test:

```bash
npm run smoke
```

Run backend regression checks:

```bash
npm run backend-check
```

- MVP stage: not a production multi-tenant system
- File-based persistence: no database yet
- Web project focus: primary delivery target is generated website starters
- Node.js runtime: clean stage boundaries preserved for planned Python migration
The intended long-term stack:
- Python + FastAPI + Pydantic
- PostgreSQL + Redis
- LangGraph for orchestration
This repository keeps the runtime in Node.js for now so the delivery loop remains executable on a minimal workstation with zero infrastructure dependencies.
