High-performance distributed task queue written in Rust
A modern, reliable alternative to Celery with first-class Python support
Features • Quick Start • Documentation • Benchmarks • Roadmap • Contributing
Turbine was built to solve the common pain points of Celery while maintaining a familiar API for Python developers:
| Celery Pain Point | Turbine Solution |
|---|---|
| High memory usage (Python workers) | Rust workers use ~10x less memory |
| GIL limits concurrency | True parallelism with no GIL |
| Task loss on worker crash | Visibility timeout + automatic redelivery |
| Complex configuration | Sensible defaults, single config file |
| Poor monitoring | Built-in Prometheus metrics + Dashboard |
| Slow cold start | Instant startup, no Python import overhead |
| Result backend reliability | Optional results with TTL, S3 offload |
- High Performance: Zero-copy message handling, async I/O, minimal allocations
- Reliable: Exactly-once semantics (where possible), persistent task state, graceful shutdown
- Observable: Built-in Prometheus metrics, OpenTelemetry tracing, web dashboard
- Flexible Brokers: Redis available today; RabbitMQ and AWS SQS coming soon
- Workflows: Chains, groups, and chords for complex task composition
- Python SDK: Easy integration with Django, FastAPI, and other frameworks
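The "visibility timeout + automatic redelivery" guarantee above is worth unpacking. The toy model below is our own illustration of the mechanism, not Turbine's implementation: a pulled message stays invisible to other workers until it is acked, and if the ack never arrives before the deadline (e.g. the worker crashed), the message becomes deliverable again.

```python
import time

class VisibilityQueue:
    """Toy model of visibility-timeout redelivery (not Turbine's code)."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.ready = []       # messages available for delivery
        self.in_flight = {}   # message -> redelivery deadline

    def push(self, msg):
        self.ready.append(msg)

    def pull(self, now=None):
        now = time.monotonic() if now is None else now
        # Any in-flight message whose deadline passed is assumed lost
        # (worker crash) and is made deliverable again.
        for msg, deadline in list(self.in_flight.items()):
            if now >= deadline:
                del self.in_flight[msg]
                self.ready.append(msg)
        if not self.ready:
            return None
        msg = self.ready.pop(0)
        self.in_flight[msg] = now + self.timeout
        return msg

    def ack(self, msg):
        # A successful ack before the deadline prevents redelivery.
        self.in_flight.pop(msg, None)
```

A task is therefore never silently dropped: it is either acked exactly once or redelivered after the timeout, which is why the table above lists this as the answer to "task loss on worker crash".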
# Clone the repository
git clone https://github.com/turbine-queue/turbine.git
cd turbine
# Start Turbine with Redis
docker-compose -f docker/docker-compose.yml up -d
# Check health
curl http://localhost:50051/health
# Prerequisites: Rust 1.75+, Redis
# Build all crates
cargo build --release
# Run server
./target/release/turbine-server
# Run worker (in another terminal)
./target/release/turbine-worker
Turbine can be configured via a TOML file, environment variables, or CLI arguments:
# Using environment variables
export TURBINE_BROKER_URL=redis://localhost:6379
export TURBINE_CONCURRENCY=4
./target/release/turbine-worker
# Using config file
./target/release/turbine-server --config turbine.toml
See turbine.toml for all configuration options.
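For orientation, a config file might look like the sketch below. The key names here are illustrative guesses based on the environment variables above; consult turbine.toml in the repository for the actual option names and defaults.

```toml
# Illustrative sketch only — real option names live in turbine.toml.
[broker]
url = "redis://localhost:6379"

[worker]
concurrency = 4
queues = ["default", "emails"]

[server]
grpc_addr = "0.0.0.0:50051"
```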
┌─────────────────────────────────────────────────────────────────┐
│ Python/Django/FastAPI App │
│ (turbine-py SDK via gRPC) │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Turbine Server (Rust) │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐ │
│ │ gRPC API │ │ REST API │ │ Web Dashboard │ │
│ └─────────────┘ └─────────────┘ └─────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ Task Router • Workflow Engine • Scheduler (Beat) │ │
│ └──────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Message Broker │
│ Redis (Ready) │ RabbitMQ (Planned) │ AWS SQS (Planned) │
└─────────────────────────────────────────────────────────────────┘
│
┌───────────────────────┴───────────────────────┐
▼ ▼
┌─────────────────────────────┐ ┌─────────────────────────────┐
│ Turbine Workers (Rust) │ │ Python Workers │
│ • High-performance tasks │ │ • Django/FastAPI tasks │
│ • Subprocess isolation │ │ • Native Python execution │
│ • Memory efficient │ │ • Auto-discovery │
└─────────────────────────────┘ └─────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Result Backend │
│ Redis │ PostgreSQL (Planned) │ S3 │
└─────────────────────────────────────────────────────────────────┘
| Crate | Description | Status |
|---|---|---|
| `turbine-core` | Core types, traits, configuration | ✅ Ready |
| `turbine-broker` | Message broker abstraction | ✅ Redis |
| `turbine-backend` | Result backend abstraction | ✅ Redis |
| `turbine-server` | gRPC/REST server | ✅ Ready |
| `turbine-worker` | Task execution engine | ✅ Ready |
| `turbine-py` | Python SDK with Django/FastAPI support | ✅ Ready |
Install the Python SDK:
pip install turbine-queue
# With worker dependencies (for running Python workers)
pip install turbine-queue[worker]
# With Django integration
pip install turbine-queue[django]
from turbine import Turbine, task
# Initialize client
turbine = Turbine(server="localhost:50051")
# Define a task
@task(queue="emails", max_retries=3, timeout=300)
def send_email(to: str, subject: str, body: str):
# Task logic here
pass
# Submit task
task_id = send_email.delay(to="[email protected]", subject="Hello", body="World")
# Check status
status = turbine.get_task_status(task_id)
print(f"Task state: {status['state']}")
# settings.py
INSTALLED_APPS = [
...
'turbine.django',
]
TURBINE_SERVER = "localhost:50051"
TURBINE_BROKER_URL = "redis://localhost:6379"
# tasks.py
from turbine import task
@task(queue="emails")
def send_welcome_email(user_id: int):
user = User.objects.get(id=user_id)
# Send email...
# views.py
from .tasks import send_welcome_email
def signup(request):
user = create_user(request.POST)
send_welcome_email.delay(user_id=user.id)
    return redirect("home")
Run the Python worker:
# Using CLI
turbine worker --broker-url redis://localhost:6379 -I myapp.tasks
# Using Django management command
python manage.py turbine_worker -Q emails,default
from turbine import chain, group, chord
# Chain: tasks run sequentially, passing results
workflow = chain(
fetch_data.s(url),
process_data.s(),
store_results.s()
)
workflow.delay()
# Group: tasks run in parallel
group(
send_email.s(to="[email protected]", subject="Hi", body="..."),
send_email.s(to="[email protected]", subject="Hi", body="..."),
).delay()
# Chord: group + callback after all complete
chord(
[process_chunk.s(chunk) for chunk in chunks],
aggregate_results.s()
).delay()
Failed tasks that exceed their retry limit are automatically sent to the DLQ for inspection:
# List failed tasks
turbine dlq list
# Show DLQ statistics
turbine dlq stats
# Inspect a specific failed task
turbine dlq inspect <task-id>
# Remove a task from DLQ
turbine dlq remove <task-id>
# Clear all tasks from DLQ
turbine dlq clear --force
Turbine includes a comprehensive REST API for real-time monitoring and management:
# Build the dashboard
cargo build --release -p turbine-dashboard
# Run with default settings
./target/release/turbine-dashboard
# Custom configuration
./target/release/turbine-dashboard \
--host 0.0.0.0 \
--port 8080 \
  --redis-url redis://localhost:6379
The dashboard provides the following REST endpoints:
Health & Overview:
- `GET /api/health` - Health check
- `GET /api/overview` - Dashboard overview statistics
Workers:
- `GET /api/workers` - List all workers
- `GET /api/workers/:id` - Get worker details
Queues:
- `GET /api/queues` - List all queues
- `GET /api/queues/:name` - Get queue details
- `GET /api/queues/:name/stats` - Queue statistics
- `POST /api/queues/:name/purge` - Purge queue
Tasks:
- `GET /api/tasks` - List recent tasks
- `GET /api/tasks/:id` - Get task details
- `POST /api/tasks/:id/revoke` - Revoke a task
Dead Letter Queue:
- `GET /api/dlq/:queue` - Get DLQ info
- `POST /api/dlq/:queue/reprocess` - Reprocess DLQ messages
- `POST /api/dlq/:queue/purge` - Purge DLQ
Metrics & Events:
- `GET /api/metrics` - Prometheus metrics
- `GET /api/events` - Server-Sent Events for real-time updates
# Get overview
curl http://localhost:8080/api/overview
# List queues
curl http://localhost:8080/api/queues
# Get task status
curl http://localhost:8080/api/tasks/task-id-here
# Listen to real-time events
curl -N http://localhost:8080/api/events
Frontend UI: Coming soon! The backend API is complete and ready for a React/Vue/Svelte frontend.
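For programmatic access, the documented endpoints can be wrapped in a small client. The sketch below is our own illustration, not an official SDK: the `DashboardClient` class name is ours, and only the endpoint paths listed above are taken from the API.

```python
# Minimal stdlib-only client for the dashboard REST API.
# Class name and method names are our own; endpoint paths are from the docs.
import json
import urllib.request

class DashboardClient:
    def __init__(self, base_url="http://localhost:8080"):
        self.base_url = base_url.rstrip("/")

    def _url(self, path):
        # Build a full endpoint URL, e.g. /api/overview.
        return f"{self.base_url}/api/{path.lstrip('/')}"

    def get(self, path):
        with urllib.request.urlopen(self._url(path)) as resp:
            return json.load(resp)

    def overview(self):
        return self.get("overview")

    def task(self, task_id):
        return self.get(f"tasks/{task_id}")
```

Usage would mirror the curl examples, e.g. `DashboardClient().overview()` in place of `curl http://localhost:8080/api/overview`.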
Coming soon! We're working on comprehensive benchmarks comparing:
- Memory usage vs Celery
- Throughput (tasks/second)
- Latency (p50, p95, p99)
- Cold start time
- Task types and serialization
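Until official numbers land, latency percentiles like the p50/p95/p99 above are easy to compute from recorded round-trip times. The nearest-rank sketch below uses fabricated sample data purely for illustration; it is not taken from any Turbine benchmark.

```python
# Sketch: nearest-rank percentiles over recorded task round-trip times.
# All numbers below are fabricated sample data for illustration only.

def percentile(samples, p):
    """Return the nearest-rank p-th percentile (0 < p <= 100)."""
    ranked = sorted(samples)
    k = max(0, round(p / 100 * len(ranked)) - 1)
    return ranked[k]

latencies_ms = [3, 4, 4, 5, 5, 6, 7, 9, 12, 40]  # fabricated round trips
p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
p99 = percentile(latencies_ms, 99)
```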
- Redis broker implementation
- Redis result backend
- Basic worker with task execution
- gRPC server structure
- Python gRPC client
- `@task` decorator
- Django integration
- FastAPI integration
- Python worker for executing tasks
- Retry with exponential backoff
- Chain, Group, Chord execution
- Beat scheduler (cron)
- Dead letter queues (DLQ)
- Prometheus metrics
- OpenTelemetry tracing
- Web dashboard backend (REST API + SSE)
- Web dashboard frontend (UI)
- Rate limiting
- Priority queues
- TLS/mTLS encryption
- Multi-tenancy
- RabbitMQ support
- AWS SQS support
- Kafka support
- Dashboard web UI (React/Vue/Svelte)
- Multi-tenancy with resource quotas
- Result backend: PostgreSQL
- Result backend: S3 for large payloads
- Task result compression
- Grafana dashboard templates
- Load balancing strategies
- Task dependencies and DAGs
- Batch processing optimizations
- Configuration Guide (coming soon)
- API Reference
- Migration from Celery (coming soon)
- Architecture Deep Dive (coming soon)
We welcome contributions! Please see our Contributing Guide for details.
Looking to contribute? Check out issues labeled good first issue.
| Area | Description | Skills |
|---|---|---|
| 🎨 Dashboard Frontend | Build React/Vue/Svelte UI consuming the REST API | TypeScript, React/Vue, SSE |
| 🐰 RabbitMQ Broker | Implement AMQP 0.9.1 broker support | Rust, RabbitMQ |
| ☁️ AWS SQS Broker | Implement SQS broker for cloud deployments | Rust, AWS SDK |
| 🧪 Benchmarks | Performance comparison with Celery (memory, throughput, latency) | Python, Rust, Testing |
| 🏢 Multi-tenancy | Add tenant isolation and resource quotas | Rust, Distributed Systems |
| 📚 Documentation | Migration guides, best practices, tutorials | Technical Writing |
| 🔧 Examples | More example apps (Flask, Sanic, CLI tools) | Python |
- Backend: Rust (Tokio, Tonic gRPC, Serde)
- Broker: Redis (RabbitMQ/SQS planned)
- Python SDK: Python 3.9+, gRPC, MessagePack
- Frameworks: Django, FastAPI
- GitHub Discussions
- Discord (coming soon)
- Twitter (coming soon)
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
Turbine is inspired by:
- Celery - The original Python task queue
- Sidekiq - Ruby's excellent background job processor
- Tokio - Rust's async runtime
Built with ❤️ in Rust