An Event Sourcing implementation using ScyllaDB CDC and Redpanda, built with Rust and the Actix actor model.
This project showcases Event Sourcing and CQRS patterns with CDC streaming:
- Event Sourcing - Events as source of truth
- CQRS - Separate write/read models
- CDC Streaming - Direct CDC consumption for projections
- Outbox Pattern - Reliable event publishing
- Actor Supervision - Fault-tolerant architecture
- DLQ & Retry - Production error handling
- Prometheus Metrics - Observability
```mermaid
graph TD
    A[User Command] --> B[OrderCommandHandler]
    B --> C[Load OrderAggregate from events]
    C --> D[Validate Command]
    D --> E[Apply Command to Aggregate]
    E --> F[Generate Domain Events]
    F --> G[Wrap in Event Envelopes]
    G --> H[Atomic Batch: event_store + outbox_messages]
    H --> I[ScyllaDB CDC streams changes]
    I --> J[CDC Processor consumes CDC stream]
    J --> K[Publish to Redpanda with retry/DLQ]
    K --> L[Downstream Services]

    M[Query Request] --> N[Read from event_store]
    N --> O[Replay to current state]
    O --> P[Return current state]
```
Event Sourcing:
- `EventStore` - Append-only event log (source of truth)
- `OrderAggregate` - Domain logic with business rules
- `OrderCommandHandler` - Validates commands, emits events
- `EventEnvelope` - Metadata (causation, correlation, versioning)
CDC Streaming:
- `CdcProcessor` - Streams from the `outbox_messages` table
- Direct CDC consumption for projections (no polling)
- External publishing via Redpanda
Infrastructure:
- `CoordinatorActor` - Actor supervision tree
- `DlqActor` - Dead letter queue for failed messages
- `HealthCheckActor` - System health monitoring
- Prometheus metrics on `:9090/metrics`
- Rust 1.70+
- Docker & Docker Compose
- ScyllaDB (via Docker)
- Redpanda (via Docker)
```shell
make dev
```

This command:
- Starts ScyllaDB and Redpanda with `docker-compose up -d`
- Waits for services to be ready (~25 seconds)
- Initializes all required schemas automatically:
  - `event_store` - Event log (source of truth)
  - `outbox_messages` - CDC-enabled outbox (`WITH cdc = {'enabled': true}`)
  - `aggregate_sequence` - Optimistic locking and version tracking
  - `dead_letter_queue` - Failed messages
- Runs the application with logging enabled
If you prefer to run commands separately:
```shell
# Clean restart (recommended for first run;
# includes docker-compose up and schema initialization)
make reset

# Run the application
make run
```

Watch the demo execute a complete order lifecycle:
- Create order with items
- Confirm order
- Ship order with tracking
- Deliver order with signature
Each command:
- Validates business rules
- Emits domain events
- Writes atomically to `event_store` + `outbox_messages`
- Streams via CDC to Redpanda
- Metrics: http://localhost:9090/metrics
- Redpanda Console: http://localhost:8080 (if configured)
- Logs: Structured logging with tracing
```
src/
├── event_sourcing/            # Generic Event Sourcing infrastructure
│   ├── core/                  # Core abstractions (Aggregate, EventEnvelope)
│   │   ├── aggregate.rs       # Generic Aggregate trait
│   │   └── event.rs           # EventEnvelope and DomainEvent trait
│   └── store/                 # Generic persistence layer
│       ├── event_store.rs     # Generic EventStore implementation
│       └── ...                # Future store components
├── domain/                    # Domain-specific logic
│   ├── order/                 # Order aggregate and related types
│   │   ├── aggregate.rs       # OrderAggregate implementation
│   │   ├── commands.rs        # OrderCommand enum
│   │   ├── events.rs          # OrderEvent enum and related events
│   │   ├── errors.rs          # OrderError enum
│   │   ├── value_objects.rs   # OrderItem, OrderStatus, etc.
│   │   └── command_handler.rs # OrderCommandHandler
│   ├── customer/              # Customer aggregate (example)
│   │   ├── aggregate.rs       # CustomerAggregate implementation
│   │   ├── commands.rs        # CustomerCommand enum
│   │   ├── events.rs          # CustomerEvent enum and related events
│   │   ├── errors.rs          # CustomerError enum
│   │   ├── value_objects.rs   # Customer-specific value objects
│   │   └── command_handler.rs # CustomerCommandHandler
│   └── ...                    # Future aggregates (product, payment, etc.)
├── actors/                    # Actor system for infrastructure
│   ├── core/                  # Abstract actor traits
│   ├── infrastructure/        # Concrete infrastructure actors
│   │   ├── coordinator.rs     # Supervision tree manager
│   │   ├── cdc_processor.rs   # CDC streaming with real ScyllaDB CDC
│   │   ├── dlq.rs             # Dead letter queue
│   │   └── health_monitor.rs  # Health monitoring
│   └── mod.rs                 # Actor module exports
├── db/                        # Database interaction
│   └── schema.cql             # ScyllaDB schema
├── messaging/                 # External messaging
│   └── redpanda_client.rs     # Redpanda/Kafka integration
├── utils/                     # Utility functions
│   └── retry.rs               # Retry with backoff and circuit breaker
├── metrics/                   # Prometheus metrics
│   └── metrics.rs             # Metrics definitions and server
└── main.rs                    # Application entry point
```
User intentions that MAY succeed:

```rust
OrderCommand::CreateOrder { customer_id, items }
OrderCommand::ConfirmOrder
OrderCommand::ShipOrder { tracking_number, carrier }
```

Facts that DID happen (immutable):

```rust
OrderEvent::Created { customer_id, items }
OrderEvent::Confirmed { confirmed_at }
OrderEvent::Shipped { tracking_number, carrier, shipped_at }
```

Domain model that:
- Validates commands
- Emits events
- Rebuilds state from events
- Enforces business rules
```rust
// Command validation
if order.status != OrderStatus::Confirmed {
    return Err(OrderError::NotConfirmed);
}
```

Append-only log:
- Events NEVER deleted or modified
- Current state = replay all events
- Full audit trail
- Time travel debugging
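The replay rule ("current state = replay all events") can be sketched in plain Rust. The types below are simplified stand-ins for the project's real `OrderEvent` and aggregate types, not its actual API:

```rust
// Sketch: rebuild current state by folding over the append-only event log.
// Simplified stand-ins for the project's OrderEvent / aggregate types.
#[derive(Debug, PartialEq)]
enum OrderStatus { Created, Confirmed, Shipped }

enum OrderEvent {
    Created,
    Confirmed,
    Shipped { tracking: String },
}

#[derive(Debug)]
struct OrderState {
    status: OrderStatus,
    tracking: Option<String>,
    version: u64,
}

fn replay(events: &[OrderEvent]) -> Option<OrderState> {
    let mut iter = events.iter();
    // The first event must be Created; otherwise the stream is invalid.
    let mut state = match iter.next()? {
        OrderEvent::Created => OrderState {
            status: OrderStatus::Created,
            tracking: None,
            version: 1,
        },
        _ => return None,
    };
    for event in iter {
        match event {
            OrderEvent::Confirmed => state.status = OrderStatus::Confirmed,
            OrderEvent::Shipped { tracking } => {
                state.status = OrderStatus::Shipped;
                state.tracking = Some(tracking.clone());
            }
            OrderEvent::Created => return None, // duplicate Created is invalid
        }
        state.version += 1;
    }
    Some(state)
}

fn main() {
    let events = vec![
        OrderEvent::Created,
        OrderEvent::Confirmed,
        OrderEvent::Shipped { tracking: "TRK-1".into() },
    ];
    let state = replay(&events).unwrap();
    println!("{:?} v{}", state.status, state.version); // Shipped v3
}
```

Because events are never mutated, replaying a prefix of the log reproduces the state at any past version ("time travel").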
Read models built from events:
- `order_read_model` - Current order state
- `orders_by_customer` - Customer's orders
- `orders_by_status` - Operational dashboards
- Can be rebuilt at any time
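A projection is just a fold of the event stream into a query-optimized shape. In this sketch a `HashMap` stands in for the ScyllaDB read-model tables, and the event and key names are illustrative:

```rust
// Sketch: fold events into a by-status index (an orders_by_status-style
// read model). The HashMap stands in for a ScyllaDB table.
use std::collections::HashMap;

enum Event {
    Created { order: String },
    Confirmed { order: String },
}

fn project(events: &[Event]) -> HashMap<&'static str, Vec<String>> {
    let mut by_status: HashMap<&'static str, Vec<String>> = HashMap::new();
    for event in events {
        match event {
            Event::Created { order } => {
                by_status.entry("created").or_default().push(order.clone());
            }
            Event::Confirmed { order } => {
                // Move the order from "created" to "confirmed".
                if let Some(created) = by_status.get_mut("created") {
                    created.retain(|o| o != order);
                }
                by_status.entry("confirmed").or_default().push(order.clone());
            }
        }
    }
    by_status
}

fn main() {
    let events = vec![
        Event::Created { order: "o1".into() },
        Event::Created { order: "o2".into() },
        Event::Confirmed { order: "o1".into() },
    ];
    let model = project(&events);
    println!("created={:?} confirmed={:?}", model["created"], model["confirmed"]);
}
```

Rebuilding a projection is simply rerunning this fold from the start of the log, which is why read models can be dropped and recreated at any time.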
The actual implementation uses the official scylla-cdc Rust library to consume directly from ScyllaDB's CDC log tables in real-time:
```
event_store + outbox_messages (WITH cdc = {'enabled': true})
        ↓
CDC Log Tables (hidden, created automatically by ScyllaDB)
        ↓
scylla-cdc library → OutboxCDCConsumer → Publish to Redpanda
        ↓
Real-time streaming with generation handling, checkpointing, and retry
```
Real Implementation Features:
- TRUE STREAMING: No polling, real-time event delivery
- Low latency (~50ms typical)
- Generation handling: Automatically handles schema changes
- Direct consumption from CDC log tables
- Built-in checkpointing and resumption
- Retry with backoff and circuit breaker
- Dead Letter Queue for failed events
- Fault isolation with actor supervision
- Multiple parallel consumers per VNode group
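The retry-with-backoff behavior listed above can be sketched with std-only Rust. The function names, base delay, and attempt limits here are illustrative assumptions, not the project's `utils/retry.rs` API:

```rust
// Sketch: retry a fallible operation with exponential backoff.
// Delays and limits are illustrative, not the project's defaults.
use std::time::Duration;

fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    // base * 2^attempt, capped at max.
    base.checked_mul(2u32.saturating_pow(attempt))
        .unwrap_or(max)
        .min(max)
}

fn retry<T, E>(max_attempts: u32, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            // Exhausted: the caller would route the message to the DLQ.
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => {
                let delay = backoff_delay(attempt, Duration::from_millis(50), Duration::from_secs(5));
                std::thread::sleep(delay);
                attempt += 1;
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    let result: Result<&str, &str> = retry(5, || {
        calls += 1;
        if calls < 3 { Err("broker unavailable") } else { Ok("published") }
    });
    println!("{:?} after {} calls", result, calls);
}
```

In the real pipeline a circuit breaker sits in front of this loop so that a persistently failing broker stops being hammered with retries.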
- Generic Event Sourcing Infrastructure (Aggregate trait, EventEnvelope, EventStore)
- Domain-Driven Design with clear aggregate boundaries
- Order and Customer aggregates with full business logic
- Command handlers orchestrating Command → Aggregate → Events → Event Store
- Event metadata (causation, correlation, versioning)
- Optimistic concurrency control with version tracking
- Atomic write to event_store + outbox using ScyllaDB batches
- Real ScyllaDB CDC streaming using scylla-cdc library
- DLQ for failed messages with actor supervision
- Retry with exponential backoff and circuit breaker
- Prometheus metrics and health monitoring
- Actor supervision tree for fault tolerance
- Multi-aggregate support (Order, Customer examples)
- Complete order lifecycle (Create, Confirm, Ship, Deliver, Cancel)
- Read model projections (for optimized queries)
- Aggregate snapshots (for performance with high-event aggregates)
- Event upcasting (for schema evolution)
- Advanced monitoring and alerting
- More aggregate examples (Product, Payment, etc.)
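The optimistic concurrency control mentioned above can be illustrated in-memory: an append succeeds only when the caller's expected version matches the stored one, which is the idea behind the `aggregate_sequence` table. The `HashMap`-backed store below is purely a sketch:

```rust
// Sketch: optimistic concurrency via version checks. An in-memory stand-in
// for the version tracking that aggregate_sequence provides in ScyllaDB.
use std::collections::HashMap;

struct InMemoryEventStore {
    // aggregate_id -> (current_version, events)
    streams: HashMap<String, (u64, Vec<String>)>,
}

#[derive(Debug, PartialEq)]
enum AppendError {
    VersionConflict { expected: u64, actual: u64 },
}

impl InMemoryEventStore {
    fn new() -> Self {
        Self { streams: HashMap::new() }
    }

    fn append(&mut self, id: &str, expected: u64, event: String) -> Result<u64, AppendError> {
        let (version, events) = self.streams.entry(id.to_string()).or_insert((0, Vec::new()));
        if *version != expected {
            // A concurrent writer got in first; the caller reloads and retries.
            return Err(AppendError::VersionConflict { expected, actual: *version });
        }
        events.push(event);
        *version += 1;
        Ok(*version)
    }
}

fn main() {
    let mut store = InMemoryEventStore::new();
    assert_eq!(store.append("order-1", 0, "Created".into()), Ok(1));
    assert_eq!(store.append("order-1", 1, "Confirmed".into()), Ok(2));
    // A stale writer that still expects version 1 now conflicts.
    println!("{:?}", store.append("order-1", 1, "Shipped".into()));
}
```

On conflict the command handler reloads the aggregate from the event log and re-validates the command against the fresh state before retrying.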
```rust
use event_sourcing::store::EventStore;
use domain::order::{OrderCommandHandler, OrderCommand, OrderItem, OrderEvent};
use std::sync::Arc;

// Initialize generic event store with concrete event type
let event_store = Arc::new(EventStore::<OrderEvent>::new(
    session.clone(),
    "Order",        // aggregate type name
    "order-events", // topic name
));

// Create command handler
let command_handler = Arc::new(OrderCommandHandler::new(event_store.clone()));

// Execute command
let version = command_handler.handle(
    order_id,
    OrderCommand::CreateOrder {
        order_id,
        customer_id,
        items: vec![
            OrderItem {
                product_id: uuid::Uuid::new_v4(),
                quantity: 2,
            },
        ],
    },
    correlation_id,
).await?;

// Events are now in:
// - event_store (permanent, source of truth)
// - outbox_messages (CDC streams this to Redpanda)
// The CDC processor consumes the outbox_messages CDC log
// and publishes to Redpanda with retry and DLQ support.
```

```shell
# Run tests (using Makefile)
make test

# Run with logs (manual command)
RUST_LOG=debug cargo test

# Test specific module (manual command)
cargo test event_sourcing::aggregate::tests
```

Tests cover:
- Aggregate lifecycle
- Business rule enforcement
- Event application
- Status transitions
- Concurrency conflicts
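A status-transition test in this spirit might look like the following sketch. The toy `ship` function and its types are illustrative, not the project's actual test code:

```rust
// Sketch: unit-testing a business rule — shipping requires confirmation.
#[derive(Debug, PartialEq, Clone, Copy)]
enum OrderStatus { Created, Confirmed, Shipped }

#[derive(Debug, PartialEq)]
enum OrderError { NotConfirmed }

fn ship(status: OrderStatus) -> Result<OrderStatus, OrderError> {
    // Business rule: only a confirmed order may be shipped.
    if status != OrderStatus::Confirmed {
        return Err(OrderError::NotConfirmed);
    }
    Ok(OrderStatus::Shipped)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn ship_requires_confirmation() {
        assert_eq!(ship(OrderStatus::Created), Err(OrderError::NotConfirmed));
        assert_eq!(ship(OrderStatus::Confirmed), Ok(OrderStatus::Shipped));
    }
}

fn main() {
    // `cargo test` runs the test above; main only makes the sketch standalone.
    println!("{:?}", ship(OrderStatus::Confirmed));
}
```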
Comprehensive documentation is available in the documentation index, which provides a complete overview of all available documentation:
- Documentation Index - Start here for all project documentation
- Main Tutorial - Complete Event Sourcing tutorial with diagrams
Additional reference: src/db/schema.cql - Annotated database schema
```shell
RUST_LOG=info                    # Log level
SCYLLA_NODES=127.0.0.1:9042      # ScyllaDB contact points
REDPANDA_BROKERS=127.0.0.1:9092  # Redpanda brokers
METRICS_PORT=9090                # Prometheus metrics port
```

Customize:
- ScyllaDB memory limits
- Redpanda configuration
- Port mappings
```shell
cqlsh -f src/db/schema.cql
```

Check services are running:

```shell
docker-compose ps
```

This is normal for event sourcing - the command handler will retry.
Verify CDC is enabled:
```sql
DESC TABLE orders_ks.outbox_messages;
-- Should show: cdc = {'enabled': true}
```

The current implementation uses single inserts for clarity. For production:
Use batches:

```rust
let mut batch = Batch::default();
batch.append_statement("INSERT INTO event_store ...");
batch.append_statement("INSERT INTO outbox_messages ...");
session.batch(&batch, values).await?;
```

- Horizontal: Add more ScyllaDB nodes
- Partitioning: Aggregate ID is partition key
- Snapshots: Every 100 events (schema ready)
- Read models: Independent scaling per projection
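The snapshot idea above can be sketched as: restore the latest snapshot, then replay only the events appended after it. Here an integer delta stands in for real domain events, and `SNAPSHOT_EVERY` mirrors the every-100-events cadence:

```rust
// Sketch: snapshot-accelerated aggregate loading. An i64 delta stands in
// for real domain events; the cadence mirrors "every 100 events".
struct Snapshot { version: u64, total: i64 }

const SNAPSHOT_EVERY: u64 = 100;

fn should_snapshot(version: u64) -> bool {
    version % SNAPSHOT_EVERY == 0
}

// Restore snapshot state (if any), then replay only the newer events.
fn load(snapshot: Option<Snapshot>, events_after: &[i64]) -> (u64, i64) {
    let (mut version, mut total) = match snapshot {
        Some(s) => (s.version, s.total),
        None => (0, 0),
    };
    for delta in events_after {
        total += delta;
        version += 1;
    }
    (version, total)
}

fn main() {
    // Snapshot taken at version 100; only 3 events replayed instead of 103.
    let (version, total) = load(Some(Snapshot { version: 100, total: 42 }), &[1, 2, 3]);
    println!("v{} total {}", version, total); // v103 total 48
    assert!(should_snapshot(100));
    assert!(!should_snapshot(version));
}
```

Snapshots are an optimization only: the event log remains the source of truth, and a corrupt snapshot can always be discarded and rebuilt by full replay.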
Watch:
- Event store write latency
- CDC consumer lag
- DLQ message count
- Circuit breaker state
- Projection lag
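The circuit-breaker state worth watching is a small state machine: Closed opens after N consecutive failures, Open becomes HalfOpen after a cooldown, and a successful probe closes it again. The threshold below is an illustrative assumption:

```rust
// Sketch: the circuit-breaker state machine exposed in metrics.
// Threshold and transitions are illustrative.
#[derive(Debug, PartialEq, Clone, Copy)]
enum BreakerState { Closed, Open, HalfOpen }

struct CircuitBreaker {
    state: BreakerState,
    failures: u32,
    threshold: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { state: BreakerState::Closed, failures: 0, threshold }
    }

    fn on_success(&mut self) {
        self.failures = 0;
        self.state = BreakerState::Closed;
    }

    fn on_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.threshold {
            self.state = BreakerState::Open; // stop calling the broker
        }
    }

    fn on_cooldown_elapsed(&mut self) {
        if self.state == BreakerState::Open {
            self.state = BreakerState::HalfOpen; // allow one probe request
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3);
    for _ in 0..3 {
        cb.on_failure();
    }
    println!("{:?}", cb.state); // Open after 3 consecutive failures
    cb.on_cooldown_elapsed();
    cb.on_success();
    println!("{:?}", cb.state); // Closed again
}
```

A breaker that stays Open, like a growing DLQ count, is a signal that downstream Redpanda connectivity needs attention rather than more retries.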
This is an educational project. Contributions welcome:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
MIT License - see LICENSE file for details
Built with 🦀 Rust + ScyllaDB + Redpanda