Understand how control plane, data plane, and monitoring layers work together for migration and replication across MySQL, PostgreSQL, files, and S3.
Kafka is powerful — but for database-to-database streaming, it brings operational weight that most teams don't need. DBConvert Streams takes a different approach.
Three planes — control, data, and monitoring — work independently for maximum reliability.
Control Plane
UI + API manage stream lifecycle — create, start, pause, stop
Data Plane
Reader captures events, NATS brokers them, Writers apply in parallel
Monitoring
Metrics collected independently — no impact on data throughput
Four components work together to move data reliably between any supported databases.
Reader
Connects to source databases and captures data changes.
NATS Event Hub
Embedded message broker that decouples readers from writers.
Writers
Consume events and write to destination systems in parallel.
API Server
Central control plane for managing streams and connections.
Every event is captured, delivered, and written — exactly once to the target.
Every event published to NATS JetStream is persisted until acknowledged by the writer. No silent data loss.
Writers track dispatched batch IDs in memory. If NATS redelivers a message, the writer recognizes it and skips the duplicate — no double-writes.
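The duplicate-skipping idea can be sketched in a few lines of Go. This is an illustrative model of the mechanism, not the actual DBConvert Streams source — the `Deduper` type and its methods are hypothetical:

```go
package main

import "fmt"

// Deduper tracks batch IDs already applied by a writer, so a
// JetStream redelivery is acknowledged and skipped rather than
// written twice. Illustrative sketch only.
type Deduper struct {
	seen map[string]struct{}
}

func NewDeduper() *Deduper {
	return &Deduper{seen: make(map[string]struct{})}
}

// Apply runs write for a new batch ID and records it on success.
// It returns true if the batch was written, false if it was a
// duplicate and was skipped.
func (d *Deduper) Apply(batchID string, write func() error) (bool, error) {
	if _, dup := d.seen[batchID]; dup {
		return false, nil // already applied: ack and move on
	}
	if err := write(); err != nil {
		return false, err // not recorded, so a redelivery retries it
	}
	d.seen[batchID] = struct{}{}
	return true, nil
}

func main() {
	d := NewDeduper()
	writes := 0
	write := func() error { writes++; return nil }

	first, _ := d.Apply("batch-42", write)
	second, _ := d.Apply("batch-42", write) // simulated redelivery
	fmt.Println(first, second, writes)      // true false 1
}
```

Note that the failed-write path deliberately leaves the ID unrecorded: the batch stays unacknowledged, JetStream redelivers it, and the writer retries — at-least-once delivery plus idempotent writes is what yields exactly-once results on the target.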
CDC mode preserves operation order within transactions. INSERT, UPDATE, and DELETE arrive in the correct sequence.
If a writer fails to acknowledge a message, JetStream redelivers it automatically. No manual intervention, no lost events.
From source to target in three phases — initialization, transfer, and monitoring.
The API server validates the stream configuration and coordinates startup across components.
Data flows from source through the event hub to target writers in parallel batches.
Real-time statistics track progress, and the system detects completion automatically.
Built to handle interruptions gracefully — failed writes roll back, unprocessed messages get redelivered.
JetStream persists all events until writers acknowledge them. If a writer crashes, unacknowledged events are redelivered automatically.
When a writer fails to process a batch, it sends a negative acknowledgment (Nak) to JetStream, which redelivers the message. Connection and DNS errors are retried with backoff.
Running streams can be paused and resumed without losing progress. CDC mode tracks binlog/WAL position so replication continues from where it left off.
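Conceptually, resuming works because the stream checkpoints its source-log position after each acknowledged batch. A minimal sketch in Go — field and type names here are illustrative, not the actual persisted state format:

```go
package main

import "fmt"

// Checkpoint records the last replicated position in the source's
// transaction log (MySQL binlog file + offset, or a PostgreSQL
// WAL position). Illustrative fields only.
type Checkpoint struct {
	File string // binlog file name (empty for WAL-based sources)
	Pos  uint64 // byte offset in the binlog, or WAL LSN
}

// Stream holds the current replication position.
type Stream struct {
	cp Checkpoint
}

// Advance is called after a batch is acknowledged by the writer.
func (s *Stream) Advance(file string, pos uint64) {
	s.cp = Checkpoint{File: file, Pos: pos}
}

// Resume reports where a restarted reader should continue from.
func (s *Stream) Resume() Checkpoint { return s.cp }

func main() {
	s := &Stream{}
	s.Advance("binlog.000007", 1024)
	s.Advance("binlog.000007", 4096) // stream paused after this batch
	fmt.Println(s.Resume().File, s.Resume().Pos) // binlog.000007 4096
}
```

Because the checkpoint only moves after acknowledgment, a pause or crash between batches can never skip events — at worst, the last unacknowledged batch is redelivered and deduplicated.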
Writers are stateless. A failed write does not leave partial data — transactions roll back cleanly on error.
Continuous replication or deterministic batch migration — same engine, different strategies.
Continuous replication
Real-time streaming from database transaction logs. Captures every INSERT, UPDATE, and DELETE as it happens.
Deterministic batch migration
Bulk data transfer via direct table reads. Ideal for one-time migrations and periodic syncs.
Runs inside your VPC. No SaaS dependency. No data leaves your infrastructure. Same architecture across every environment.
All-in-one binary
Docker Compose stack
AWS, GCP, Azure, DO
Inside your VPC
Deploy without Kafka, then run CDC replication and monitoring from the same workflow.