Apply continuous log-based change streams to databases, files, and S3 from a single self-hosted workflow.
Measured Performance
Real numbers from local replication tests.
100 MB/s local transfer rate
50+ GB verified, no upper limit
~10M rows replicated in seconds
Per-stream parallel processing
<10 ms processing latency
Example: MySQL → PostgreSQL — 1,000,000 rows in ~4 seconds (~100 MB/s)
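As a quick consistency check on the example above (not a measurement of the product itself), 1,000,000 rows in ~4 seconds at ~100 MB/s implies an average row size of roughly 400 bytes:

```python
# Back-of-the-envelope check of the stated example figures.
# Assumes MB = 1,000,000 bytes, as is conventional for transfer rates.
rows = 1_000_000
seconds = 4
mb_per_second = 100

bytes_per_row = mb_per_second * 1_000_000 * seconds / rows
# → 400.0 bytes per row, a plausible width for a typical table
```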
Per-stream routing and worker pools for stable throughput, with protocol-native CDC readers and optimized bulk-write paths.
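The idea behind per-stream routing can be sketched in a few lines: events for the same stream always land on the same worker, so per-stream ordering is preserved while independent streams proceed in parallel. This is an illustrative sketch, not DBConvert Streams' internal code; `route` and the event dictionaries are assumed names.

```python
# Sketch: partition events by stream id so one worker owns one partition.
# Ordering within a stream is preserved; streams run concurrently.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def route(events, handler, num_workers=4):
    # Events with the same "stream" key always hash to the same partition.
    partitions = defaultdict(list)
    for ev in events:
        partitions[hash(ev["stream"]) % num_workers].append(ev)
    # Each partition is processed in order by a single worker.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(lambda part: [handler(e) for e in part],
                             partitions.values()))
```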
From transaction logs to reliable delivery in four steps.
Connects directly to the MySQL binlog or to PostgreSQL logical replication slots. Events are decoded as they are written.
Changes are normalized into a unified internal event format. For database targets, schema and type differences are automatically reconciled during setup and write-time conversion.
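Conceptually, normalization maps source-specific row images onto one common shape. The sketch below shows what such a unified event might look like; the field names and the `from_mysql_binlog` helper are assumptions for illustration, not the product's actual schema.

```python
# Illustrative unified change-event shape; not the product's real format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeEvent:
    source: str              # e.g. "mysql" or "postgresql"
    table: str
    op: str                  # "insert", "update", or "delete"
    before: Optional[dict]   # row image before the change (None for inserts)
    after: Optional[dict]    # row image after the change (None for deletes)

def from_mysql_binlog(table, op, before, after):
    # Normalize a decoded binlog row image into the unified event.
    return ChangeEvent("mysql", table, op, before, after)
```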
Events are written to target destinations (databases, files, or S3-compatible storage). Batched writes optimize throughput.
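The batching pattern behind those bulk writes is simple: accumulate events and flush them as one write once a size threshold is reached, plus a final flush for the tail. A minimal sketch, with `write_batch` standing in for the real target writer:

```python
# Minimal batched-write sketch: fewer, larger writes for better throughput.
def batch_writer(events, batch_size, write_batch):
    buf = []
    written = 0
    for ev in events:
        buf.append(ev)
        if len(buf) >= batch_size:
            write_batch(buf)      # one bulk write per full batch
            written += len(buf)
            buf = []
    if buf:                       # flush the partial tail batch
        write_batch(buf)
        written += len(buf)
    return written
```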
Live metrics, per-table progress, and full stream history are visible in the UI and via API.
Built for continuous CDC on stable infrastructure, with JetStream-backed buffering and durable consumers.
Resilient to short-lived network interruptions and optimized for low-latency streaming. Actual end-to-end latency and ordering guarantees depend on stream configuration and target behavior.
Stream changes to operational databases, analytics stores, and storage layers without additional infrastructure.
Stream production MySQL into a PostgreSQL analytics replica without impacting primary workload.
Run a snapshot migration, then replicate ongoing traffic in parallel. Switch over when ready.
Maintain a continuously synchronized standby database across regions or providers.
Stream database changes into compressed Parquet or CSV files with Hive-style partitioning.
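Hive-style partitioning encodes partition keys as `key=value` directories in the object path, so downstream query engines can prune partitions. The path-building sketch below is illustrative; the prefix, keys, and file naming are assumptions, not the product's actual layout.

```python
# Sketch of Hive-style partition paths for file/S3 targets.
def partition_path(prefix, table, row, keys, ext="parquet"):
    # Each partition key becomes a key=value directory segment.
    parts = "/".join(f"{k}={row[k]}" for k in keys)
    return f"{prefix}/{table}/{parts}/part-0000.{ext}"

partition_path("s3://bucket/cdc", "orders", {"year": 2024, "month": 5}, ["year", "month"])
# → "s3://bucket/cdc/orders/year=2024/month=5/part-0000.parquet"
```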
Debezium — log-based CDC with Kafka.
Airbyte — log-based CDC, orchestrated as recurring sync jobs.
DBConvert Streams — continuous CDC without external infrastructure.
Related workflows
Use the adjacent workflow pages when you need initial load, pre-cutover validation, or broader product context.
Load the initial target state first, then switch to CDC for ongoing changes and controlled cutover.
Browse tables, run SQL, compare schemas, and inspect the target before you point live traffic at it.
Use the overview page if you need the full picture across migration, explorer workflows, SQL, and CDC.
Validate source and target state first, then run the CDC path you plan to use for cutover.
Use pricing when you are ready to size production streams and team seats.
See Product Overview