Log-Based CDC for MySQL and PostgreSQL (No Kafka Required)

Capture row changes from MySQL binlog and PostgreSQL WAL in real time

Apply continuous log-based changes to databases, files, and S3 from one self-hosted workflow.

Performance You Can Measure

Real numbers from local replication tests.

100 MB/s

local transfer rate

50+ GB

verified in testing, with no enforced upper limit

~10M rows

replicated in seconds

Per-stream

parallel processing

<10 ms

processing latency

Example: MySQL → PostgreSQL — 1,000,000 rows in ~4 seconds (~100 MB/s)

Per-stream routing and worker pools for stable throughput, with protocol-native CDC readers and optimized bulk-write paths.

Workflow

How It Works

From transaction logs to reliable delivery in four steps.

1. Capture

Connects directly to MySQL binlog or PostgreSQL logical replication slots. Events are decoded as they are written.
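Log-based capture only works when the source exposes row-level change logs: MySQL must run with `binlog_format=ROW` and the binary log enabled, and PostgreSQL must run with `wal_level=logical` and at least one free replication slot. A minimal pre-flight check might look like the sketch below (the function and the `settings` dict are illustrative, not part of the product; the server variables would come from `SHOW VARIABLES` or `pg_settings`):

```python
def check_cdc_prerequisites(engine: str, settings: dict) -> list[str]:
    """Return a list of problems that would prevent log-based capture."""
    problems = []
    if engine == "mysql":
        # ROW format is required so each event carries full row images.
        if settings.get("binlog_format") != "ROW":
            problems.append("set binlog_format=ROW")
        if settings.get("log_bin") not in ("ON", "1"):
            problems.append("enable the binary log (log_bin)")
    elif engine == "postgresql":
        # Logical decoding is only available at wal_level=logical.
        if settings.get("wal_level") != "logical":
            problems.append("set wal_level=logical")
        if int(settings.get("max_replication_slots", 0)) < 1:
            problems.append("allow at least one replication slot")
    return problems

# A source still configured for physical replication fails the check:
check_cdc_prerequisites("postgresql",
                        {"wal_level": "replica", "max_replication_slots": 4})
```

Running such a check before creating a stream surfaces configuration problems early, instead of at the first decode error.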

2. Normalize

Changes are normalized into a unified internal event format. For database targets, schema and type differences are automatically reconciled during setup and write-time conversion.

3. Deliver

Events are written to target destinations (databases, files, or S3-compatible storage). Batched writes optimize throughput.
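Batched delivery usually flushes on whichever comes first: the batch fills up, or the oldest buffered event gets too stale. A minimal sketch of that pattern (the `flush_fn` callback stands in for a bulk write such as a multi-row INSERT or COPY; class and parameter names are illustrative):

```python
import time

class BatchWriter:
    """Flush buffered events when the batch is full or too old."""
    def __init__(self, flush_fn, max_rows=1000, max_age_s=0.5):
        self.flush_fn = flush_fn
        self.max_rows = max_rows
        self.max_age_s = max_age_s
        self.buf = []
        self.first_at = None  # when the oldest buffered event arrived

    def add(self, event):
        if self.first_at is None:
            self.first_at = time.monotonic()
        self.buf.append(event)
        if (len(self.buf) >= self.max_rows
                or time.monotonic() - self.first_at >= self.max_age_s):
            self.flush()

    def flush(self):
        if self.buf:
            self.flush_fn(self.buf)   # one bulk write per batch
            self.buf, self.first_at = [], None

batches = []
w = BatchWriter(batches.append, max_rows=3)
for i in range(7):
    w.add({"id": i})
w.flush()  # drain the tail: batches now holds groups of 3, 3, and 1
```

The size threshold keeps individual writes efficient; the age threshold caps how long a change can sit in the buffer, which is what keeps end-to-end latency low on quiet tables.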

4. Monitor

Live metrics, per-table progress, and full stream history are visible in the UI and via API.

Reliability Model

Built for continuous CDC on stable infrastructure, with JetStream-backed buffering and durable consumers.

  • At-least-once delivery
  • Per-table ordering preserved
  • Built-in buffering using JetStream
  • Millisecond end-to-end latency

Resilient to short-lived network interruptions and optimized for low-latency streaming. Actual end-to-end latency and ordering guarantees depend on stream configuration and target behavior.
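Per-table ordering can coexist with parallel processing by routing every event for a given table to the same worker queue: events for one table are serialized, while different tables proceed concurrently. A sketch of hash-based routing (the worker count and queue layout are illustrative, not the product's internals):

```python
import zlib

def route(table: str, n_workers: int) -> int:
    """Stable table-to-worker mapping, so all events for one table
    land on one worker and keep their relative order."""
    # Python's built-in hash() is salted per process; CRC32 is stable.
    return zlib.crc32(table.encode()) % n_workers

# Fan events out to worker queues; all "orders" events share one queue.
queues = [[] for _ in range(4)]
events = [("orders", 1), ("users", 1), ("orders", 2), ("users", 2)]
for table, seq in events:
    queues[route(table, 4)].append((table, seq))
```

The trade-off is that a single hot table is limited to one worker's throughput, which is why bulk-write batching on the delivery side matters as much as parallelism.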

Flexible Targets

Stream changes to operational databases, analytics stores, and storage layers without additional infrastructure.

  • MySQL
  • PostgreSQL
  • S3 / MinIO
  • Local files (CSV, JSON, JSONL, Parquet)
  • Snowflake (coming soon)

Common Use Cases

Real-Time Analytics

Stream production MySQL into a PostgreSQL analytics replica without impacting primary workload.

Zero-Downtime Cutover

Run a snapshot migration, then replicate ongoing traffic in parallel. Switch over when ready.

Disaster Recovery

Maintain a continuously synchronized standby database across regions or providers.

Data Archival to S3

Stream database changes into compressed Parquet or CSV files with Hive-style partitioning.
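Hive-style partitioning encodes partition values as `key=value` path segments, so query engines like Athena, Spark, or Trino can prune partitions without opening files. A sketch of building such an object key from an event timestamp (the layout and function name are illustrative, not the product's exact scheme):

```python
from datetime import datetime, timezone

def partition_path(table: str, ts: datetime, fmt: str = "parquet") -> str:
    """Build a Hive-style object key with date-based partitions."""
    return (f"{table}/year={ts.year:04d}/month={ts.month:02d}/"
            f"day={ts.day:02d}/part-{ts:%H%M%S}.{fmt}")

key = partition_path("orders",
                     datetime(2024, 5, 7, 13, 45, 9, tzinfo=timezone.utc))
print(key)  # orders/year=2024/month=05/day=07/part-134509.parquet
```

A query filtered on `year` and `month` then only scans the matching prefixes in the bucket.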

How It Compares

  • Debezium — log-based CDC with Kafka.
  • Airbyte — log-based CDC, orchestrated as recurring sync jobs.
  • DBConvert Streams — continuous CDC without external infrastructure.

Start with a controlled CDC test

Validate source and target state first, then run the CDC path you plan to use for cutover.

Visit the pricing page when you are ready to size production streams and team seats.

See Product Overview