Data Movement Patterns

Reference architectures for migration, CDC, and federated query

DBConvert Streams is not just a database tool — it is a data movement platform. Below are real-world patterns that teams implement using built-in CDC, Convert mode, federated SQL, object storage support, and observability.

No Kafka. No external pipeline stack. No orchestration sprawl.

Migration

Zero-Downtime Database Migration

Move data between MySQL, PostgreSQL, and other supported targets without extended downtime.

How it works

  • Create a stream in Convert mode with source and target connections
  • Schema (tables, keys, indexes, constraints) is converted automatically
  • Data is transferred in parallel with per-table progress
  • After the snapshot completes, switch to CDC for ongoing sync if needed
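Conceptually, the steps above collapse into a single stream definition. A minimal sketch of such a configuration (field names here are illustrative, not the exact DBConvert Streams schema):

```json
{
  "mode": "convert",
  "source": {
    "type": "mysql",
    "connection": "mysql://app:secret@prod-db:3306/shop"
  },
  "target": {
    "type": "postgresql",
    "connection": "postgres://app:secret@rds-host:5432/shop"
  },
  "options": {
    "createSchema": true,
    "switchToCdcAfterSnapshot": true
  }
}
```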

Built-in capabilities

  • Automatic DDL conversion
  • Parallel bulk writes
  • Table-level row counts
  • Custom SQL filtering

Learn more about data migration

Works with cloud targets: AWS RDS, Azure Database, Google Cloud SQL, DigitalOcean, Neon, and any PostgreSQL/MySQL-compatible managed database.

Replication

Real-Time Replication Without Kafka

Replicate operational databases in real time — without managing Kafka, ZooKeeper, or external brokers.

How it works

  • Source Reader captures transactional changes
  • Embedded NATS JetStream brokers events
  • Parallel writers apply changes to targets
  • At-least-once delivery with deduplication
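At-least-once delivery implies a change event may occasionally be applied more than once. One common way to make the apply step idempotent, sketched here as a plain PostgreSQL upsert rather than the writer's actual internals:

```sql
-- Replaying the same change event is harmless:
-- the row converges to the same final state either way.
INSERT INTO orders (id, user_id, total)
VALUES (42, 7, 120.00)
ON CONFLICT (id) DO UPDATE
SET user_id = EXCLUDED.user_id,
    total   = EXCLUDED.total;
```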

Built-in capabilities

  • Binlog / WAL readers
  • Embedded NATS broker
  • Parallel writers
  • At-least-once delivery

This pattern replaces complex streaming stacks with a single deployable system.

Learn more about CDC replication

Analytics

Database + File Analytics (Federated SQL)

Run analytical queries across databases and object storage without building ETL pipelines.

Join PostgreSQL tables with Parquet files stored in S3 in a single SQL query.

SELECT u.email, o.total
FROM pg.users u
JOIN s3.orders('s3://bucket/orders/*.parquet') o
  ON u.id = o.user_id
WHERE o.total > 100;
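The same table-function syntax extends to file-only aggregation, for example a top-customers query that never touches the operational database (assuming the s3 function shown above):

```sql
SELECT o.user_id, SUM(o.total) AS lifetime_value
FROM s3.orders('s3://bucket/orders/*.parquet') o
GROUP BY o.user_id
ORDER BY lifetime_value DESC
LIMIT 10;
```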

Built-in capabilities

  • Cross-database JOIN support
  • SQL over CSV, JSON, JSONL, Parquet
  • Glob patterns for object storage
  • DuckDB-powered vectorized execution

Enables ad-hoc analytics across heterogeneous systems instantly.

Learn more about cross-database SQL

Storage

Database to Object Storage Pipelines

Continuously stream operational data into object storage for archival or analytics.

How it works

  • Configure an S3-compatible target (AWS S3, MinIO, etc.)
  • Use Convert mode for full exports or CDC for incremental updates
  • Output as Parquet, CSV, or JSON with partitioned file layout
  • Per-table upload progress visible in the stream monitor
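As a rough sketch, the S3 target section of such a stream might look like this (field names are illustrative, not the exact product schema):

```json
{
  "target": {
    "type": "s3",
    "endpoint": "https://minio.internal:9000",
    "bucket": "analytics-archive",
    "format": "parquet",
    "partitionBy": "event_date"
  }
}
```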

Built-in capabilities

  • Parquet / CSV / JSON output
  • S3-compatible targets
  • Per-file upload tracking
  • Partitioned writes

Object storage becomes a live data target — not a manual export destination.

Development

Development & Testing Environment Sync

Create realistic, up-to-date development environments without manual dumps.

How it works

  • Use Convert mode with custom SQL to select a data subset
  • Schema is created automatically on the target
  • Re-run the same stream whenever you need a fresh copy
  • Works across database engines (e.g. production MySQL → dev PostgreSQL)
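The custom SQL step is just an ordinary query against the source. A hypothetical subset filter for a MySQL production source:

```sql
-- Copy only recent users to the dev target; the column list and
-- retention window are illustrative.
SELECT id, email, created_at
FROM users
WHERE created_at > NOW() - INTERVAL 90 DAY;
```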

Built-in capabilities

  • Custom SQL filtering
  • Schema auto-conversion
  • Convert mode snapshots
  • Cross-engine support

Reduces drift between production and development systems.

Schema

Schema Audit & Architecture Review

Understand complex database structures visually before migration or refactoring.

How it works

  • Generate interactive ER diagrams from live schemas
  • Detect junction tables automatically
  • Highlight related tables and dependencies
  • Export diagrams as SVG/PNG/PDF
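Junction-table detection can be approximated in plain SQL. A rough heuristic against MySQL's information_schema (the product's own detection logic may differ):

```sql
-- A table referencing two or more other tables via foreign keys
-- is a likely junction (many-to-many link) table.
SELECT table_name
FROM information_schema.key_column_usage
WHERE table_schema = 'shop'
  AND referenced_table_name IS NOT NULL
GROUP BY table_name
HAVING COUNT(DISTINCT referenced_table_name) >= 2;
```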

Built-in capabilities

  • Interactive ER diagrams
  • Dependency detection
  • SVG / PNG / PDF export

Accelerates migration planning and architectural review.

Learn more about ER diagrams

One Platform. Multiple Patterns.

All of these patterns are enabled by the same core system.

Event Streaming
Parallel Writers
Federated SQL
Observability
Self-Hosted

No separate tools for migration, replication, analytics, and monitoring.

Explore How These Patterns Apply to Your Architecture

Start with a single pattern. Scale to many. One platform handles it all.

Read the Documentation