Zero-downtime database replication using PostgreSQL logical replication with continuous sync.
New to SerenAI? Sign up at console.serendb.com to get started with managed cloud replication.
When replicating to SerenDB targets, this tool runs your replication jobs on SerenAI's cloud infrastructure automatically. Just set your API key and run:
export SEREN_API_KEY="your-api-key" # Get from console.serendb.com
database-replicator init \
--source "postgresql://user:pass@source:5432/db" \
--target "postgresql://user:[email protected]:5432/db"For non-SerenDB targets, use the --local flag to run replication locally.
This guide covers replicating PostgreSQL databases from any PostgreSQL provider (Neon, AWS RDS, Hetzner, self-hosted, etc.) to another PostgreSQL database (including Seren Cloud). The tool uses PostgreSQL's native logical replication for zero-downtime replication with continuous sync.
- Zero downtime: Your source database stays online during replication
- Continuous sync: Changes replicate in real-time after initial snapshot
- Multi-provider: Works with any PostgreSQL-compatible provider
- Selective replication: Choose specific databases and tables
- Interactive mode: User-friendly terminal UI for selecting what to replicate
- Remote execution: Run replications on SerenAI cloud infrastructure
- Production-ready: Data integrity verification, checkpointing, error handling
The tool uses PostgreSQL's logical replication (publications and subscriptions) to keep databases synchronized:
- Initial snapshot: Copies schema and data using pg_dump/restore
- Continuous replication: Creates publication on source and subscription on target
- Real-time sync: PostgreSQL streams changes from source to target automatically
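Under the hood, step 2 boils down to standard PostgreSQL DDL. A conceptual sketch using the publication and subscription names the tool creates (connection details are illustrative):

# On the source: publish changes from the replicated tables
psql "$SOURCE" -c "CREATE PUBLICATION seren_replication_pub FOR ALL TABLES;"

# On the target: subscribe to that publication over a direct connection
psql "$TARGET" -c "CREATE SUBSCRIPTION seren_replication_sub \
  CONNECTION 'host=source-host port=5432 dbname=db user=myuser' \
  PUBLICATION seren_replication_pub;"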
Install the CLI before following the rest of this guide.
- Visit the latest GitHub Release.
- Download the asset that matches your operating system and CPU (Linux x86_64/arm64, macOS Intel/Apple Silicon, or Windows x86_64).
- Extract the archive, then on Linux/macOS run:
chmod +x database-replicator*
sudo mv database-replicator* /usr/local/bin/database-replicator
database-replicator --help

- On Windows, run the .exe directly or add it to your PATH.
Requires Rust 1.70 or later.
# Install from crates.io
cargo install database-replicator
# Or build from the repository
git clone https://github.com/serenorg/database-replicator.git
cd database-replicator
cargo build --release
./target/release/database-replicator --help

Use this option if you prefer to pin to a commit, apply local patches, or cross-compile for a custom environment.
- PostgreSQL 12 or later
- Read access to all tables you want to replicate
- For logical replication (optimal): REPLICATION privilege and wal_level = logical
- For xmin-based sync (fallback): no special configuration required - just SELECT privilege
The tool automatically detects your source database's capabilities and chooses the best sync method:
| Source Configuration | Sync Method | Delete Detection | Latency |
|---|---|---|---|
| wal_level = logical | Logical replication | Real-time | Sub-second |
| wal_level = replica (default) | xmin polling | Periodic reconciliation | Seconds |
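You can check ahead of time which method applies by inspecting the source's wal_level (a quick sanity check, assuming psql access and a $SOURCE connection string):

# Prints "logical" for logical replication; "replica" means xmin polling
psql "$SOURCE" -t -c "SHOW wal_level;"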
For optimal performance (logical replication):
-- On source database
ALTER USER myuser WITH REPLICATION;
GRANT USAGE ON SCHEMA public TO myuser;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO myuser;

For xmin-based sync (no source changes required):
-- Just read access is sufficient
GRANT USAGE ON SCHEMA public TO myuser;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO myuser;

- PostgreSQL 12 or later
- Superuser or database owner privileges
- Ability to create subscriptions
- Network connectivity to source database (for continuous sync)
Grant required privileges:
-- On target database
ALTER USER myuser WITH SUPERUSER;
-- Or for non-superuser setup:
ALTER USER myuser WITH CREATEDB;
GRANT ALL PRIVILEGES ON DATABASE targetdb TO myuser;

- Target must be able to connect to source database
- For AWS RDS source: enable the rds_replication role
- For cloud databases: ensure firewall rules allow connections
- For remote execution: Both databases must be accessible from the internet
The PostgreSQL replication process follows 5 phases:
- Validate - Check source and target databases meet replication requirements
- Init - Perform initial snapshot replication (schema + data) using pg_dump/restore
- Sync - Set up continuous logical replication between databases
- Status - Monitor replication lag and health in real-time
- Verify - Validate data integrity with checksums
Check that both databases meet replication requirements:
database-replicator validate \
--source "postgresql://user:pass@source-host:5432/db" \
--target "postgresql://user:pass@target-host:5432/db"The validate command checks:
- PostgreSQL version (12+)
- Required privileges (REPLICATION, superuser)
- wal_level = logical on source
- Network connectivity between databases
- Target database exists or can be created
With filtering:
database-replicator validate \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--include-databases "myapp,analytics"Perform initial snapshot replication with schema and data:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/db" \
--target "postgresql://user:pass@target-host:5432/db"What happens during init:
- Size estimation: Analyzes database sizes and shows estimated replication times
- User confirmation: Prompts to proceed (skip with --yes)
- Globals dump: Replicates roles and permissions with pg_dumpall --globals-only
- Schema dump: Replicates table structures with pg_dump --schema-only
- Data dump: Replicates data with pg_dump --data-only (parallel, compressed)
- Restore: Restores globals, schema, and data to target (parallel operations)
Example output:
Analyzing database sizes...
Database Size Est. Time
──────────────────────────────────────────────────
myapp 15.0 GB ~45.0 minutes
analytics 250.0 GB ~12.5 hours
staging 2.0 GB ~6.0 minutes
──────────────────────────────────────────────────
Total: 267.0 GB (estimated ~13.3 hours)
Proceed with replication? [y/N]:
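You can cross-check these estimates with a standard catalog query (a sketch, assuming psql access to the source; not necessarily the tool's exact query):

# List database sizes on the source, largest first
psql "$SOURCE" -t -c "SELECT datname, pg_size_pretty(pg_database_size(datname)) \
  FROM pg_database WHERE NOT datistemplate \
  ORDER BY pg_database_size(datname) DESC;"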
Common options:
# Skip confirmation prompt (for scripts)
database-replicator init \
--source "..." \
--target "..." \
--yes
# Drop existing target database and recreate
database-replicator init \
--source "..." \
--target "..." \
--drop-existing
# Run locally instead of on cloud infrastructure
database-replicator init \
--source "..." \
--target "..." \
--local
# Disable checkpoint resume (start fresh)
database-replicator init \
--source "..." \
--target "..." \
--no-resume

Checkpointing:
The init command automatically checkpoints after each database finishes. If replication is interrupted, you can rerun the same command and it will skip completed databases and continue with remaining ones.
To discard the checkpoint and start fresh, use --no-resume (a new checkpoint will be created for the fresh run).
Set up continuous replication for ongoing change synchronization:
database-replicator sync \
--source "postgresql://user:pass@source-host:5432/db" \
--target "postgresql://user:pass@target-host:5432/db"Automatic sync method detection:
The sync command automatically detects your source database's wal_level and chooses the optimal sync method:
INFO: Checking source database capabilities...
INFO: Source has wal_level=replica (logical replication not available)
INFO: Using xmin-based sync (no source changes required)
INFO: Starting sync...
For logical replication (wal_level=logical):
- Create publication: Creates publication on source database with all tables
- Create subscription: Creates subscription on target that connects to source
- Initial sync: PostgreSQL performs initial table synchronization
- Continuous replication: Changes stream automatically from source to target
For xmin-based sync (wal_level=replica, the default):
- Detect changes: Queries source for rows modified since last sync using PostgreSQL's xmin system column
- Apply changes: UPSERTs changed rows to the target database in batches
- Detect deletes: Periodically reconciles primary keys between source and target to find deleted rows
- Remove orphans: Deletes rows from target that no longer exist in source
- Persist state: Saves sync progress to enable resume after interruption
The xmin-based sync runs continuously, polling for changes at configurable intervals.
| Flag | Default | Description |
|---|---|---|
| --sync-interval | 3600 (1 hour) | Seconds between sync cycles |
| --reconcile-interval | 86400 (1 day) | Seconds between delete detection cycles |
| --once | false | Run a single sync cycle and exit |
| --no-reconcile | false | Disable delete detection entirely |
Examples:
# Sync every 5 minutes, reconcile every 6 hours
database-replicator sync \
--source "postgresql://..." \
--target "postgresql://..." \
--sync-interval 300 \
--reconcile-interval 21600
# Run a single sync cycle (useful for cron jobs)
database-replicator sync \
--source "postgresql://..." \
--target "postgresql://..." \
--once
# High-frequency sync without delete detection
database-replicator sync \
--source "postgresql://..." \
--target "postgresql://..." \
--sync-interval 60 \
--no-reconcile

Run sync as a background process that survives terminal disconnection:
# Start sync as a daemon (detaches from terminal)
database-replicator sync \
--source "postgresql://..." \
--target "postgresql://..." \
--daemon
# Check daemon status
database-replicator sync --daemon-status
# Stop a running daemon
database-replicator sync --stop

Daemon behavior:

- Logs to ~/.seren-replicator/sync.log
- PID stored in ~/.seren-replicator/sync.pid
- Survives terminal closure and SSH disconnection
- Gracefully stops on SIGTERM
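To inspect a running daemon, the log and PID files listed above are all you need:

# Follow the daemon's log output
tail -f ~/.seren-replicator/sync.log

# Confirm the recorded PID is still alive
kill -0 "$(cat ~/.seren-replicator/sync.pid)" && echo "daemon is running"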
With filtering:
# Sync only specific databases
database-replicator sync \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--include-databases "myapp,analytics"
# Sync with table exclusions
database-replicator sync \
--source "..." \
--target "..." \
--exclude-tables "myapp.logs,myapp.cache"Note: Table-level predicates (
--table-filter,--time-filter, or config file rules) require PostgreSQL 15+ on the source so publications can useWHEREclauses. Schema-only tables work on all supported versions.
Important Security Note:
PostgreSQL subscriptions store connection strings (including passwords) in the pg_subscription system catalog. To avoid storing passwords in the catalog, configure a .pgpass file on your target PostgreSQL server:
- Create /var/lib/postgresql/.pgpass with source-host:5432:dbname:username:password
- Set permissions: chmod 0600 /var/lib/postgresql/.pgpass
- Omit the password from the source URL when running sync
See the Security section below for details.
When your source database doesn't have wal_level=logical configured (which is the case for most managed PostgreSQL services), the sync command automatically falls back to xmin-based incremental sync. This requires no configuration changes on your source database.
How xmin works:
PostgreSQL maintains a hidden system column called xmin on every row. It contains the transaction ID that last inserted or updated that row. By tracking the highest xmin value seen, we can efficiently query for all rows that have changed:
SELECT * FROM table WHERE xmin::text::bigint > $last_seen_xmin;

Benefits:
- Zero source configuration: Works with any PostgreSQL database, including managed services like Neon, AWS RDS, and Heroku that don't allow wal_level=logical
- Automatic detection: No flags or configuration needed - just run sync and it works
- Resume support: Progress is persisted to disk, allowing recovery after interruptions
- Efficient batching: Changes are processed in configurable batch sizes to manage memory
Limitations vs. logical replication:
| Aspect | Logical Replication | xmin-Based Sync |
|---|---|---|
| Latency | Sub-second | Polling interval (default 1 hour) |
| Delete detection | Real-time | Periodic reconciliation |
| Network traffic | Streams only changes | Full row on any column change |
| Source config | Requires wal_level=logical | None required |
When xmin-based sync is recommended:
- Source database doesn't support wal_level=logical (most managed services)
- You can't or don't want to modify the source database configuration
- Near-real-time sync is acceptable (seconds latency vs. sub-second)
- You want the simplest possible setup
Delete detection:
Since xmin only tracks modified rows (deleted rows are gone), xmin-based sync performs periodic primary key reconciliation to detect deletions:
- Fetches all primary keys from source table
- Compares with primary keys in target table
- Deletes rows from target that no longer exist in source
This reconciliation runs periodically (configurable, default every 10 sync cycles) to balance performance and delete detection latency.
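A minimal sketch of what that reconciliation amounts to, using psql and coreutils (the table and key names are illustrative; the tool batches this internally):

# Export primary keys from both sides, then diff them
psql "$SOURCE" -c "COPY (SELECT id FROM users) TO STDOUT" | sort > /tmp/source_pks
psql "$TARGET" -c "COPY (SELECT id FROM users) TO STDOUT" | sort > /tmp/target_pks

# Keys present only on the target are rows deleted on the source
comm -13 /tmp/source_pks /tmp/target_pks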
Monitor replication health and lag in real-time:
database-replicator status \
--source "postgresql://user:pass@source-host:5432/db" \
--target "postgresql://user:pass@target-host:5432/db"Output includes:
- Subscription state (streaming, syncing, stopped, etc.)
- Replication lag in bytes and time
- Last received LSN (Log Sequence Number)
- Statistics from both source and target
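When logical replication is in use, you can also query the underlying statistics views directly (standard PostgreSQL views; assumes psql access):

# On the target: subscription progress
psql "$TARGET" -c "SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;"

# On the source: walsender state and replay lag
psql "$SOURCE" -c "SELECT application_name, state, replay_lag FROM pg_stat_replication;"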
With filtering:
database-replicator status \
--source "..." \
--target "..." \
--include-databases "myapp"Monitor continuously:
# Check status every 5 seconds
watch -n 5 'database-replicator status --source "$SOURCE" --target "$TARGET"'

Validate data integrity by comparing checksums between source and target:
database-replicator verify \
--source "postgresql://user:pass@source-host:5432/db" \
--target "postgresql://user:pass@target-host:5432/db"What happens during verify:
- Compute checksums: Calculates checksums for all tables on both sides
- Compare: Compares checksums to detect any discrepancies
- Report: Shows detailed results per table
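For a quick manual spot-check of a single table, you can hash its contents on both sides yourself (a rough sketch, not the tool's exact algorithm; "users" is an illustrative table name):

# Should print the same hash for source and target if the table matches
for url in "$SOURCE" "$TARGET"; do
  psql "$url" -t -c "SELECT md5(string_agg(t::text, '' ORDER BY t::text)) FROM users t;"
done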
With filtering:
database-replicator verify \
--source "..." \
--target "..." \
--include-databases "myapp" \
--exclude-tables "myapp.logs"Selective replication allows you to choose exactly which databases and tables to replicate, giving you fine-grained control over your replication.
Replicate only specific databases:
# Include only specific databases
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--include-databases "myapp,analytics"
# Exclude specific databases
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--exclude-databases "test,staging"Note: Database filters are mutually exclusive - you cannot use both --include-databases and --exclude-databases at the same time.
Replicate only specific tables or exclude certain tables:
# Include only specific tables (format: database.table)
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--include-tables "myapp.users,myapp.orders,analytics.events"
# Exclude specific tables
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--exclude-tables "myapp.logs,myapp.cache,analytics.temp_data"Note: Table filters are mutually exclusive - you cannot use both --include-tables and --exclude-tables at the same time.
Skip data for heavy archives while keeping the schema in sync:
database-replicator init \
--source "$SRC" \
--target "$TGT" \
--schema-only-tables "myapp.audit_logs,analytics.evmlog_strides"Schema-only tables are recreated with full DDL but no rows, which dramatically reduces dump/restore time for historical partitions or archived hypertables.
Filter tables down to the rows you actually need:
database-replicator init \
--source "$SRC" \
--target "$TGT" \
--table-filter "output:series_time >= NOW() - INTERVAL '6 months'" \
--table-filter "transactions:status IN ('active','pending')"Each --table-filter takes [db.]table:SQL predicate. During init, data is streamed with COPY (SELECT ... WHERE predicate); during sync, we create PostgreSQL publications that emit only rows matching those predicates (requires PostgreSQL 15+ on the source).
For time-series tables (e.g., TimescaleDB hypertables) use the shorthand table:column:window:
database-replicator init \
--source "$SRC" \
--target "$TGT" \
--time-filter "metrics:created_at:6 months" \
--time-filter "billing_events:event_time:1 year"Supported window units: seconds, minutes, hours, days, weeks, months, and years. The shorthand expands to column >= NOW() - INTERVAL 'window'.
Combine database, table, and predicate filtering for precise control:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@seren-host:5432/postgres" \
--include-databases "myapp,analytics" \
--exclude-tables "myapp.logs" \
--schema-only-tables "analytics.evmlog_strides" \
--time-filter "analytics.metrics:created_at:6 months"Large replications often need different rules per database. Describe them in TOML and pass --config to both init and sync:
database-replicator init \
--source "$SRC" \
--target "$TGT" \
--config replication-config.toml

Example config file:
[databases.mydb]
# Schema-only tables (structure but no data)
schema_only = [
"analytics.evmlog_strides",
"reporting.archive"
]
# Table filters with WHERE clauses
[[databases.mydb.table_filters]]
table = "events"
schema = "analytics"
where = "created_at > NOW() - INTERVAL '90 days'"
[[databases.mydb.table_filters]]
table = "transactions"
where = "status IN ('active', 'pending')"
# Time filters (shorthand)
[[databases.mydb.time_filters]]
table = "metrics"
schema = "analytics"
column = "timestamp"
last = "6 months"See docs/replication-config.md for the full schema. CLI flags merge on top of the file so you can override a single table without editing the config.
PostgreSQL databases can have multiple schemas (namespaces) with identically-named tables. For example, both public.orders and analytics.orders can exist in the same database. Schema-aware filtering lets you target specific schema.table combinations to avoid ambiguity.
CLI with dot notation:
# Include tables from specific schemas
database-replicator init \
--source "$SRC" \
--target "$TGT" \
--schema-only-tables "analytics.large_table,public.temp"
# Filter tables in non-public schemas
database-replicator init \
--source "$SRC" \
--target "$TGT" \
--table-filter "analytics.events:created_at > NOW() - INTERVAL '90 days'" \
--table-filter "reporting.metrics:status = 'active'"
# Time filters with schema qualification
database-replicator init \
--source "$SRC" \
--target "$TGT" \
--time-filter "analytics.metrics:timestamp:6 months"TOML config with explicit schema field:
[databases.mydb]
# Schema-only tables (structure but no data)
schema_only = [
"analytics.evmlog_strides", # Dot notation
"reporting.archive"
]
# Table filters with explicit schema field
[[databases.mydb.table_filters]]
table = "events"
schema = "analytics"
where = "created_at > NOW() - INTERVAL '90 days'"
# Time filters with schema
[[databases.mydb.time_filters]]
table = "metrics"
schema = "analytics"
column = "timestamp"
last = "6 months"For convenience, table names without a schema qualifier default to the public schema:
# These are equivalent:
--schema-only-tables "users"
--schema-only-tables "public.users"
# TOML equivalent:
schema_only = ["users"] # Defaults to public schema
schema_only = ["public.users"] # Explicit public schemaThis means existing configurations continue to work without modification.
Without schema qualification, filtering "orders" is ambiguous if you have both public.orders and analytics.orders. Schema-aware filtering ensures:
- Precise targeting: Replicate analytics.orders while excluding public.orders
- No collisions: Different schemas can have identically-named tables
- FK safety: Cascading truncates handle schema-qualified FK relationships correctly
- Resume correctness: Checkpoints detect schema scope changes and invalidate when the replication scope shifts
Interactive mode provides a user-friendly terminal UI for selecting databases and tables to replicate. This is ideal for exploratory replications or when you're not sure exactly what you want to replicate.
Interactive mode is the default for init, validate, and sync commands. Simply run the command without any filter flags:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres"-
Select Databases: A multi-select checklist shows all available databases. Use arrow keys to navigate, space to select, and enter to confirm.
-
Select Tables to Exclude (optional): For each selected database, you can optionally exclude specific tables. If you don't want to exclude any tables, just press enter.
-
Review Configuration: The tool shows a summary of what will be replicated, including:
- Databases to replicate
- Tables to exclude (if any)
-
Confirm: You'll be asked to confirm before proceeding.
Connecting to source database...
✓ Connected to source
Discovering databases on source...
✓ Found 4 database(s)
Select databases to replicate:
(Use arrow keys to navigate, Space to select, Enter to confirm)
> [x] myapp
[x] analytics
[ ] staging
[ ] test
✓ Selected 2 database(s):
- myapp
- analytics
Discovering tables in database 'myapp'...
✓ Found 15 table(s) in 'myapp'
Select tables to EXCLUDE from 'myapp' (or press Enter to include all):
(Use arrow keys to navigate, Space to select, Enter to confirm)
[ ] users
[ ] orders
[x] logs
[x] cache
[ ] products
✓ Excluding 2 table(s) from 'myapp':
- myapp.logs
- myapp.cache
========================================
Replication Configuration Summary
========================================
Databases to replicate: 2
✓ myapp
✓ analytics
Tables to exclude: 2
✗ myapp.logs
✗ myapp.cache
========================================
Proceed with this configuration? [Y/n]:
To use CLI filter flags instead of interactive mode, add the --no-interactive flag:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@seren-host:5432/postgres" \
--no-interactive \
--include-databases "myapp,analytics"Note: The --yes flag (for init command) automatically disables interactive mode since it's meant for automation.
By default, the init command uses SerenAI's managed cloud service to execute replication jobs. This means your replication runs on AWS infrastructure managed by SerenAI, with no AWS account or setup required on your part.
- No network interruptions: Your replication continues even if your laptop loses connectivity
- No laptop sleep: Your computer can sleep or shut down without affecting the job
- Faster performance: Replication runs on dedicated cloud infrastructure closer to your databases
- No local resource usage: Your machine's CPU, memory, and disk are not consumed
- Automatic monitoring: Built-in observability with CloudWatch logs and metrics
- Cost-free: SerenAI covers all AWS infrastructure costs
When you run init without the --local flag, the tool:
- Submits your job to SerenAI's API with encrypted credentials
- Provisions an EC2 worker sized appropriately for your database
- Executes replication on the cloud worker
- Monitors progress and shows you real-time status updates
- Self-terminates when complete to minimize costs
Your database credentials are encrypted with AWS KMS and never logged or stored in plaintext.
Remote execution requires a SerenDB API key for authentication. The tool obtains the API key in one of two ways:
export SEREN_API_KEY="your-api-key-here"
database-replicator init --source "..." --target "..."

If SEREN_API_KEY is not set, the tool will prompt you to enter your API key:
Remote execution requires a SerenDB API key for authentication.
You can generate an API key at:
https://console.serendb.com/api-keys
Enter your SerenDB API key: [input]
Getting Your API Key:
- Sign up for SerenDB at console.serendb.com/signup
- Navigate to console.serendb.com/api-keys
- Generate a new API key
- Copy and save it securely (you won't be able to see it again)
Security Note: Never commit API keys to version control. Use environment variables or secure credential management.
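One pattern that keeps keys out of dotfiles and shell history is to fetch them from a secrets manager at run time (illustrative; the secret name is hypothetical, and any secret store works the same way):

# Example using AWS Secrets Manager
export SEREN_API_KEY="$(aws secretsmanager get-secret-value \
  --secret-id seren/api-key --query SecretString --output text)"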
Remote execution is the default - just run init as normal:
# Runs on SerenAI's cloud infrastructure (default)
database-replicator init \
--source "postgresql://user:pass@source-host:5432/db" \
--target "postgresql://user:pass@seren-host:5432/db"The tool will:
- Submit the job to SerenDB's managed API
- Show you the job ID and trace ID for monitoring
- Poll for status updates and display progress
- Report success or failure when complete
Example output:
Submitting replication job...
✓ Job submitted
Job ID: 550e8400-e29b-41d4-a716-446655440000
Trace ID: 660e8400-e29b-41d4-a716-446655440000
Polling for status...
Status: provisioning EC2 instance...
Status: running (1/2): myapp
Status: running (2/2): analytics
✓ Replication completed successfully
If you prefer to run replication on your local machine, use the --local flag:
# Runs on your local machine
database-replicator init \
--source "postgresql://user:pass@source-host:5432/db" \
--target "postgresql://user:pass@seren-host:5432/db" \
--local

Local execution is useful when:
- You're testing or developing
- Your databases are not accessible from the internet
- You need full control over the execution environment
- You're okay with keeping your machine running during the entire operation
export SEREN_REMOTE_API="https://dev.api.seren.cloud/replication"
database-replicator init \
--source "..." \
--target "..."# Set 12-hour timeout for very large databases
database-replicator init \
--source "..." \
--target "..." \
--job-timeout 43200

- Check your internet connection
- Verify you can reach SerenDB's API endpoint
- Try with --local as a fallback
- AWS may be experiencing capacity issues in the region
- Wait a few minutes and check status again
- Contact SerenAI support if it persists for > 10 minutes
- Check the error message in the status response
- Verify your source and target database credentials
- Ensure databases are accessible from the internet
- Try running with --local to validate locally first
For more details on the AWS infrastructure and architecture, see the AWS Setup Guide.
The tool works seamlessly with any PostgreSQL-compatible database provider. Here are examples for common providers:
database-replicator init \
--source "postgresql://user:[email protected]/mydb" \
--target "postgresql://user:pass@seren-host:5432/mydb"database-replicator init \
--source "postgresql://user:[email protected]:5432/mydb" \
--target "postgresql://user:pass@seren-host:5432/mydb"Note: AWS RDS requires the rds_replication role for logical replication:
GRANT rds_replication TO myuser;database-replicator init \
--source "postgresql://user:[email protected]:5432/mydb" \
--target "postgresql://user:pass@seren-host:5432/mydb"database-replicator init \
--source "postgresql://user:[email protected]:5432/mydb" \
--target "postgresql://user:pass@seren-host:5432/mydb"Note: Ensure wal_level = logical in postgresql.conf and restart PostgreSQL.
All providers support standard PostgreSQL connection strings. Add SSL/TLS parameters as needed:
# With SSL mode
--source "postgresql://user:pass@host:5432/db?sslmode=require"
# With SSL and certificate verification
--source "postgresql://user:pass@host:5432/db?sslmode=verify-full&sslrootcert=/path/to/ca.crt"The tool implements secure credential handling to prevent command injection vulnerabilities and credential exposure:
- .pgpass Authentication: Database credentials are passed to PostgreSQL tools (pg_dump, pg_dumpall, psql, pg_restore) via temporary .pgpass files instead of command-line arguments. This prevents credentials from appearing in process listings (ps output) or shell history.
- Automatic Cleanup: Temporary .pgpass files are automatically removed when operations complete, even if the process crashes or is interrupted. This is implemented using Rust's RAII pattern (the Drop trait) to ensure cleanup happens reliably.
- Secure Permissions: On Unix systems, .pgpass files are created with 0600 permissions (owner read/write only) as required by PostgreSQL. This prevents other users on the system from reading credentials.
- No Command Injection: By using separate connection parameters (--host, --port, --dbname, --username) instead of embedding credentials in connection URLs passed to external commands, the tool eliminates command injection attack vectors.
Connection String Format: While you provide connection URLs to the tool (e.g., postgresql://user:pass@host:5432/db), these URLs are parsed internally and credentials are extracted securely. They are never passed as-is to external PostgreSQL commands.
Important Security Consideration: PostgreSQL logical replication subscriptions store connection strings in the pg_subscription system catalog table. This is a PostgreSQL design limitation - subscription connection strings (including passwords if provided) are visible to users who can query system catalogs.
Security Implications:
- Connection strings with passwords are stored in pg_subscription.subconninfo
- Users with the pg_read_all_settings role or SELECT on pg_subscription can view these passwords
- This information persists until the subscription is dropped
Recommended Mitigation - Configure .pgpass on Target Server:
To avoid storing passwords in the subscription catalog, configure a .pgpass file on your target PostgreSQL server:
- Create a .pgpass file in the PostgreSQL server user's home directory (typically /var/lib/postgresql/.pgpass):

  source-host:5432:dbname:username:password

- Set secure permissions:

  chmod 0600 /var/lib/postgresql/.pgpass
  chown postgres:postgres /var/lib/postgresql/.pgpass

- Use a password-less connection string when running sync:

  # Omit password from source URL
  database-replicator sync \
    --source "postgresql://user@source-host:5432/db" \
    --target "postgresql://user:pass@target-host:5432/db"
With this configuration, subscriptions will authenticate using the .pgpass file on the target server, and no password will be stored in pg_subscription.
Note: The tool displays a warning when creating subscriptions to remind you of this security consideration.
The tool uses several optimizations for fast, efficient database replication:
- Auto-detected parallelism: Automatically uses up to 8 parallel workers based on CPU cores
- Parallel dump: pg_dump with the --jobs flag for concurrent table exports
- Parallel restore: pg_restore with the --jobs flag for concurrent table imports
- Directory format: Uses PostgreSQL directory format to enable parallel operations
- Maximum compression: Level 9 compression for smaller dump sizes
- Faster transfers: Reduced network bandwidth and storage requirements
- Per-file compression: Each table compressed independently for parallel efficiency
- Blob support: Includes large objects (BLOBs) with the --blobs flag
- Binary data: Handles images, documents, and other binary data efficiently
To prevent connection timeouts when connecting through load balancers (like AWS ELB), the tool automatically configures TCP keepalive:
- Environment variables: Automatically sets PGKEEPALIVES=1, PGKEEPALIVESIDLE=60, and PGKEEPALIVESINTERVAL=10 for all PostgreSQL client tools
- Connection strings: Adds keepalive parameters to connection URLs for direct connections
No manual configuration needed - the tool handles this automatically.
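For reference, these are the equivalent manual settings if you run pg_dump or psql against the same databases outside the tool (values match what the tool sets):

export PGKEEPALIVES=1            # enable TCP keepalives
export PGKEEPALIVESIDLE=60       # idle seconds before the first probe
export PGKEEPALIVESINTERVAL=10   # seconds between probes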
These optimizations can significantly reduce replication time, especially for large databases with many tables.
Ensure your user has the required privileges:
-- On source
ALTER USER myuser WITH REPLICATION;
GRANT USAGE ON SCHEMA public TO myuser;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO myuser;
-- On target
ALTER USER myuser WITH SUPERUSER;

Provider-specific:
- AWS RDS: GRANT rds_replication TO myuser;
- Neon: Full support for logical replication out of the box
- Self-hosted: Ensure wal_level = logical in postgresql.conf
The tool handles existing publications gracefully. If you need to start over:
-- On target
DROP SUBSCRIPTION IF EXISTS seren_replication_sub;
-- On source
DROP PUBLICATION IF EXISTS seren_replication_pub;

Check status frequently during replication:
# Monitor until lag < 1 second
watch -n 5 'database-replicator status --source "$SOURCE" --target "$TARGET"'

If lag is high:
- Check network bandwidth between source and target
- Verify target database has sufficient resources (CPU, memory, disk I/O)
- Consider scaling target database instance
- Check for long-running queries on target blocking replication
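To see lag at the PostgreSQL level when logical replication is active, query the source's replication statistics directly (standard view; assumes psql access):

# WAL the subscriber has not yet replayed, per connected subscription
psql "$SOURCE" -c "SELECT application_name, state, \
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) AS replay_lag \
  FROM pg_stat_replication;"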
Symptom: Operations fail after 20-30 minutes with "connection closed" errors during init filtered copy.
Root Cause: When the target database is behind an AWS Elastic Load Balancer (ELB), the load balancer enforces idle connection timeouts (typically 60 seconds to 10 minutes). During long-running COPY operations, if data isn't flowing continuously, the ELB sees the connection as idle and closes it.
Solution: Increase the ELB idle timeout:
# Using AWS CLI
aws elbv2 modify-load-balancer-attributes \
--region us-east-1 \
--load-balancer-arn <ARN> \
--attributes Key=idle_timeout.timeout_seconds,Value=1800
# Or via Kubernetes service annotation
kubectl annotate service <postgres-service> \
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout=1800

Diagnosis Steps:
- Check if the target is behind a load balancer (hostname contains elb.amazonaws.com)
- Test basic connectivity: timeout 10 psql <target-url> -c "SELECT version();"
- Check PostgreSQL timeout settings (should be statement_timeout = 0)
- Check how much data is being copied to estimate operation duration
- If target is responsive but operations timeout after predictable intervals, it's likely an ELB/proxy timeout
Alternative: The tool automatically configures TCP keepalive to mitigate this issue, but extremely idle connections may still timeout.
When using filtered snapshots (table-level WHERE clauses or time filters), tables with foreign key relationships are truncated using TRUNCATE CASCADE to handle dependencies. This error means a dependent table would lose data because it's not included in your replication scope.
Problem: You're replicating a filtered table that has foreign key relationships, but some of the FK-related tables are not being copied. TRUNCATE CASCADE would delete data from those tables.
Solution: Include all FK-related tables in your replication scope:
# If you're filtering orders, also include users table
database-replicator init \
--source "$SOURCE" \
--target "$TARGET" \
--config replication.toml # Include all FK-related tables

Example config with FK-related tables:
[databases.mydb]
[[databases.mydb.table_filters]]
table = "orders"
where = "created_at > NOW() - INTERVAL '90 days'"
# Must also include users since orders references users(id)
[[databases.mydb.table_filters]]
table = "users"
where = "id IN (SELECT user_id FROM orders WHERE created_at > NOW() - INTERVAL '90 days')"Alternative: If you don't want to replicate the related tables, remove the foreign key constraint before replication.
Symptom: Connections succeed but queries hang indefinitely. Even simple queries like SELECT version() don't respond.
Diagnosis:
# Test with timeout
timeout 10 psql <target-url> -c "SELECT version();"
# If that hangs, check pod/container status
kubectl get pods -l app=postgres
kubectl logs <postgres-pod> --tail=100
# Check for locked queries (if you can connect)
psql <url> -c "SELECT pid, state, query FROM pg_stat_activity WHERE state != 'idle';"Solution: Restart the PostgreSQL instance or container. Check resource usage (CPU, memory, disk).
Replicate entire database with all tables:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/mydb" \
--target "postgresql://user:pass@target-host:5432/mydb" \
--yes
database-replicator sync \
--source "postgresql://user:pass@source-host:5432/mydb" \
--target "postgresql://user:pass@target-host:5432/mydb"Replicate only specific databases:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--include-databases "production,analytics" \
--yes
database-replicator sync \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--include-databases "production,analytics"Replicate only recent data:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/mydb" \
--target "postgresql://user:pass@target-host:5432/mydb" \
--time-filter "events:created_at:6 months" \
--time-filter "metrics:timestamp:1 year" \
--schema-only-tables "audit_logs,archive_table" \
--yes
database-replicator sync \
--source "postgresql://user:pass@source-host:5432/mydb" \
--target "postgresql://user:pass@target-host:5432/mydb" \
--time-filter "events:created_at:6 months" \
--time-filter "metrics:timestamp:1 year"Create replication.toml:
[databases.production]
# Schema-only tables (no data)
schema_only = [
"audit_logs",
"archive_data"
]
# Filter events to last 90 days
[[databases.production.table_filters]]
table = "events"
where = "created_at > NOW() - INTERVAL '90 days'"
# Filter metrics to last 6 months
[[databases.production.time_filters]]
table = "metrics"
column = "timestamp"
last = "6 months"Run replication:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--config replication.toml \
--yes
database-replicator sync \
--source "postgresql://user:pass@source-host:5432/postgres" \
--target "postgresql://user:pass@target-host:5432/postgres" \
--config replication.toml

Run replication locally instead of on cloud infrastructure:
database-replicator init \
--source "postgresql://user:pass@source-host:5432/mydb" \
--target "postgresql://user:pass@target-host:5432/mydb" \
--local \
--yes

No, logical replication requires the target to be the same or newer version than the source. You can replicate from PostgreSQL 12 → 13, but not 13 → 12.
Yes, but minimally. Logical replication adds some overhead:
- WAL generation (already happens for crash recovery)
- Replication slot maintains WAL files until consumed
- Network bandwidth for streaming changes
For most workloads, the impact is negligible. Monitor disk usage on source to ensure WAL files don't accumulate.
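A quick way to watch WAL retention on the source is to check replication slot lag (standard catalog view; assumes psql access):

# WAL retained per slot; an inactive slot that keeps growing is the red flag
psql "$SOURCE" -c "SELECT slot_name, active, \
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal \
  FROM pg_replication_slots;"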
Yes, the tool supports schema-aware filtering. You can replicate schema1.table from source to schema2.table on target by using schema-qualified table names.
PostgreSQL will continue to stream changes as fast as possible. If lag grows too large:
- Check target database resources (CPU, memory, disk I/O)
- Verify network bandwidth between source and target
- Consider scaling target database instance
Use the status command to monitor lag in real-time.
Yes, you can temporarily disable the subscription on the target:
ALTER SUBSCRIPTION seren_replication_sub DISABLE;

To resume:

ALTER SUBSCRIPTION seren_replication_sub ENABLE;

Drop the subscription on the target, then the publication on the source:
-- On target
DROP SUBSCRIPTION IF EXISTS seren_replication_sub;
-- On source
DROP PUBLICATION IF EXISTS seren_replication_pub;

Yes, the tool works with TimescaleDB. Use time-based filters for hypertables:
--time-filter "hypertable_name:time_column:6 months"This replicates only recent data from hypertables, reducing replication time significantly.
Yes, create multiple subscriptions on different target databases, all pointing to the same publication on the source.
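A sketch of what a second subscriber looks like (names follow the tool's convention; the $TARGET2 connection and credentials are illustrative):

# On a second target database, reuse the same source publication
psql "$TARGET2" -c "CREATE SUBSCRIPTION seren_replication_sub_2 \
  CONNECTION 'host=source-host port=5432 dbname=db user=myuser' \
  PUBLICATION seren_replication_pub;"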
- init: One-time snapshot replication (schema + data)
- sync: Continuous replication (streams changes in real-time)
Run init first to copy existing data, then sync to keep databases synchronized.
Yes, sync only streams changes - it doesn't copy existing data. Run init first to perform the initial snapshot, then sync to set up continuous replication.
xmin-based sync is an automatic fallback when your source database doesn't have wal_level=logical configured. It uses PostgreSQL's xmin system column to detect which rows have changed since the last sync.
The tool automatically detects your source's capabilities and chooses the best method - you don't need to configure anything.
Yes! If your source database has the default wal_level=replica, the sync command automatically uses xmin-based sync, which requires no source configuration changes - just SELECT privilege on the tables you want to sync.
Since deleted rows no longer exist, xmin can't track them. Instead, xmin-based sync periodically performs primary key reconciliation:
- Fetches all PKs from source
- Compares with PKs in target
- Deletes rows from target that don't exist in source
This runs every 10 sync cycles by default.
| Feature | Logical Replication | xmin-Based Sync |
|---|---|---|
| Latency | Sub-second | Polling interval (1 hour default) |
| Delete detection | Real-time | Periodic reconciliation (configurable) |
| Source requirements | wal_level=logical | None (SELECT only) |
| Network usage | Minimal (only changes) | Higher (full rows) |
Use logical replication for sub-second latency if your source supports it. Use xmin-based sync for zero-configuration simplicity.
- Main README - Multi-database support overview
- Replication Configuration Guide - Advanced filtering with TOML config files
- AWS Setup Guide - Remote execution infrastructure details
- CI/CD Guide - Automated testing and deployment
- CLAUDE.md - Development guidelines and technical details
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.