Troubleshooting

Use this page as the first stop when a workflow is blocked.

Connection issues

Check:

  • host, port, credentials, and SSL/TLS settings
  • database or bucket scope
  • server-side reachability from the DBConvert Streams deployment
  • whether the connection can be opened in Data Explorer
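When none of these checks stands out, it can help to rule out basic network reachability from the machine where DBConvert Streams runs. A minimal sketch in Python (host, port, and timeout are placeholders you would substitute for your own server):

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a database endpoint before blaming credentials or TLS.
# check_tcp("db.example.com", 5432)
```

If this returns False, the problem is network-level (firewall, security group, wrong host/port) rather than credentials or SSL settings.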

Explorer or SQL issues

Check:

  • whether the expected schemas, tables, or files are visible
  • whether you are in a direct database context or a DuckDB-backed file/multi-source context
  • whether a federated query needs alias-qualified names or explicit casts
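A federated query that fails on ambiguous names or mismatched types often works once every column is alias-qualified and explicitly cast. The sketch below uses SQLite's ATTACH purely as a stand-in for a multi-source context (the table and column names are invented for illustration; Streams' file/multi-source engine is DuckDB-backed, not SQLite):

```python
import sqlite3

# Two in-memory databases standing in for two attached sources.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS src2")

conn.execute("CREATE TABLE main.orders (id INTEGER, amount TEXT)")
conn.execute("CREATE TABLE src2.customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO main.orders VALUES (1, '19.99')")
conn.execute("INSERT INTO src2.customers VALUES (1, 'Ada')")

# Alias-qualified names disambiguate the identical "id" columns,
# and an explicit CAST resolves a type mismatch (TEXT amount -> REAL).
rows = conn.execute(
    """
    SELECT c.name, CAST(o.amount AS REAL) AS amount
    FROM main.orders AS o
    JOIN src2.customers AS c ON o.id = c.id
    """
).fetchall()
print(rows)  # [('Ada', 19.99)]
```

The same pattern — qualify every column with its source alias and cast at the join or projection — applies to federated queries across heterogeneous sources.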

Stream issues

Check:

  • source and target compatibility for the selected mode
  • whether the selected mode is Convert or CDC
  • stream logs and run history in Observability
  • whether the target requires staged file delivery or additional credentials
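When reading stream logs, filtering for error-level entries first usually narrows the problem fastest. A hypothetical helper (the log line format shown is an assumption for illustration, not the actual Streams format):

```python
def error_lines(log_text: str, levels=("ERROR", "FATAL")) -> list:
    """Return log lines whose level field matches one of `levels`.

    Assumes lines shaped like: '<timestamp> <LEVEL> <message>'.
    """
    out = []
    for line in log_text.splitlines():
        parts = line.split(maxsplit=2)
        if len(parts) >= 2 and parts[1] in levels:
            out.append(line)
    return out

sample = """\
2024-01-15T10:00:00Z INFO stream started
2024-01-15T10:00:05Z ERROR target: connection reset
2024-01-15T10:00:06Z FATAL stream aborted"""
print(error_lines(sample))
```

Adjust the split logic to whatever format your exported logs actually use.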

File and object storage issues

Check:

  • local base path permissions
  • S3-compatible credentials and bucket access
  • path validity and manifest behavior for object storage workflows
  • whether the issue is in explorer listing, SQL, or stream execution specifically
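For local-path problems, a quick permission probe can rule out the filesystem before you touch stream configuration. A hypothetical helper (not part of the product):

```python
import os

def diagnose_base_path(path: str) -> list:
    """Return human-readable problems with a local base path, if any."""
    problems = []
    if not os.path.exists(path):
        problems.append("path does not exist")
    elif not os.path.isdir(path):
        problems.append("path is not a directory")
    else:
        if not os.access(path, os.R_OK):
            problems.append("no read permission")
        if not os.access(path, os.W_OK):
            problems.append("no write permission (needed for file targets)")
    return problems
```

An empty list means the base path itself is healthy, pointing the investigation toward credentials, manifests, or the stream itself.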

Snowflake issues

Start with:

  • endpoint reachability
  • credentials and schema access
  • target-only assumptions from Snowflake Target

Snowflake: timestamp corruption (far-future dates)

Symptom: Timestamps like 2005-05-25 11:30:37 appear as far-future dates like 37366-12-22 after loading Parquet files into Snowflake.

Cause: Snowflake's default Parquet reader ignores the Arrow logical type metadata and misinterprets millisecond timestamps as seconds, inflating each value by a factor of 1,000 (which is how 2005 becomes the year 37366).

Solution: DBConvert Streams automatically includes USE_LOGICAL_TYPE = TRUE in the Snowflake COPY INTO command. If you run COPY INTO manually, ensure this option is set:

COPY INTO my_table FROM @my_stage/file.parquet
FILE_FORMAT = (TYPE = 'PARQUET' USE_LOGICAL_TYPE = TRUE)

Snowflake: binary data fails to load (UTF-8 error)

Symptom: Invalid UTF8 detected while decoding '0x89PNG...' when loading tables containing BLOB/binary data (e.g., images).

Cause: Snowflake applies strict UTF-8 validation to Parquet files by default. Raw binary data fails this check.

Solution: DBConvert Streams automatically converts binary data to base64-encoded strings and maps BLOB columns to VARCHAR in Snowflake DDL. No manual action is needed — this is handled transparently during stream execution.
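The conversion described above can be pictured roughly like this (a sketch of the idea, not Streams' actual code):

```python
import base64

# Raw PNG bytes begin with 0x89 'PNG' -- invalid as UTF-8 text,
# which is exactly what triggers Snowflake's validation error.
blob = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8

# Base64 encoding yields pure ASCII, which passes UTF-8 validation
# and can be stored in a VARCHAR column.
encoded = base64.b64encode(blob).decode("ascii")
print(encoded)

# The original bytes remain recoverable on the consumer side:
assert base64.b64decode(encoded) == blob
```

If you need the raw bytes back inside Snowflake, BASE64_DECODE_BINARY can convert the VARCHAR value to a BINARY one.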

PostgreSQL TLS/SSL issues

Connection test passes but stream fails with SSL error

Symptom: Connection test in UI passes, but stream execution fails with:

tls: failed to verify certificate: x509: certificate signed by unknown authority

Cause: This can occur when custom SSL certificates are configured but cloud-specific TLS settings override them during stream execution.

Solution: Ensure your SSL certificates (CA, client cert/key) are configured in the connection settings. DBConvert Streams respects user-provided TLS configuration and only applies cloud-specific defaults as fallbacks when no custom TLS config is set.

If the issue persists:

  1. Verify the CA certificate is valid and matches the server
  2. Check that the SSL mode matches your server's requirements (require, verify-ca, or verify-full)
  3. For cloud databases (AWS RDS, Azure, DigitalOcean), confirm the endpoint hostname matches the certificate
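The three SSL modes differ in how much of the certificate is actually verified. The sketch below maps them onto a client-side TLS context using Python's ssl module (illustrative of what each mode implies, not Streams internals):

```python
import ssl

def context_for_sslmode(mode, cafile=None):
    """Build a TLS context matching a PostgreSQL-style sslmode."""
    ctx = ssl.create_default_context(cafile=cafile)
    if mode == "require":
        # Encrypt the connection but skip certificate validation entirely.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    elif mode == "verify-ca":
        # Validate the certificate chain against the CA, but not the hostname.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_REQUIRED
    elif mode == "verify-full":
        # Validate the chain AND require the hostname to match the certificate.
        ctx.check_hostname = True
        ctx.verify_mode = ssl.CERT_REQUIRED
    else:
        raise ValueError("unknown sslmode: %s" % mode)
    return ctx
```

The "unknown authority" error above corresponds to chain validation failing under verify-ca or verify-full; a hostname mismatch (step 3) only surfaces under verify-full.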

See SSL Configuration for detailed setup.