Stream data from PostgreSQL to Apache Kafka
Sync your PostgreSQL data with Apache Kafka in minutes using Estuary Flow for real-time, no-code integration and seamless data pipelines.
- No credit card required
- 30-day free trial


- 100s of connectors
- 5,500+ active users
- <100 ms end-to-end latency
- 7+ GB/sec single dataflow

PostgreSQL connector details
Built for real-time data integration, the Estuary PostgreSQL connector streams inserts, updates, and deletes from PostgreSQL databases using Change Data Capture (CDC) via logical replication. It reads directly from the write-ahead log (WAL) to deliver low-latency, exactly-once data movement into Flow collections. The connector supports self-hosted, RDS, Aurora, Cloud SQL, Azure Database for PostgreSQL, and Supabase, with secure connectivity options such as SSH tunneling and SSL.
- Continuous CDC streaming through PostgreSQL logical replication
- Works with managed and on-prem PostgreSQL instances
- Supports backfill and read-only captures
- Automatically manages replication slots and publications
- Secure setup via SSH or SSL
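
Because the connector reads changes from the write-ahead log over logical replication, the source database must have logical replication enabled. As a quick sanity check before creating a capture, the hedged sketch below uses the psycopg2 client (an assumption on our side, not something the connector requires) with placeholder connection details to confirm wal_level and list any existing replication slots and publications.

```python
# Sketch: verify that a PostgreSQL instance is ready for CDC via logical replication.
# Assumes the psycopg2 package; all connection values below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="your-postgres-host",
    port=5432,
    dbname="your_database",
    user="flow_capture",        # hypothetical user the capture would connect as
    password="your-password",
)

with conn, conn.cursor() as cur:
    # Logical replication requires wal_level = 'logical'.
    cur.execute("SHOW wal_level;")
    print("wal_level:", cur.fetchone()[0])

    # Replication slots already present on the server.
    cur.execute("SELECT slot_name, plugin, active FROM pg_replication_slots;")
    for slot_name, plugin, active in cur.fetchall():
        print(f"slot={slot_name} plugin={plugin} active={active}")

    # Publications determine which tables are exposed to logical replication.
    cur.execute("SELECT pubname, puballtables FROM pg_publication;")
    for pubname, puballtables in cur.fetchall():
        print(f"publication={pubname} all_tables={puballtables}")

conn.close()
```

If wal_level comes back as anything other than logical, update it in postgresql.conf (or your cloud provider's parameter settings) and restart the database before publishing the capture; the connector itself takes care of creating and managing its replication slot and publication.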

Apache Kafka connector details
The Apache Kafka materialization connector publishes data from Estuary Flow collections to Kafka topics, enabling downstream systems to consume real-time streams of structured, reliable data.
- Continuous streaming: Streams collection updates to Kafka topics in real-time for event-driven architectures and analytics pipelines.
- Flexible message encoding: Supports both Avro (with schema registry) and JSON formats, giving teams flexibility in serialization strategy.
- Secure authentication: Compatible with SASL/PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512 authentication methods, along with TLS encryption.
- Scalable configuration: Allows you to define topic partitions and replication factors for performance and redundancy.
- Schema registry support: Seamlessly integrates with Confluent Cloud or self-hosted schema registries for Avro schema management.
- At-least-once delivery: Ensures reliable message delivery with future support planned for exactly-once semantics.
💡 Tip: When connecting to Confluent Cloud, use the PLAIN SASL mechanism and provide your schema registry key and secret for authentication.
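
Downstream systems read the materialized topics with any standard Kafka client. As a rough sketch (not Estuary code), the example below uses the confluent-kafka Python package with SASL/PLAIN over TLS, matching the Confluent Cloud tip above; the broker address, API key and secret, consumer group, and topic name are all placeholders, and JSON message encoding is assumed. With Avro encoding you would deserialize through the schema registry instead.

```python
# Sketch: consume JSON records that the Kafka materialization publishes.
# Assumes the confluent-kafka package; all connection values are placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "your-broker:9092",
    "security.protocol": "SASL_SSL",    # TLS plus SASL, as supported by the connector
    "sasl.mechanisms": "PLAIN",         # use PLAIN for Confluent Cloud
    "sasl.username": "your-api-key",
    "sasl.password": "your-api-secret",
    "group.id": "flow-demo-reader",     # hypothetical consumer group
    "auto.offset.reset": "earliest",
})

consumer.subscribe(["your_topic"])      # topic the materialization binding writes to

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        # With JSON encoding, each message value is a JSON document from the Flow collection.
        record = json.loads(msg.value())
        print(record)
finally:
    consumer.close()
```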
How to integrate PostgreSQL with Apache Kafka in 3 simple steps using Estuary Flow
Connect PostgreSQL as Your Real-Time Data Source
Set up a real-time source connector for PostgreSQL in minutes. Estuary captures change data (CDC), events, or snapshots — no custom pipelines, agents, or manual configuration needed.
Configure Apache Kafka as Your Target
Choose Apache Kafka as your target system. Estuary intelligently maps schemas, supports both batch and streaming loads, and adapts to schema changes automatically.
Deploy and Monitor Your End-to-End Data Pipeline
Launch your pipeline and monitor it from a single UI. Estuary Flow guarantees exactly-once delivery, handles backfills and replays, and scales with your data — without engineering overhead.
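
Alongside the Flow UI, you can also watch the capture from the database side. The sketch below is only an illustration, assuming psycopg2 and placeholder credentials: it queries pg_replication_slots to see how far each logical slot's confirmed position trails the current WAL position, a rough proxy for capture lag.

```python
# Sketch: check replication-slot lag on the source database as a rough health signal.
# Assumes psycopg2 and placeholder connection settings; slot names vary per capture.
import psycopg2

conn = psycopg2.connect(
    host="your-postgres-host",
    dbname="your_database",
    user="monitoring_user",     # hypothetical read-only user
    password="your-password",
)

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT slot_name,
               active,
               pg_size_pretty(
                   pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)
               ) AS replication_lag
        FROM pg_replication_slots
        WHERE slot_type = 'logical';
        """
    )
    for slot_name, active, lag in cur.fetchall():
        print(f"slot={slot_name} active={active} lag={lag}")

conn.close()
```

A slot that is inactive or whose lag keeps growing is worth investigating, since retained WAL on the source grows until the slot catches up.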
Estuary Flow in action
See how to build end-to-end pipelines using no-code connectors in minutes. Estuary Flow does the rest.
Why Estuary Flow is the best choice for data integration
Estuary Flow combines the broadest set of real-time streaming, change data capture (CDC), and batch connectors into a unified, modern data pipeline.

What customers are saying
Increase productivity 4x
With Flow, companies increase productivity 4x and deliver new projects in days, not months. They spend far less time on troubleshooting and far more on building new features. Flow decouples sources and destinations, so you can add and change systems without impacting others and share data across analytics, apps, and AI.
Spend 2-5x less
Estuary customers not only do 4x more; they also spend 2-5x less on ETL and ELT. Flow's unique ability to mix and match streaming and batch loading has also helped customers save as much as 40% on data warehouse compute costs.
Frequently Asked Questions

How do I Transfer Data from PostgreSQL?
- Set Up Capture: In Estuary Flow, go to Sources, click + NEW CAPTURE, and select the PostgreSQL connector.
- Enter Details: Add your PostgreSQL connection details and click SAVE AND PUBLISH.
- Materialize Data: Go to Destinations, choose your target system, link the PostgreSQL capture, and publish.

What is PostgreSQL?
Stream data from PostgreSQL with sub-100 ms latency. Just connect to Postgres and select the tables you want to capture. Estuary Flow immediately starts streaming changes from the write-ahead log (WAL) along with incremental snapshots of each table. Data is streamed exactly once and stored in parallel in your own private cloud storage for reuse at any time.
Move data from Postgres to any number of other destinations in parallel - Postgres to BigQuery, Postgres to Databricks, Postgres to Elasticsearch, Postgres to Redshift, Postgres to Snowflake, Postgres to SQL and NoSQL databases - and many more (see integrations).

What are the pricing options for Estuary Flow?
Estuary offers competitive and transparent pricing, with a free tier that includes 2 connector instances and up to 10 GB of data transfer per month. Explore our pricing options to see which plan fits your data integration needs.
Getting started with Estuary
Free account
Getting started with Estuary is simple. Sign up for a free account.
Sign up

Docs
Make sure you read through the documentation, especially the get started section.
Learn more

Community
I highly recommend you also join the Slack community. It's the easiest way to get support while you're getting started.
Join Slack Community

Estuary 101
Watch

Related integrations with PostgreSQL
DataOps made simple
Add advanced capabilities like schema inference and evolution with a few clicks. Or automate your data pipeline and integrate into your existing DataOps using Flow's rich CLI.
