Estuary
REAL-TIME ETL & CDC

Stream into Databricks with your free account

Continuously ingest and deliver both streaming and batch change data from hundreds of sources using Estuary's custom no-code connectors.

  • <100ms data pipelines
  • 100+ connectors
  • 2-5x lower cost than batch ELT
01. Select a source
02. Transform in-flight
03. Deliver to Databricks

Databricks connector details

Estuary’s Databricks connector materializes Flow collections into tables in a Databricks SQL Warehouse. Data changes are first written to a Unity Catalog Volume and then applied transactionally to Databricks tables, ensuring reliable delivery of real-time updates. This connector supports both standard merges and delta updates for high-performance CDC workflows.

  • Real-time CDC data sync from Flow collections into Databricks tables
  • Choice of standard (merge) or delta updates to optimize performance and costs
  • Native integration with Databricks Unity Catalog for secure, managed storage
  • Flexible sync scheduling to balance freshness and warehouse costs

For more details about the Databricks connector, check out the documentation page.
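As an illustrative sketch only (field names, values, and the connector image tag below are representative assumptions, not an authoritative spec; see the documentation page for the current schema), a Flow materialization into a Databricks SQL Warehouse looks roughly like this:

```yaml
materializations:
  acmeCo/databricks-sales:            # hypothetical materialization name
    endpoint:
      connector:
        # Representative image reference; check the docs for the current tag
        image: ghcr.io/estuary/materialize-databricks:dev
        config:
          address: dbc-abc123.cloud.databricks.com   # SQL Warehouse host (placeholder)
          http_path: /sql/1.0/warehouses/abcd1234    # SQL Warehouse HTTP path (placeholder)
          catalog_name: main                         # Unity Catalog to write into
          credentials:
            auth_type: PAT
            personal_access_token: dapi...           # placeholder token
    bindings:
      - resource:
          table: sales
          schema: default
          delta_updates: true   # delta updates skip the merge query for lower warehouse cost
        source: acmeCo/sales-collection
```

The `delta_updates` flag per binding is how you choose between the standard merge path and the cheaper append-style delta path described above.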

How to connect your data source to Databricks in 3 easy steps

1

Connect your data source

Select from more than 100 supported databases and SaaS platforms including PostgreSQL, MySQL, SQL Server, MongoDB, and Kafka.

2

Prepare and transform your data

Apply transformations and schema mapping as data moves, whether you are streaming in real time or loading in batches.

3

Sync to Databricks

Continuously or periodically deliver data into your destination, with support for change data capture and reliable delivery for accurate insights.
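Step 1 is also expressed as YAML in Flow. As a hedged sketch (connector image, field names, and values are representative assumptions; the docs have the exact schema per source), a PostgreSQL capture feeding a collection looks roughly like:

```yaml
captures:
  acmeCo/postgres-source:             # hypothetical capture name
    endpoint:
      connector:
        # Representative image reference; check the docs for the current tag
        image: ghcr.io/estuary/source-postgres:dev
        config:
          address: db.example.com:5432   # placeholder database address
          user: flow_capture
          password: secret               # placeholder credential
          database: postgres
    bindings:
      - resource:
          namespace: public
          table: orders
        target: acmeCo/orders            # collection later bound to Databricks
```

The resulting collection (`acmeCo/orders` here) is what steps 2 and 3 transform and materialize into your Databricks tables.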

Learn more with some related videos

Dive deeper into Databricks with tutorials and walkthroughs from our YouTube channel.

Get Started Free

Trusted by data teams worldwide

All data connections are fully encrypted in transit and at rest. Estuary also supports private cloud and BYOC deployments for maximum security and compliance.


HIGH THROUGHPUT

Distributed event-driven architecture enables boundless scaling with exactly-once semantics.


DURABLE REPLICATION

Cloud-storage-backed CDC with heartbeats ensures reliability, even if your destination is down.


REAL-TIME INGESTION

Capture and relay every insert, update, and delete in milliseconds.

Real-time, high throughput

Point a connector and replicate changes to Databricks in <100ms. Leverage high-availability, high-throughput Change Data Capture. Or choose from hundreds of batch and real-time connectors to move and transform data using ELT and ETL.

  • Ensure your Databricks insights always reflect the latest data by connecting your databases to Databricks with change data capture.
  • Or connect critical SaaS apps to Databricks with real-time data pipelines.

See how you can integrate any source with Databricks:

Details

or choose from these popular data sources:

PostgreSQL logo
PostgreSQL
MySQL logo
MySQL
SQL Server logo
SQL Server
MongoDB logo
MongoDB
Apache Kafka logo
Apache Kafka
BigQuery logo
BigQuery
Snowflake Data Cloud logo
Snowflake Data Cloud

Don't see a connector? Request one and our team will get back to you within 24 hours.

Pipelines as fast as Kafka, as easy as managed ELT/ETL, and cheaper than building your own.

Feature Comparison

                     Estuary              Batch ELT/ETL        DIY Python     Kafka
Price                $                    $$-$$$               $$-$$$         $$-$$$$
Speed                <100ms               5min+                Varies         <100ms
Ease                 Analysts can manage  Analysts can manage  Data Engineer  Senior Data Engineer
Scale
Maintenance Effort   Low                  Medium               High           High
Detailed Comparison

Deliver real-time and batch data from DBs, SaaS, APIs, and more


Popular sources/destinations you can sync your data with

Choose from more than 100 supported databases and SaaS applications. Click any source/destination below to open the integration guide and learn how to sync your data in real time or batches.