Estuary 101: How To Build Right-Time Data Pipelines
Join hosts Dani and Zulf for a fast-paced walkthrough of how to design and ship right-time data pipelines with Estuary.
In this session, you’ll get:
- Context: What “right-time” really means, where Estuary fits among batch vs. streaming and managed vs. self-hosted options, and why unified ingestion reduces cost and complexity.
- Live End-to-End Demo: Connect CDC sources, apply declarative transformations, and materialize data simultaneously into a warehouse, analytical engines, and object storage—plus a look at observability, error recovery, and real-world scenarios like schema drift and backfills.
- Live Q&A: Ask about your specific stack, pipeline designs, and how to scale Estuary for enterprise workloads.
Perfect for data and analytics engineers, architects, and platform owners who want fresher data with fewer moving parts.
More videos

Capture Data from Oracle Using CDC (Docker Demo)
Don’t silo your data in Oracle: learn how to replicate it to a destination of your choice with CDC. We’ll cover archive log configuration and Estuary setup with a demo Oracle instance. Follow along!

- This example project is available at: https://github.com/estuary/examples/tree/main/oracle-capture
- Try it out for free at Estuary: https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=oracle_capture
- Reference Estuary’s Oracle docs, including instructions for non-container databases: https://docs.estuary.dev/reference/Connectors/capture-connectors/OracleDB/
- Have questions? Contact us on Slack: https://go.estuary.dev/slack

Media resources used in this video are from Pexels, Canva, and the YouTube Studio Audio Library.

0:00 Introduction
0:40 Oracle & CDC
2:30 Demo: Project overview
5:42 Demo: Run container
7:22 Demo: Estuary setup
8:50 Wrapping up
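Before pointing the connector at your database, it can help to confirm the CDC prerequisites the video covers are actually in place. Here is a minimal pre-flight sketch, assuming the python-oracledb driver; the credentials and DSN are placeholders, not values from the demo:

```python
# Hypothetical pre-flight check for Oracle CDC prerequisites.
# Assumes the python-oracledb driver; credentials and DSN are placeholders.
import oracledb

conn = oracledb.connect(user="system", password="<password>",
                        dsn="localhost:1521/ORCLCDB")
with conn.cursor() as cur:
    # ARCHIVELOG mode is required so change records survive log switches.
    cur.execute("SELECT log_mode FROM v$database")
    print("log_mode:", cur.fetchone()[0])  # expect 'ARCHIVELOG'

    # Supplemental logging lets the redo logs carry enough column data for CDC.
    cur.execute("SELECT supplemental_log_data_min FROM v$database")
    print("supplemental_log_data_min:", cur.fetchone()[0])  # expect 'YES'
conn.close()
```

If either query comes back with the wrong value, the connector docs linked above walk through the configuration steps.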

Save Webhook Data to Databricks in Real Time
Learn how to stream incoming webhook data to Databricks without setting up and maintaining your own server for webhook captures. Follow along with our 3-minute demo and try out Estuary Flow for free → https://dashboard.estuary.dev/register

Learn more from our:
- Website: https://estuary.dev/
- Webhook capture docs: https://docs.estuary.dev/reference/Connectors/capture-connectors/http-ingest/
- Databricks materialization docs: https://docs.estuary.dev/reference/Connectors/materialization-connectors/databricks/
- Blog article on webhook setup: https://estuary.dev/blog/webhook-setup/

0:00 Intro
0:19 Set up webhook capture
1:19 Configure webhook in Square
2:08 Create Databricks materialization
2:43 Outro
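Once the webhook capture is running, any HTTP client can exercise the endpoint. A minimal sketch, assuming the requests library; the endpoint URL, token, and event payload are placeholders (your capture’s actual URL appears in the Estuary dashboard):

```python
# Send a test event to an Estuary webhook capture endpoint.
# The URL, token, and payload below are placeholders, not real values.
import requests

ENDPOINT = "https://<your-capture-endpoint>/webhook-data"
event = {"type": "payment.updated", "data": {"id": "test-123", "amount": 1000}}

resp = requests.post(
    ENDPOINT,
    json=event,
    headers={"Authorization": "Bearer <your-token>"},  # if auth is enabled
    timeout=10,
)
resp.raise_for_status()  # a 2xx response means the event was accepted
print(resp.status_code)
```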

How to Stream Data into Snowflake
Ingest data into a Snowflake warehouse using real-time Snowpipe Streaming or batch COPY INTO commands. Estuary makes Snowflake integration simple with pre-built no-code connectors. Following along? Find the copy/pasteable commands in Estuary’s Snowflake docs: https://docs.estuary.dev/reference/Connectors/materialization-connectors/Snowflake/

- Set up your first data pipeline for free at Estuary: https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=snowflake_ingestion
- Learn more about Estuary’s Snowflake capabilities: https://estuary.dev/solutions/technology/real-time-snowflake-streaming/
- Read the complete guide to Snowpipe Streaming: https://estuary.dev/blog/snowpipe-streaming-fast-snowflake-ingestion/
- Discover how Snowflake fared in Estuary’s Data Warehouse Benchmark: https://estuary.dev/data-warehouse-benchmark-report/
- Download the Snowflake Ingestion Playbook: https://estuary.dev/snowflake-ingestion-whitepaper/

FAQ
1. What is the fastest way to load data into Snowflake? Snowpipe Streaming with row-based ingestion. In Estuary, you can enable it per table using Delta Updates.
2. Why use key pair authentication for Snowflake? It provides strong security and short-lived tokens, and it is Snowflake’s recommended approach for service integrations like Estuary.
3. Can I mix real-time and batch ingestion in the same pipeline? Yes. With Estuary’s Snowflake connector, you can run some tables in batch (COPY INTO or Snowpipe) and others in real time with Snowpipe Streaming.

Media resources used in this video are from Pexels, Canva, and the YouTube Studio Audio Library.

0:00 Introduction
1:05 Snowflake concerns
1:51 Ingestion options
3:23 Beginning the demo
3:47 Create Snowflake resources
4:28 User auth setup
5:17 Estuary connector config
6:44 Customization options
8:07 Wrapping up
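For the key pair authentication mentioned in the FAQ, Snowflake expects an RSA key pair with the public key attached to the service user. A minimal generation sketch, assuming the Python cryptography package; the key size and user name are illustrative, not taken from the video:

```python
# Generate an RSA key pair for Snowflake key pair authentication.
# Assumes the 'cryptography' package; names here are placeholders.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# PKCS#8 private key: paste into the Estuary connector configuration.
private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

# Public key body (strip the BEGIN/END lines) is registered in Snowflake:
#   ALTER USER estuary_user SET RSA_PUBLIC_KEY='<key body>';
public_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

print(private_pem.decode())
print(public_pem.decode())
```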

What’s Next for Data Warehouses? Lessons from Our Benchmark and Emerging Trends
Dani and Ben talk about key findings on performance ceilings, cost traps, and failure modes, and explore the major trends reshaping data warehouse architecture, including:
- Separation of Compute & Storage: How Snowflake Gen2, Databricks serverless, and open table formats like Iceberg are changing the game.
- Lakehouse Reality Check: What’s working for teams adopting Iceberg, schema evolution patterns, and lake-native pipelines.
- Flexibility Over Centralization: Moving beyond “one warehouse to rule them all.”

Stream CRM Data from HubSpot to MotherDuck
Need blazing-fast analytics to keep up with your customers? Try sending your HubSpot data to MotherDuck: Estuary makes the process simple and straightforward. Register for free at: https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=hubspot_motherduck

Ready to dive in deeper? Try these resources:
📄 HubSpot capture connector docs: https://docs.estuary.dev/reference/Connectors/capture-connectors/hubspot/
🐤 MotherDuck materialization docs: https://docs.estuary.dev/reference/Connectors/materialization-connectors/motherduck/
💬 Our Slack community: https://go.estuary.dev/slack

Music sourced from the YouTube Studio Audio Library

0:00 Introduction
0:20 Starting the pipeline
1:01 (Optional) Create HubSpot access token
1:39 Finish capture
2:15 MotherDuck materialization
2:51 Create staging bucket
4:40 MotherDuck credentials
5:20 Complete pipeline & wrap up
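Once the pipeline is flowing, the materialized tables are queryable straight from Python. A minimal sketch, assuming the duckdb package with a MotherDuck token; the database name and the hubspot_contacts table are hypothetical (your table names depend on the bindings you configure):

```python
# Query HubSpot data materialized into MotherDuck.
# Assumes the 'duckdb' package; database and table names are placeholders.
import duckdb

# 'md:' connects to MotherDuck; the token can also be supplied via the
# MOTHERDUCK_TOKEN environment variable instead of the URL.
con = duckdb.connect("md:my_database?motherduck_token=<your-token>")

rows = con.execute(
    """
    SELECT lifecyclestage, COUNT(*) AS contacts
    FROM hubspot_contacts          -- hypothetical materialized table
    GROUP BY lifecyclestage
    ORDER BY contacts DESC
    """
).fetchall()

for stage, n in rows:
    print(stage, n)
```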

Dekaf: How to Use Kafka Minus the Kafka
Have you ever wanted Kafka’s real-time pub/sub benefits without implementing and maintaining a whole Kafka ecosystem yourself? Learn how with Dekaf. We’ll cover some Kafka basics to help explain how Estuary’s Kafka API compatibility layer fits seamlessly into a modern data architecture.

- Register for a free Estuary account: https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=dekaf_video
- Learn more about Dekaf: https://docs.estuary.dev/reference/Connectors/dekaf/
- Find the example kcat command: https://docs.estuary.dev/guides/dekaf_reading_collections_from_kafka/#2-set-up-your-kafka-client
- Join us on Slack: https://go.estuary.dev/slack

Media resources used in this video are from Pexels, Canva, and the YouTube Studio Audio Library.

0:00 Introduction
0:30 What is Kafka?
1:04 The Kafka Ecosystem
2:39 Integrating with Kafka Consumers...
3:26 ...using Dekaf
4:12 Dekaf Setup
6:16 kcat Test
6:45 Final Thoughts
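Because Dekaf speaks the Kafka protocol, any standard Kafka client can read a Flow collection as if it were a topic, not just the kcat command from the video. A minimal sketch, assuming the confluent-kafka Python client; the bootstrap address, credentials, and collection name are all placeholders (consult the linked Dekaf docs for the real connection values):

```python
# Read an Estuary Flow collection through Dekaf with a stock Kafka client.
# Assumes the 'confluent-kafka' package; every connection value below is a
# placeholder; see the Dekaf docs for the actual ones.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<dekaf-bootstrap-address>:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "<your-username>",
    "sasl.password": "<your-estuary-token>",
    "group.id": "demo-consumer",
    "auto.offset.reset": "earliest",
})

# A Flow collection appears to the client as a Kafka topic.
consumer.subscribe(["<your-prefix>/<your-collection>"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        print(msg.value())  # collection documents, served as Kafka records
finally:
    consumer.close()
```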

Create a Webhook-to-Snowflake Data Pipeline
Create a complete data pipeline in 3 minutes that captures Square (or any other platform's) webhooks and materializes to Snowflake. With Estuary Flow, you can create endpoints to receive webhook data without setting up and maintaining your own server. Try it out for free at → https://dashboard.estuary.dev/register

Ready for more?
- See our site: https://estuary.dev/
- Learn more about webhooks: https://estuary.dev/blog/webhook-setup/
- Read our webhook capture docs: https://docs.estuary.dev/reference/Connectors/capture-connectors/http-ingest/
- Or our Snowflake materialization docs: https://docs.estuary.dev/reference/Connectors/materialization-connectors/Snowflake/

0:00 Intro
0:19 Set up webhook capture
1:17 Configure webhook in Square
2:06 Create Snowflake materialization
2:48 Outro
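After the materialization is live, a quick query confirms rows are landing. A minimal sketch, assuming the snowflake-connector-python package; all identifiers, including the webhook_events table, are hypothetical (your table name matches the binding you configure):

```python
# Confirm webhook events are landing in Snowflake.
# Assumes 'snowflake-connector-python'; all identifiers are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<your-account>",
    user="<your-user>",
    password="<your-password>",
    warehouse="<your-warehouse>",
    database="<your-database>",
    schema="<your-schema>",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM webhook_events")  # hypothetical table
    print("events materialized:", cur.fetchone()[0])
finally:
    conn.close()
```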

Estuary | The Right Time Data Platform
Welcome to Estuary, the Right Time Data Platform built for modern data teams. With Estuary, you can move and transform data between hundreds of systems at sub-second latency or in batch, depending on your business needs.
• Capture data from source systems using pre-built, no-code connectors.
• Automatically infer schemas and manage both real-time and historical events in collections.
• Materialize your data to any destination with ease and flexibility.
• Choose your deployment model: fully SaaS, Bring Your Own Cloud, or private deployment with enterprise-level security.

Start streaming an ocean of data and get going today:
🌊 https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=overview_video

Learn more:
🌐 On our site: https://www.estuary.dev/?utm_source=youtube&utm_medium=social&utm_campaign=flow_overview
📚 In our docs: https://docs.estuary.dev/?utm_source=youtube&utm_medium=social&utm_campaign=flow_overview

Connect with us:
💬 On Slack: https://go.estuary.dev/slack
🧑‍💻 In GitHub: https://github.com/estuary
ℹ️ On LinkedIn: https://www.linkedin.com/company/estuary-tech/

#righttimedata #datapipelines #streamingdata #realtimeanalytics #CDC #dataengineering #Estuary

Capture NetSuite Data Using SuiteAnalytics and Estuary
Learn how to transfer your NetSuite data in minutes using SuiteAnalytics Connect and Estuary. We demo how to set up your capture step by step, covering all the resources you'll need to get your data flowing. Once connected, Estuary Flow ingests your NetSuite data in real time, ready to be streamed into cloud warehouses like Snowflake, BigQuery, and more. Whether you're building a NetSuite to Snowflake pipeline, syncing NetSuite to BigQuery, or integrating with other analytics tools, Estuary lets you do it in minutes.

Looking for more?
- Start building pipelines for free at: https://dashboard.estuary.dev/register
- See Estuary's NetSuite SuiteAnalytics capture connector docs: https://docs.estuary.dev/reference/Connectors/capture-connectors/netsuite-suiteanalytics/
- View materialization options for your captured data: https://docs.estuary.dev/reference/Connectors/materialization-connectors/
- Chat with us in Slack: https://go.estuary.dev/slack

0:00 Intro
0:18 NetSuite setup
4:03 Creating the Estuary capture
6:10 Outro
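If you want to confirm SuiteAnalytics Connect access before configuring the capture, the same credentials can be tested with a generic ODBC client. A minimal smoke-test sketch, assuming the pyodbc package and a locally configured NetSuite ODBC DSN (driver installation and DSN setup are outside this video's scope, and OA_TABLES is SuiteAnalytics Connect's standard metadata catalog):

```python
# Smoke-test SuiteAnalytics Connect credentials over ODBC.
# Assumes 'pyodbc' and a pre-configured NetSuite ODBC DSN; the DSN name,
# user, and password are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=NetSuite;UID=<your-user>;PWD=<your-password>")
cur = conn.cursor()

# Listing a few entries from the metadata catalog proves both
# authentication and schema visibility.
cur.execute("SELECT TABLE_NAME FROM OA_TABLES")
for (name,) in cur.fetchmany(5):
    print(name)
conn.close()
```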

Unify Your Data in Microsoft Fabric with Estuary
Want to get your data into Microsoft Fabric, fast and without writing code? Discover what unified data can do with a Microsoft Fabric integration. We’ll cover what makes this relatively new data platform unique and how you can enhance its capabilities further using Estuary. A step-by-step demo walks through Fabric warehouse connector setup in Estuary so you can get your data flowing.

Interested in more?
- Register for a free Estuary account: https://dashboard.estuary.dev/register
- Learn more about Microsoft Fabric: https://estuary.dev/blog/what-is-microsoft-fabric/
- Find source connectors to go with your Fabric destination: https://docs.estuary.dev/reference/Connectors/capture-connectors/
- Join us on Slack: https://go.estuary.dev/slack

Media resources used in this video are from Pexels and the YouTube Studio Audio Library.

0:00 Introduction
0:26 Microsoft Fabric
1:33 Covering gaps with Estuary
2:33 Beginning connector creation
3:23 Creating a warehouse
3:59 Configuring a service principal
5:59 Creating a storage account
6:55 Wrapping up the connector
7:33 Outro
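The connector authenticates to the Fabric warehouse as a service principal, so before pasting credentials into Estuary it can be worth confirming they can mint a token at all. A minimal sketch, assuming the azure-identity package; the tenant, client, and secret values are placeholders, and the token scope shown is an assumed SQL-endpoint scope, not a value from the video:

```python
# Sanity-check Fabric service principal credentials before connector setup.
# Assumes the 'azure-identity' package; all IDs/secrets are placeholders,
# and the scope below is an assumed SQL endpoint scope, not from the video.
from azure.identity import ClientSecretCredential

cred = ClientSecretCredential(
    tenant_id="<your-tenant-id>",
    client_id="<your-client-id>",
    client_secret="<your-client-secret>",
)

# If this succeeds, the service principal can authenticate; any remaining
# failures are then about Fabric workspace permissions, not credentials.
token = cred.get_token("https://database.windows.net/.default")
print("token acquired, expires at:", token.expires_on)
```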

Estuary Overview
Discover the power of Estuary, a platform built to make creating real-time data pipelines easy. In this overview, we’ll show you how Estuary helps you move data from source to destination in real time, with no coding required.

🌐 Check out our website to learn more about Estuary: https://www.estuary.dev/
➡️ Start building your pipelines for free now: https://dashboard.estuary.dev/register

If you’re curious for more, check out our docs or jump into our community Slack to ask questions!
📚 Explore our docs for detailed guides and tutorials: https://docs.estuary.dev/
💬 Join our Slack community to connect with developers and ask questions: https://estuary-dev.slack.com/

#Estuary #RealtimeETL #DataStreaming #DataOps #dataengineering



