MongoDB to Snowflake in real-time (no Debezium)
In this video, Jeff from Estuary walks you through how to move data from MongoDB to Snowflake using Estuary, a real-time ETL platform. Learn the key benefits of using Estuary, including low-latency Change Data Capture (CDC) and automatic unpacking of nested documents. You'll also see a step-by-step guide to setting up a MongoDB Atlas database and creating a real-time data pipeline with Estuary.
Key features covered:
- Real-time data replication from MongoDB to Snowflake
- Low-latency data movement and automatic flattening of nested documents
- Backfilling data and setting up materializations to Snowflake in just a few clicks
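To make the "automatic flattening of nested documents" idea concrete, here is a minimal illustrative sketch (not Estuary's actual implementation) of how a nested MongoDB document can be flattened into dot-separated column names suitable for a warehouse table. The document shape and field names are made up for the example.

```python
# Illustrative sketch only: Estuary performs this unpacking automatically.
# This just shows the kind of transformation "flattening" refers to.
# The document structure and field names below are hypothetical.

def flatten(doc, prefix=""):
    """Recursively flatten nested dicts into dot-separated column names."""
    flat = {}
    for key, value in doc.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

order = {
    "_id": "abc123",
    "customer": {"name": "Ada", "address": {"city": "London"}},
    "total": 42.5,
}
print(flatten(order))
# {'_id': 'abc123', 'customer.name': 'Ada', 'customer.address.city': 'London', 'total': 42.5}
```

Each nested object becomes a set of top-level columns, which is what lets a document land in Snowflake as ordinary queryable fields rather than a single JSON blob.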
#MongodbtoSnowflake #changedatacapture
If you have any questions, feel free to join our community Slack. Start building real-time data pipelines with Estuary today!
Sign up for a free account: https://dashboard.estuary.dev/register
Join our Slack community: https://estuary-dev.slack.com/join/shared_invite/zt-86nal6yr-VPbv~YfZE9Q~6Zl~gmZdFQ#/shared-invite/email
Blog: https://estuary.dev/mongodb-to-snowflake/
0:00 – Introduction: Moving Data to Snowflake with Estuary
0:12 – Key Benefits of Using Estuary: Real-Time Data Integration
1:18 – Automatic Flattening of Nested Data
2:08 – Testing Connection to the MongoDB Source
2:25 – Saving and Publishing the Real-Time Pipeline
2:54 – Sending Data to Snowflake and Other Destinations
3:12 – Real-Time Backfill and Data Materialization to Snowflake
More videos

Real-time CDC with MongoDB and Estuary in 3 minutes
This tutorial demonstrates how to create a real-time change data capture (CDC) pipeline from MongoDB using Estuary. It covers setting up MongoDB Atlas, configuring Estuary, and monitoring data replication in real time.
#MongodbCDC #Changedatacapture
Start building for free at: https://dashboard.estuary.dev/register
Blog post on MongoDB CDC: https://estuary.dev/mongodb-change-data-capture/
0:00 – Introduction: Real-Time CDC Pipeline from MongoDB using Estuary
0:07 – Provisioning MongoDB Atlas
0:56 – Creating a Real-Time CDC Pipeline in Estuary
1:17 – Discovering Database Objects for Replication
1:47 – Saving and Publishing the CDC Pipeline
2:36 – Inserting a New Record in MongoDB
3:01 – Verifying Record Update in Estuary

How to Stream Data into Snowflake
Ingest data into a Snowflake warehouse using real-time Snowpipe Streaming or batch COPY INTO commands. Estuary makes Snowflake integration simple with pre-built no-code connectors.
Following along? Find the copy/pasteable commands in Estuary’s Snowflake docs: https://docs.estuary.dev/reference/Connectors/materialization-connectors/Snowflake/
- Set up your first data pipeline for free at Estuary: https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=snowflake_ingestion
- Learn more about Estuary’s Snowflake capabilities: https://estuary.dev/solutions/technology/real-time-snowflake-streaming/
- Read the complete guide to Snowpipe Streaming: https://estuary.dev/blog/snowpipe-streaming-fast-snowflake-ingestion/
- Discover how Snowflake fared in Estuary’s Data Warehouse Benchmark: https://estuary.dev/data-warehouse-benchmark-report/
- Download the Snowflake Ingestion Playbook: https://estuary.dev/snowflake-ingestion-whitepaper/
FAQ
1. What is the fastest way to load data into Snowflake? Snowpipe Streaming with row-based ingestion. In Estuary, you can enable it per table using Delta Updates.
2. Why use key pair authentication for Snowflake? It provides strong security and short-lived tokens, and it is Snowflake’s recommended approach for service integrations like Estuary.
3. Can I mix real-time and batch ingestion in the same pipeline? Yes. With Estuary’s Snowflake connector, you can run some tables in batch (COPY INTO or Snowpipe) and others in real time with Snowpipe Streaming.
Media resources used in this video are from Pexels, Canva, and the YouTube Studio Audio Library.
0:00 Introduction
1:05 Snowflake concerns
1:51 Ingestion options
3:23 Beginning the demo
3:47 Create Snowflake resources
4:28 User auth setup
5:17 Estuary connector config
6:44 Customization options
8:07 Wrapping up

Dekaf: How to Use Kafka Minus the Kafka
Have you ever wanted Kafka’s real-time pub/sub benefits without implementing and maintaining a whole Kafka ecosystem yourself? Learn how with Dekaf. We’ll cover some Kafka basics to help explain how Estuary’s Kafka API compatibility layer fits seamlessly into a modern data architecture.
- Register for a free Estuary account: https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=dekaf_video
- Learn more about Dekaf: https://docs.estuary.dev/reference/Connectors/dekaf/
- Find the example kcat command: https://docs.estuary.dev/guides/dekaf_reading_collections_from_kafka/#2-set-up-your-kafka-client
- Join us on Slack: https://go.estuary.dev/slack
Media resources used in this video are from Pexels, Canva, and the YouTube Studio Audio Library.
0:00 Introduction
0:30 What is Kafka?
1:04 The Kafka Ecosystem
2:39 Integrating with Kafka Consumers...
3:26 ...using Dekaf
4:12 Dekaf Setup
6:16 kcat Test
6:45 Final Thoughts

Estuary | The Right Time Data Platform
Welcome to Estuary, the Right Time Data Platform built for modern data teams. With Estuary, you can move and transform data between hundreds of systems at sub-second latency or in batch, depending on your business needs.
• Capture data from source systems using pre-built, no-code connectors.
• Automatically infer schemas and manage both real-time and historical events in collections.
• Materialize your data to any destination with ease and flexibility.
• Choose your deployment model: fully SaaS, Bring Your Own Cloud, or private deployment with enterprise-level security.
Start streaming an ocean of data and get going today:
🌊 https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=overview_video
Learn more:
🌐 On our site: https://www.estuary.dev/?utm_source=youtube&utm_medium=social&utm_campaign=flow_overview
📚 In our docs: https://docs.estuary.dev/?utm_source=youtube&utm_medium=social&utm_campaign=flow_overview
Connect with us:
💬 On Slack: https://go.estuary.dev/slack
🧑💻 In GitHub: https://github.com/estuary
ℹ️ On LinkedIn: https://www.linkedin.com/company/estuary-tech/
#righttimedata #datapipelines #streamingdata #realtimeanalytics #CDC #dataengineering #Estuary

Estuary Overview
Discover the power of Estuary, a platform built to make creating real-time data pipelines easy. In this overview, we’ll show you how Estuary helps you move data from source to destination in real time, with no coding required.
🌐 Check out our website to learn more about Estuary: https://www.estuary.dev/
➡️ Start building your pipelines for free now: https://dashboard.estuary.dev/register
If you’re curious for more, check out our docs or jump into our community Slack to ask questions!
📚 Explore our docs for detailed guides and tutorials: https://docs.estuary.dev/
💬 Join our Slack community to connect with developers and ask questions: https://estuary-dev.slack.com/
#Estuary #RealtimeETL #DataStreaming #DataOps #dataengineering

Estuary 101: How To Build Right-Time Data Pipelines
Join hosts Dani and Zulf for a fast-paced walkthrough of how to design and ship right-time data pipelines with Estuary. In this session, you’ll get:
- Context: What “right-time” really means, where Estuary fits among batch vs. streaming and managed vs. self-hosted options, and why unified ingestion reduces cost and complexity.
- Live End-to-End Demo: Connect CDC sources, apply declarative transformations, and materialize data simultaneously into a warehouse, analytical engines, and object storage—plus a look at observability, error recovery, and real-world scenarios like schema drift and backfills.
- Live Q&A: Ask about your specific stack, pipeline designs, and how to scale Estuary for enterprise workloads.
Perfect for data and analytics engineers, architects, and platform owners who want fresher data with fewer moving parts.

What’s Next for Data Warehouses? Lessons from Our Benchmark and Emerging Trends
Dani and Ben talk about key findings on performance ceilings, cost traps, and failure modes, and explore the major trends reshaping data warehouse architecture, including:
- Separation of Compute & Storage: How Snowflake Gen2, Databricks serverless, and open table formats like Iceberg are changing the game.
- Lakehouse Reality Check: What’s working for teams adopting Iceberg, schema evolution patterns, and lake-native pipelines.
- Flexibility Over Centralization: Moving beyond “one warehouse to rule them all.”

Capture Data from Oracle Using CDC (Docker Demo)
Don’t silo your data in Oracle: learn how to replicate it to a destination of your choice with CDC. We’ll cover archive log configuration and Estuary setup with a demo Oracle instance.
Follow along! This example project is available at: https://github.com/estuary/examples/tree/main/oracle-capture
- Try it out for free at Estuary: https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=oracle_capture
- Reference Estuary’s Oracle docs, including instructions for non-container databases: https://docs.estuary.dev/reference/Connectors/capture-connectors/OracleDB/
- Have questions? Contact us on Slack: https://go.estuary.dev/slack
Media resources used in this video are from Pexels, Canva, and the YouTube Studio Audio Library.
0:00 Introduction
0:40 Oracle & CDC
2:30 Demo: Project overview
5:42 Demo: Run container
7:22 Demo: Estuary setup
8:50 Wrapping up

Stream CRM Data from HubSpot to MotherDuck
Need blazing-fast analytics to keep up with your customers? Try sending your HubSpot data to MotherDuck: Estuary makes the process simple and straightforward.
Register for free at: https://dashboard.estuary.dev/register/?utm_source=youtube&utm_medium=social&utm_campaign=hubspot_motherduck
Ready to dive in deeper? Try these resources:
📄 HubSpot capture connector docs: https://docs.estuary.dev/reference/Connectors/capture-connectors/HubSpot-real-time/
🐤 MotherDuck materialization docs: https://docs.estuary.dev/reference/Connectors/materialization-connectors/motherduck/
💬 Our Slack community: https://go.estuary.dev/slack
Music sourced from the YouTube Studio Audio Library
0:00 Introduction
0:20 Starting the pipeline
1:01 (Optional) Create HubSpot access token
1:39 Finish capture
2:15 MotherDuck materialization
2:51 Create staging bucket
4:40 MotherDuck credentials
5:20 Complete pipeline & wrap up

Save Webhook Data to Databricks in Real Time
Learn how to stream incoming webhook data to Databricks without setting up and maintaining your own server for webhook captures. Follow along with our 3-minute demo and try out Estuary for free → https://dashboard.estuary.dev/register
Learn more from our:
- Website: https://estuary.dev/
- Webhook capture docs: https://docs.estuary.dev/reference/Connectors/capture-connectors/http-ingest/
- Databricks materialization docs: https://docs.estuary.dev/reference/Connectors/materialization-connectors/databricks/
- Blog article on webhook setup: https://estuary.dev/blog/webhook-setup/
0:00 Intro
0:19 Set up webhook capture
1:19 Configure webhook in Square
2:08 Create Databricks materialization
2:43 Outro

Capture NetSuite Data Using SuiteAnalytics and Estuary
Learn how to transfer your NetSuite data in minutes using SuiteAnalytics Connect and Estuary. We demo how to set up your capture step by step, covering all the resources you'll need to get your data flowing. Once connected, Estuary ingests your NetSuite data in real time — ready to be streamed into cloud warehouses like Snowflake, BigQuery, and more. Whether you're building a NetSuite to Snowflake pipeline, syncing NetSuite to BigQuery, or integrating with other analytics tools, Estuary lets you do it in minutes.
Looking for more?
- Start building pipelines for free at: https://dashboard.estuary.dev/register
- See Estuary's NetSuite SuiteAnalytics capture connector docs: https://docs.estuary.dev/reference/Connectors/capture-connectors/netsuite-suiteanalytics/
- View materialization options for your captured data: https://docs.estuary.dev/reference/Connectors/materialization-connectors/
- Chat with us in Slack: https://go.estuary.dev/slack
0:00 Intro
0:18 NetSuite setup
4:03 Creating the Estuary capture
6:10 Outro

Unify Your Data in Microsoft Fabric with Estuary
Want to get your data into Microsoft Fabric—fast and without writing code? Discover what unified data can do with a Microsoft Fabric integration. We’ll cover what makes this relatively new data platform unique and how you can enhance its capabilities further using Estuary. A step-by-step demo walks through Fabric warehouse connector setup in Estuary so you can get your data flowing.
Interested in more?
- Register for a free Estuary account: https://dashboard.estuary.dev/register
- Learn more about Microsoft Fabric: https://estuary.dev/blog/what-is-microsoft-fabric/
- Find source connectors to go with your Fabric destination: https://docs.estuary.dev/reference/Connectors/capture-connectors/
- Join us on Slack: https://go.estuary.dev/slack
Media resources used in this video are from Pexels and the YouTube Studio Audio Library.
0:00 Introduction
0:26 Microsoft Fabric
1:33 Covering gaps with Estuary
2:33 Beginning connector creation
3:23 Creating a warehouse
3:59 Configuring a service principal
5:59 Creating a storage account
6:55 Wrapping up the connector
7:33 Outro

Streaming Data Lakehouse Tutorial: MongoDB to Apache Iceberg
Learn how to load MongoDB data into Apache Iceberg tables using Estuary. In this step-by-step demo, we show you how to:
1. Set up a MongoDB source and configure secure connections.
2. Create real-time pipelines to load data into Amazon S3.
3. Leverage the AWS S3 Iceberg connector with AWS Glue for table cataloging.
Estuary simplifies real-time data integration with powerful features like advanced security connections, automated materialization, and streamlined pipeline management. Whether you're handling transactional data or syncing complex data streams, Estuary has you covered.
👉 Try Estuary: https://dashboard.estuary.dev/register
👉 Read the documentation: https://docs.estuary.dev/
#MongoDBtoIceberg
0:00 - Introduction: Overview of the demo and Estuary
0:07 - Step 1: Setting Up MongoDB Source: Configuring MongoDB as the data source
0:44 - Step 2: Reviewing Collections: Selecting collections to sync
1:03 - Step 3: Setting Up S3 Destination: Configuring the AWS S3 Iceberg connector
1:37 - Step 4: Testing and Publishing Pipeline: Testing the connection and publishing the pipeline
2:07 - Final Verification: Verifying MongoDB data in S3 as Iceberg tables

PostgreSQL to Iceberg - Streaming Lakehouse Foundations
In this step-by-step tutorial, we demonstrate how to stream real-time data from a PostgreSQL database into Iceberg tables using change data capture (CDC) with Estuary. Learn how to capture, ingest, and materialize data using Estuary's seamless integration. This demo uses a sales database to showcase how changes in a PostgreSQL table are tracked and replicated into an Iceberg table stored in AWS S3.
Check out Estuary's Iceberg integration: https://estuary.dev/destination/s3-iceberg/
Join Estuary's community Slack: https://estuary-dev.slack.com/join/shared_invite/zt-86nal6yr-VPbv~YfZE9Q~6Zl~gmZdFQ#/shared-invite/email
00:00 - Introduction: Streaming Data from Postgres to Iceberg
00:18 - Postgres Sales Database Overview
01:08 - Starting Change Data Capture (CDC) with Estuary
02:09 - Materializing Data into Apache Iceberg
04:17 - Backfilling Data into Iceberg
05:21 - Querying Iceberg Tables with Python
06:10 - Conclusion: Demo Recap
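Conceptually, CDC replication like the pipeline above reduces a stream of change events into the table's current state. This is a hedged, self-contained sketch of that idea only — not Estuary's implementation — with a made-up event shape keyed by primary key.

```python
# Illustrative sketch: how a stream of CDC events (insert/update/delete,
# keyed by primary key) folds into the current table state. The event
# dict shape here is hypothetical, not Estuary's wire format.

def apply_cdc(state, events):
    """Apply change events in order to a dict keyed by primary key."""
    for ev in events:
        if ev["op"] in ("insert", "update"):
            state[ev["key"]] = ev["row"]   # upsert the latest row image
        elif ev["op"] == "delete":
            state.pop(ev["key"], None)     # drop the row if present
    return state

events = [
    {"op": "insert", "key": 1, "row": {"id": 1, "amount": 100}},
    {"op": "update", "key": 1, "row": {"id": 1, "amount": 150}},
    {"op": "insert", "key": 2, "row": {"id": 2, "amount": 75}},
    {"op": "delete", "key": 2},
]
print(apply_cdc({}, events))
# {1: {'id': 1, 'amount': 150}}
```

Replaying events in commit order like this is why the destination table converges to the same state as the source, and why a backfill (replaying history from the start) produces a consistent snapshot.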

PostgreSQL Data Capture Step-by-Step Tutorial
This is a tutorial on how to set up a PostgreSQL data capture using Estuary.
Connector docs: https://docs.estuary.dev/reference/Connectors/capture-connectors/PostgreSQL
0:00 Intro
0:52 PostgreSQL Database Set-up
2:45 Estuary Set-up
3:00 Endpoint Config
3:55 Troubleshooting Tip 1: No collection
5:14 Troubleshooting Tip 2: Connection string issue
Try Estuary free: https://www.estuary.dev/
Join our Slack channel with a community of developers: https://estuary-dev.slack.com/
PostgreSQL is an object-relational database management system (ORDBMS) based on POSTGRES, Version 4.2, developed at the University of California at Berkeley Computer Science Department. POSTGRES pioneered many concepts that only became available in some commercial database systems much later. PostgreSQL is an open-source descendant of this original Berkeley code. It supports a large part of the SQL standard and offers many modern features:
- complex queries
- foreign keys
- triggers
- updatable views
- transactional integrity
- multi-version concurrency control
PostgreSQL can also be extended by the user in many ways, for example by adding new:
- data types
- functions
- operators
- aggregate functions
- index methods
- procedural languages
And because of its liberal license, PostgreSQL can be used, modified, and distributed by anyone free of charge for any purpose, be it private, commercial, or academic.
#data #postgres #postgresql #datapipeline

Seamless Data Integration, Unlimited Potential
Discover the simplest way to connect and move your data. Get hands-on for free, or schedule a demo to see the possibilities for your team.


