
You’ve committed to Microsoft Fabric. The unified analytics stack. The promise of OneLake, Power BI, Data Factory, and Spark notebooks—all woven together in a single SaaS experience. It’s the future of Microsoft-first data architecture, and it’s here.
But now comes the reality check: your SQL Server data still lives behind a firewall, powering operational systems that can’t afford downtime or lag. You need that data, not just once per day, but as it changes. In sync with reality. Ready for dashboards, data science, or compliance use cases. And unfortunately, Fabric’s native tooling wasn’t built for this kind of velocity.
Here’s the gap: Fabric gives you the power to analyze, model, and visualize—but it doesn’t guarantee that the data is current.
Estuary Flow fills the gap with real-time change data capture (CDC) pipelines that stream SQL Server data directly into your Fabric Warehouse. No delays. No duct tape.
In this guide, you’ll learn:
- Why traditional SQL Server connectors fall short
- How Estuary enables real-time sync with Fabric
- A step-by-step setup using Estuary’s Azure Fabric Warehouse connector
Let’s connect SQL Server to Microsoft Fabric the right way.
What is Microsoft Fabric?
Microsoft Fabric is Microsoft’s all-in-one SaaS data platform designed to unify data engineering, data science, analytics, and business intelligence. It consolidates several Azure services—like Power BI, Synapse Analytics, Data Factory, and OneLake—into a single experience with seamless interoperability.
At the heart of Fabric is OneLake, a unified storage layer that supports both structured and unstructured data. Fabric allows teams to ingest data from a variety of sources, transform it using notebooks or dataflows, and visualize it directly with Power BI—without switching tools or managing complex pipelines.
Key Components of Microsoft Fabric
- OneLake: The single data lake storage layer that underpins the entire Fabric ecosystem.
- Lakehouse & Warehouse: Built-in support for open-table formats and SQL-based analytics.
- Data Factory (Pipelines): For orchestrating ETL/ELT workflows.
- Notebooks & Data Science: Native support for Python, Spark, and machine learning.
- Power BI: Integrated dashboarding and reporting without moving data out of the platform.
Why It Matters for SQL Server Users
If your organization already runs on Microsoft technologies (SQL Server, Azure, Power BI), Fabric provides a modern, centralized platform to:
- Streamline analytics workflows across tools and teams
- Eliminate duplication between operational and reporting environments
- Enable collaboration across engineering and business units
But to unlock its full potential, Fabric needs to be continuously fed with high-fidelity data, especially from operational systems like SQL Server. That’s where Estuary comes in: ensuring real-time syncs into Fabric’s data warehouse layer without the need for slow, batch-based pipelines.
Ways to Connect SQL Server to Microsoft Fabric
Getting data from SQL Server into Microsoft Fabric can be achieved through several methods, ranging from built-in batch pipelines to advanced, real-time CDC streaming with Estuary. Here's how each approach compares:
1. Dataflow Gen2 in Microsoft Fabric
Dataflow Gen2 lets you build low-code ingestion pipelines using Power Query, ideal for light transformation tasks.
- Pros:
  - Integrated into the Fabric UI
  - No-code transformation steps
- Cons:
  - Only supports batch refresh
  - "Incremental refresh" allows for smaller batches, but may not capture all changes
  - Limited scheduling and no CDC
  - Not ideal for frequently changing data, or data that changes without updating a corresponding date field
👉 Best for basic reports where latency isn't critical.
2. Fabric Pipelines (ETL Orchestration)
Fabric’s built-in pipelines offer scheduled ETL capabilities similar to Azure Data Factory.
- Pros:
  - Integrated into the Fabric workspace
  - More control over flow and triggers
- Cons:
  - Still batch-based
  - Complex to scale with changing schemas
  - No native change data capture (CDC)
👉 Best for teams already managing ETL jobs in Fabric.
3. Azure Data Factory with Microsoft Fabric
You can use Azure Data Factory (ADF)—separate from Data Factory in Fabric—to extract SQL Server data and push it into OneLake or a Fabric Warehouse.
- Pros:
  - Mature platform with wide connector support
  - Can automate batch processes
- Cons:
  - More complex setup with Azure permissions
  - No native real-time CDC support
  - Expensive at scale for large datasets
👉 Best for existing ADF users who need batch sync to Fabric.
4. Power BI’s Direct Lake Mode
While not a data movement method, Direct Lake Mode lets Power BI query Fabric data instantly without imports.
- Limitation: You still need to land SQL Server data in Fabric first, usually using batch pipelines.
5. Estuary Flow with Real-Time CDC from SQL Server
Unlike the native batch-centric options, Estuary Flow provides a fully managed real-time pipeline that continuously streams changes from SQL Server into Microsoft Fabric Warehouse using CDC (Change Data Capture).
How It Works
- Uses SQL Server's built-in CDC to read inserts, updates, and deletes
- Captures are defined via Estuary’s web app or CLI
- Changes are streamed to Fabric via the Azure Fabric Warehouse materialization connector
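Under the hood, SQL Server CDC records every tracked insert, update, and delete in per-table change tables that downstream readers poll. As a rough illustration of the kind of change data the connector consumes (the function name below is derived from your capture instance, and `your_table_name` is a placeholder):

```sql
-- Inspect raw CDC output for a tracked table (illustrative; requires CDC to be enabled first).
-- The function name is generated from the capture instance, here 'dbo_your_table_name'.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_your_table_name');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

-- __$operation codes: 1 = delete, 2 = insert, 3 = update (before image), 4 = update (after image)
SELECT __$operation, __$start_lsn, *
FROM cdc.fn_cdc_get_all_changes_dbo_your_table_name(@from_lsn, @to_lsn, N'all');
```

Estuary reads this change stream continuously, so you never need to query these functions yourself; they're shown here only to clarify what "CDC" means at the database level.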
Connector Highlights
- Supports SQL Server 2017+, Azure SQL, RDS, and Google Cloud SQL
- Automatic or manual CDC instance management
- Handles schema changes with automatic capture switching
- Optional features, including:
  - Delta Updates to reduce latency and costs
  - Automatic Change Table Cleanup to manage storage
  - SSH tunneling or IP allowlists for secure network access
Why It’s Better
| Feature | Native Fabric Methods | Estuary Flow |
|---|---|---|
| Real-Time Sync | ❌ | ✅ |
| Change Data Capture | ❌ | ✅ |
| Schema Evolution Support | ⚠️ | ✅ |
| Secure Connectivity (e.g., SSH, Private IP) | ❌ | ✅ |
| Batch + Streaming Hybrid Support | ⚠️ | ✅ |
| Open-Source Option | ❌ | ✅ (via Docker) |
👉 Best for organizations that need low-latency insights, high-volume change data, or robust CDC into Fabric Warehouse.
Next up: a step-by-step walkthrough of building this real-time CDC pipeline from SQL Server to Microsoft Fabric with Estuary Flow, including production-ready configurations.
Real-Time CDC from SQL Server to Fabric with Estuary Flow
To stream real-time data from SQL Server into Microsoft Fabric, you’ll set up a pipeline in Estuary Flow that consists of two parts:
- A Capture connector for SQL Server (source)
- A Materialization connector for Azure Fabric Warehouse (destination)
This walkthrough covers both, step by step.
Part 1: Setting Up the SQL Server CDC Capture in Estuary
Estuary’s SQL Server connector uses Change Data Capture (CDC) to continuously read new inserts, updates, and deletes from your database tables.
Step 1: Enable CDC on Your SQL Server
Log into your SQL Server instance and run the following commands:
```sql
USE your_database;

-- Enable CDC at the database level
EXEC sys.sp_cdc_enable_db;

-- Create a user for Estuary Flow
CREATE LOGIN flow_capture WITH PASSWORD = 'yourStrongPassword!';
CREATE USER flow_capture FOR LOGIN flow_capture;

-- Grant required permissions
GRANT SELECT ON SCHEMA :: dbo TO flow_capture;
GRANT SELECT ON SCHEMA :: cdc TO flow_capture;
GRANT VIEW DATABASE STATE TO flow_capture;

-- Enable CDC on each table you want to capture
EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'your_table_name',
  @role_name = 'flow_capture';
```
Repeat the `sp_cdc_enable_table` command for each table you'd like to capture.
If you're using Azure SQL, RDS, or Cloud SQL, follow the equivalent setup and ensure Estuary has network access.
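Before moving on, it can help to confirm that CDC is actually active. These checks use standard SQL Server catalog views and system procedures (replace `your_database` with your database name):

```sql
-- Verify CDC is enabled at the database level
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'your_database';

-- Verify which tables are tracked by CDC
SELECT name, is_tracked_by_cdc FROM sys.tables WHERE is_tracked_by_cdc = 1;

-- Review active capture instances, gating roles, and tracked columns
EXEC sys.sp_cdc_help_change_data_capture;
```

If a table you expected to see is missing, re-run `sp_cdc_enable_table` for it before creating the capture.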
Step 2: Create a Capture Connector in Estuary
- In the Estuary Flow UI, go to Captures.
- Click + New Capture.
- Search for and select the real-time SQL Server connector.
- Fill in the configuration:
  - Server Address: `host:1433`
  - Database: your database name
  - User: `flow_capture`
  - Password: the password you set above
- Click Next. Estuary will discover the tables with CDC enabled.
🛡️ If your database is behind a firewall, you can use SSH tunneling or allowlist Estuary’s IP addresses.
Step 3: Set Primary Keys (if needed)
Estuary will automatically map your tables to Flow collections. If any table is missing a primary key:
- Click the Specification tab on the collection.
- Set `"key": ["/column_name"]` manually in the JSON editor.
- Click Save and Publish.
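For reference, the relevant part of a collection specification might look like the sketch below. The `order_id` column and its schema are hypothetical; in practice, Estuary discovers the schema from your source table, and the key is a list of JSON pointers into each document:

```json
{
  "schema": {
    "type": "object",
    "properties": {
      "order_id": { "type": "integer" },
      "status": { "type": "string" }
    },
    "required": ["order_id"]
  },
  "key": ["/order_id"]
}
```

Choose a key that uniquely identifies each row, since Estuary uses it to apply updates and deletes correctly downstream.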
Part 2: Setting Up the Microsoft Fabric Warehouse Materialization
Once your data is flowing into Flow collections, the next step is to materialize that data into Microsoft Fabric Warehouse.
Step 4: Create Resources in Azure
To connect Flow to Fabric, you need to set up several Azure components:
1. Fabric Warehouse
- Go to Microsoft Fabric.
- In your workspace, click + New > Warehouse.
- Give it a name and create it.
- Click the More Options (•••) menu and select Settings > Connection string. Copy it.
2. Service Principal (App Registration)
- Open Microsoft Entra Admin Center.
- Go to App registrations > New registration.
- Name your app (e.g., `estuary-flow-sp`) and click Register.
- Copy the Application (client) ID.
- Go to Certificates & secrets > New client secret. Copy the value immediately.
3. Assign Access
- In Fabric, go to the Admin portal.
- Search for "Service principals can use Fabric APIs" and enable it.
- Go back to your Fabric workspace.
- Click Manage Access, add your service principal (client ID), and assign Contributor role.
4. Azure Storage Account
- In the Azure Portal, go to Storage Accounts > + Create.
- Choose a name, region, and settings. Click Create.
- After deployment, go to Access Keys under “Security + Networking” and copy key1.
- Inside the storage account, go to Containers > + Container and create one (e.g., `estuary-staging`).
Step 5: Create a Materialization in Estuary
- In Estuary, go to Materializations > + New Materialization.
- Choose Azure Fabric Warehouse as the connector.
- Fill out the config fields:
  - Client ID: from Entra
  - Client Secret: from the secret you created
  - Warehouse: name of your Fabric warehouse
  - Schema: e.g., `main`
  - Connection String: from your Fabric warehouse settings
  - Storage Account Name: the one you created
  - Storage Account Key: copied from Azure
  - Container Name: name of your Blob container
- Click Next and bind your Flow collections to destination tables.
- Click Save and Publish.
You can also enable advanced features like Delta Updates or configure sync frequency.
Step 6: Validate Your Pipeline
- Check your Fabric dashboard: your destination tables will populate in near real-time.
- Run a test update in SQL Server and verify it appears in Fabric within seconds.
- Use Power BI to visualize or report directly on your freshly streamed data.
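A quick smoke test might look like the following, assuming a hypothetical `dbo.orders` table with `id` and `status` columns, materialized into a `main` schema in the warehouse:

```sql
-- In SQL Server: make a small, identifiable change
UPDATE dbo.orders SET status = 'cdc_smoke_test' WHERE id = 1;

-- In the Fabric Warehouse SQL endpoint, moments later:
SELECT id, status FROM main.orders WHERE id = 1;
-- The row should reflect status = 'cdc_smoke_test' once the change has streamed through
```

Remember to revert the test change afterward if you ran it against a shared environment.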
You’re Done!
With Estuary Flow, you’ve now created a secure, scalable, real-time CDC pipeline from SQL Server to Microsoft Fabric—with no batch lag, no data silos, and no manual refreshes.
Best Practices for Real-Time SQL Server Pipelines into Microsoft Fabric
Setting up a streaming pipeline is just the beginning. To ensure high performance, reliability, and cost-efficiency as your workloads scale, follow these best practices when using Estuary Flow to connect SQL Server to Microsoft Fabric.
1. Enable CDC Thoughtfully on SQL Server
- Start with high-impact tables: Avoid enabling CDC on every table by default. Focus on the ones critical for analytics or monitoring.
- Ensure every captured table has a primary key: Estuary uses primary keys to uniquely identify records and handle updates properly.
- Use Automatic Capture Instance Management for easier schema evolution—but only if you can safely grant db_owner to the capture user.
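To audit which candidate tables lack a primary key before enabling CDC on them, you can query the standard catalog views:

```sql
-- List user tables without a primary key
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE NOT EXISTS (
    SELECT 1 FROM sys.indexes i
    WHERE i.object_id = t.object_id
      AND i.is_primary_key = 1
);
```

Tables surfaced by this query will need a key defined manually in the Flow collection spec (as in Step 3 above) before they can be captured reliably.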
2. Control Schema Evolution Gracefully
- If you add a column to a table, know that SQL Server CDC won’t capture it by default.
- Estuary Flow detects new capture instances and automatically switches to them at the right time.
- Prefer using Automatic Capture Instance Management if your schema evolves frequently.
💡 Tip: You can view and edit the collection schema inside Estuary to validate key mappings or add transformations.
3. Optimize Network Access Securely
- For production workloads, use SSH tunneling or private IP allowlists to connect Estuary to SQL Server or Azure securely.
- If running on Azure SQL or RDS, apply least privilege firewall rules or VPC peering where possible.
- Ensure encryption is enabled on both the SQL Server and the Azure Storage layer used for staging.
4. Choose Delta Updates Carefully
- Delta updates improve performance by skipping document reads, but only use them when:
  - Every event has a guaranteed unique key.
  - Your workflow doesn't require full document reductions or upserts.
- For high-frequency change streams with immutable rows (e.g., append-only logs), delta updates can cut cost and latency significantly.
5. Plan for Backfill and Downtime Scenarios
- By default, SQL Server CDC keeps change data for 3 days. If your pipeline is down longer than that, changes could be lost.
- Enable Automatic Change Table Cleanup only if:
  - You can afford temporary spikes in storage.
  - You've granted `db_owner` to the capture user.
- For critical pipelines, monitor storage health and flow checkpoints regularly via Estuary’s OpenMetrics API.
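If you anticipate planned downtime longer than the default retention window, you can inspect and extend it with SQL Server's built-in CDC job procedures (retention is expressed in minutes, so 4320 minutes equals the 3-day default):

```sql
-- Review current capture and cleanup job settings, including retention
EXEC sys.sp_cdc_help_jobs;

-- Extend change-table retention to 7 days (10080 minutes)
EXEC sys.sp_cdc_change_job @job_type = 'cleanup', @retention = 10080;
```

Longer retention means larger change tables, so balance this against the storage considerations above.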
6. Use Namespaces and Collection Naming Conventions
- Name Flow collections clearly to match the source table and schema.
  - Example: `sqlserver-prod/customer_orders` → `fabric/main/customer_orders`
- This simplifies debugging, downstream transformations, and governance.
7. Monitor and Alert Early
- Use Estuary’s metrics endpoint to integrate with Prometheus, Datadog, or other observability tools.
- Set up alerts for:
  - Delayed capture offsets
  - Materialization errors
  - Checkpoint gaps or restarts
Following these best practices will help you build pipelines that are resilient, low-latency, and production-ready, whether you’re streaming transactional data into dashboards or syncing updates for ML workloads.
Conclusion: Bring Your SQL Server Data to Life in Microsoft Fabric
Microsoft Fabric gives you a powerful, unified environment for building modern data applications—but its native SQL Server integration stops short of real-time. For organizations that need low-latency insights, operational observability, or just want to eliminate batch-induced blind spots, the gap is clear.
Estuary Flow bridges that gap.
With built-in support for SQL Server CDC, automatic schema evolution, secure networking, and direct materialization to Microsoft Fabric Warehouse, Estuary lets you:
- Stream changes from SQL Server in real time
- Simplify architecture by eliminating brittle batch jobs
- Confidently scale from prototype to production
Whether you’re syncing customer events, financial transactions, IoT sensor data, or application logs—Estuary ensures your Microsoft Fabric dashboards are powered by live, trustworthy data.
Next Steps
- Try it yourself: Sign up for Estuary Flow — free to get started
- Explore documentation: SQL Server connector | Azure Fabric connector
- Need help? Book a demo with a solutions engineer

About the author
Team Estuary is a group of engineers, product experts, and data strategists building the future of real-time and batch data integration. We write to share technical insights, industry trends, and practical guides.
