
What is Fivetran?
Fivetran is a closed-source ETL/ELT platform that moves data from sources to destinations. It was one of the early established players in the modern data stack: a decade ago, startups chose Fivetran for its ease of use and because the alternatives on the market were limited and dated. Many companies that adopted Fivetran are now exploring alternatives like Estuary Flow for more predictable costs, better performance, greater flexibility, and higher connector quality.
This comprehensive guide walks you through migrating from Fivetran to Estuary, with a focus on popular source-destination combinations like AWS RDS and Snowflake.
Understanding Fivetran's Limitations and Estuary's Advantages
While Fivetran has been a popular choice for many organizations in the past, several limitations have driven companies to seek alternatives: row-based rather than volume-based pricing, sudden changes to its pricing structure instead of adherence to contractually agreed pricing, preventable errors such as null pointer exceptions, skipped syncs, and internal server errors, and untraceable errors like "unknown job error." Estuary, by contrast, does not change pricing on the fly and guarantees 99.99% uptime.
💡 See why Forward switched from Fivetran to Estuary and cut down engineering time while gaining real-time flexibility → Read the case study
Rising Unpredictable Costs of Fivetran: Why Users Are Looking for Alternatives
Fivetran's pricing model is based on Monthly Active Rows (MAR), which counts unique rows modified or added within a month. This pricing model has introduced several challenges and is antithetical to the CDC (Change Data Capture) philosophy, which is the foundation of data streaming.
Due to how Fivetran normalizes data models before loading them into a destination, one update in a source table could fan out into multiple updates in the destination, increasing the monthly MAR count.
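To make the fan-out effect concrete, here is a minimal sketch with hypothetical numbers (the 4x fan-out factor and update volume are illustrative, not measured figures):

```python
# Illustrative sketch (hypothetical numbers): how normalization fan-out
# can multiply Monthly Active Rows (MAR) under row-based pricing.

def billed_mar(source_updates: int, fanout: int) -> int:
    """Each source update touches `fanout` normalized destination rows,
    and every touched row counts toward the monthly MAR total."""
    return source_updates * fanout

# Suppose 1M order updates per month, each fanning out into 4 normalized
# destination tables (e.g., orders, order_items, addresses, audit).
source_updates = 1_000_000
mar_without_fanout = billed_mar(source_updates, 1)
mar_with_fanout = billed_mar(source_updates, 4)

print(f"Billed MAR without fan-out: {mar_without_fanout:,}")
print(f"Billed MAR with 4x fan-out: {mar_with_fanout:,}")
```

The same logical change is billed four times over, which is why MAR-based invoices can grow much faster than the actual change volume at the source.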
Fivetran has repeatedly updated its pricing structure without notice. Its pricing model has undergone three major overhauls since 2023, each of which significantly increased customer costs while driving corporate profits.
1. 2023 Account-level MAR Consolidation
In 2023, Fivetran started phasing out its account-wide Monthly Active Row (MAR) discount structure. Previously, customers benefited from aggregated discounts across all connectors; after the change, businesses had to maintain high volumes on each individual connector to qualify for tiered discounts. The shift coincided with Fivetran surpassing $200M in annual revenue run rate, while customers reported dramatic cost increases of 40-60% for multi-connector implementations. The timing suggested a strategic move to boost revenue at the expense of customer affordability, particularly for organizations using diverse data sources.
2. 2024 Transformation Cost Restructuring
The introduction of dbt Core transformation pricing at $2 per model run sent shockwaves through mid-market companies. One SaaS provider reported that its $8K/month bill ballooned to $23K after migrating 150 models to production.
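A quick back-of-the-envelope check shows how those quoted figures add up; the run cadence below is an assumption chosen to be consistent with them:

```python
# Back-of-the-envelope check of the quoted figures.
COST_PER_MODEL_RUN = 2          # USD, per the 2024 pricing change
models = 150
runs_per_model_per_month = 50   # assumption: roughly daily-plus scheduling

transformation_cost = models * runs_per_model_per_month * COST_PER_MODEL_RUN
base_bill = 8_000               # the provider's prior monthly bill

print(f"Added transformation cost: ${transformation_cost:,}/month")
print(f"New monthly total: ${base_bill + transformation_cost:,}")
```

At $2 per run, even a modest schedule across 150 models adds five figures a month, which matches the jump from $8K to $23K reported above.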
3. 2025 Connector-Level Pricing Mandate
The March 2025 update eliminated account-wide MAR aggregation, applying discounts only at individual connector levels. Reddit users report 80-120% cost spikes for companies using 5+ connectors, with one enterprise data team facing a $14K to $31K monthly increase. This strategic pricing shift catapulted Fivetran's quarterly revenue beyond the $200M mark. However, it left many startups in a precarious position, bound by existing contracts while facing rapidly escalating costs threatening their operational budgets and growth plans.
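The following sketch shows why removing account-wide aggregation raises costs even when total volume is unchanged. The tier thresholds and rates are purely illustrative, not Fivetran's actual price book:

```python
# Hypothetical tiered-discount model: moving discounts from the account
# level to the connector level raises the bill at identical total volume.

TIERS = [                 # (monthly MAR up to this cap, USD per 1M MAR)
    (5_000_000, 500),
    (50_000_000, 350),
    (float("inf"), 200),
]

def cost(mar: int) -> float:
    """Marginal tiered pricing over a MAR volume."""
    total, prev_cap = 0.0, 0
    for cap, price_per_million in TIERS:
        band = min(mar, cap) - prev_cap
        if band <= 0:
            break
        total += band / 1_000_000 * price_per_million
        prev_cap = cap
    return total

connectors = [3_000_000] * 5                        # five connectors, 3M MAR each

account_level = cost(sum(connectors))               # volumes pooled before tiering
connector_level = sum(cost(c) for c in connectors)  # each connector priced alone

print(f"Account-level tiering:   ${account_level:,.0f}")
print(f"Connector-level tiering: ${connector_level:,.0f}")
```

Pooled, 15M MAR reaches the cheaper tiers; priced per connector, every 3M stays in the most expensive band, so the same workload costs 25% more under these example tiers.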
How to Migrate from Fivetran: Batch and Real-time Strategies
The two most popular data pipeline strategies are batch processing and real-time streaming. This section will guide you through migrating your existing data workflows—whether they rely on periodic batch jobs or continuous data streams.
Batch Migration from Fivetran to Estuary (Step-by-Step)
In batch processing, a seamless migration is key to avoiding data loss or disruption. First, build and configure the Estuary data flow by replicating your Fivetran source and destination connections. Next, thoroughly test for data consistency and performance. Once validated, pause the existing Fivetran workflow to prevent new batches. Finally, fully decommission Fivetran. This approach minimizes risk and ensures a smooth transition.
Step 1: Setup Source Configuration in Estuary Flow
- Navigate to the Sources section and create a new capture based on your data source (e.g., Postgres database)
- Plug in relevant mandatory configuration details, including host, username, password, etc.
- Once the mandatory fields are filled in, you can proceed to the next tab and choose the tables that should be routed to the destination. If you want to configure advanced features like SSH tunneling or Backfill Chunk Size, scroll down and fill them out.
- Upon completing the configuration, save and publish the source.
Step 2: Setup Destination Configuration in Estuary Flow
- When setting up your materialization in Estuary Flow, navigate to the “destination” configuration screen.
- In “Endpoint Config,” fill in the details to mimic your existing Fivetran destination configuration.
- Under the “Endpoint Config” section, make sure to map the corresponding source (capture) for this destination (materialization).
- Click Save and Publish.
Step 3: Verification
Once the live pipeline is running and the first batch has been inserted, check whether the expected schema(s) and table(s) were created. Also verify the accuracy of the data inserted into the destination table(s).
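A simple way to structure that verification is to compare row counts and per-row checksums between source and destination extracts. This is a minimal sketch: in practice you would fetch the rows with your database drivers, whereas the sample data here is inlined so the logic stands alone:

```python
# Minimal verification sketch: compare row counts and row-level checksums
# between a source extract and a destination extract.
import hashlib

def row_digest(row: dict) -> str:
    """Column-order-independent checksum of a row's key/value pairs."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode()).hexdigest()

def compare(source_rows, dest_rows) -> dict:
    src = {row_digest(r) for r in source_rows}
    dst = {row_digest(r) for r in dest_rows}
    return {
        "source_count": len(source_rows),
        "dest_count": len(dest_rows),
        "missing_in_dest": len(src - dst),
        "unexpected_in_dest": len(dst - src),
    }

# Inlined sample rows standing in for real query results.
source = [{"id": 1, "amount": 10}, {"id": 2, "amount": 25}]
dest   = [{"id": 1, "amount": 10}, {"id": 2, "amount": 25}]

report = compare(source, dest)
print(report)  # all difference counts should be zero after a clean backfill
```

Any nonzero `missing_in_dest` or `unexpected_in_dest` is a signal to investigate before pausing Fivetran.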
Step 4: Freeze Fivetran Syncs
Now that your Estuary pipeline is live and includes free data backfilling, you can confidently pause your Fivetran syncs without worrying about data loss.
Zero-Downtime Streaming Migration from Fivetran to Estuary
Many organizations rely on costly tools like Fivetran to stream data changes from operational databases like MongoDB or PostgreSQL to cloud data warehouses like Databricks or Snowflake for real-time analytics. However, limitations such as higher latency (often minutes) and less robust schema evolution handling can hinder truly real-time use cases.
Imagine your company uses Fivetran to stream production data from PostgreSQL to Snowflake. This pipeline feeds critical dashboards and operational reports, making any downtime or data loss unacceptable. Engineers at Estuary understand the negative implications of streaming downtime.
This section demonstrates how to migrate streaming pipelines to Estuary Flow to achieve sub-100ms latency, enhance operational reliability, and leverage automated schema evolution, all while maintaining uninterrupted data flow to Snowflake.
Step 1: Set Up Parallel Pipeline in Estuary Flow
- Access Estuary Dashboard
- Navigate to the Estuary Flow login page, enter your credentials, and sign in.
- Once logged in, you'll see the main Estuary Flow dashboard with options for Sources, Collections, and Materializations.
- Create a New Capture
- On the left-hand navigation panel, locate and click on Sources to view your available source connections.
- At the top-right corner of the Sources page, click on the Source button to add a new data source.
- A connector catalog window will appear. Select the connector type from this catalog that matches the source type currently configured in your Fivetran pipeline.
- Provide a clear, descriptive name for your new capture in the text box. Use a format that identifies the purpose or origin, such as fivetran_migration_[source_name]
- Configure Source Connection
- Follow these detailed instructions to set up the connection to your source system, ensuring the settings match precisely those used by your existing Fivetran source.
- For database sources:
- Hostname: The network address or hostname where your database is hosted.
- Port: The specific port number your database listens on (typical defaults are 5432 for PostgreSQL, 3306 for MySQL, and 1433 for SQL Server).
- Database Name: The exact database name you wish to connect to.
- Username: The username associated with your database credentials.
- Password: The password corresponding to the username you've provided.
- Set the necessary replication parameters according to the requirements of your source database type.
- For example, if you use PostgreSQL, you may need to specify the replication slot settings configured previously in Fivetran.
- Once all required fields and replication parameters are correctly entered and verified, click Next to continue to the next step.
- Configure Capture Settings
- In the Collection Names section, carefully review and verify the tables or collections included in your data capture. Ensure these collections match the tables or collections your existing Fivetran pipeline captures.
- Under Schema Evolution, adjust the following options according to your source schema requirements:
- Enable the automatic addition of new collections if you want any new tables or collections added to your source system to be automatically captured without manual intervention.
- Enable Automatically keep schemas up to date if your source schema regularly evolves. This setting ensures that changes such as new columns or updated data types are dynamically recognized and captured.
- Once you’ve verified your settings and selections, click Next to proceed to the Preview step.
- Preview and Create Capture
- Review the sample data preview to ensure it matches the expected output and looks correct.
- If everything appears as it should, click Create Capture to initiate the process.
- Wait for the capture to initialize and begin running. This might take a little time, depending on the data size and system configuration.
- After initialization, check the capture status to confirm it is running smoothly. Ensure that the status is Healthy or Active to verify that everything works as expected.
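If your source is PostgreSQL, the replication parameters mentioned in the connection step assume logical replication is already enabled. The sketch below collects the typical preparatory statements; the user, slot, and publication names are placeholders, and you should confirm the exact requirements against Estuary's PostgreSQL connector documentation:

```python
# Sketch of the PostgreSQL statements typically required before a CDC
# capture can attach. Names below (flow_capture, flow_slot,
# flow_publication) are placeholders, not mandated identifiers.
slot, publication, user = "flow_slot", "flow_publication", "flow_capture"

setup_sql = [
    # wal_level must be 'logical'; changing it requires a server restart.
    "ALTER SYSTEM SET wal_level = logical;",
    # A dedicated user with replication privileges for the capture.
    f"CREATE USER {user} WITH REPLICATION PASSWORD 'secret';",
    f"GRANT SELECT ON ALL TABLES IN SCHEMA public TO {user};",
    # A publication scoping which tables are streamed.
    f"CREATE PUBLICATION {publication} FOR ALL TABLES;",
    # A logical replication slot using the built-in pgoutput plugin.
    f"SELECT pg_create_logical_replication_slot('{slot}', 'pgoutput');",
]

for stmt in setup_sql:
    print(stmt)
```

If Fivetran was already doing CDC against this database, much of this will exist; reuse or recreate the slot and publication deliberately, since two pipelines sharing one slot will conflict.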
Step 2: Set Up Materialization in Estuary
- Create New Materialization
To configure a new materialization in Estuary, follow these steps:
- On the left-hand navigation panel, click on Destinations to open your available destinations.
- At the top-right corner of the Destinations page, click the Destination button to initiate the setup for a new destination.
- In the destination selection window that appears, select the same destination type currently used by your Fivetran pipeline.
- Provide a clear, meaningful name for your new materialization. For consistency and easy identification, use the naming format:
fivetran_migration_[destination_name]
- Configure Destination Connection
- Follow these detailed instructions to configure the connection to your destination system accurately:
- Enter the required connection details for your destination carefully, matching your current Fivetran settings. Typically, this includes:
- Hostname: The network address or hostname of your destination system.
- Port: The port number your destination database or service listens on.
- Database Name: The database name you'll use for storing data.
- Username: The username associated with your database or service credentials.
- Password: The password corresponding to the username provided above.
- After confirming all details are correctly entered, click Next to continue the configuration.
- Set the Sync Frequency to "0s" in the Sync Schedule section. This configuration enables real-time synchronization, allowing data updates to occur continuously and without noticeable delay.
- Finalize and Start Materialization
- Review the configuration summary to ensure all the details are correct and up to date with your setup.
- Once you're satisfied with the configuration, click Create Materialization to begin the process.
- After creation, wait for the materialization status to show Healthy or Active. This will indicate that data is successfully flowing to the destination and that the materialization process is running as expected.
Step 3: Execute the Cutover Strategy
You can implement the cutover once you've validated that the Estuary Flow pipeline works correctly. This is the most critical phase of the migration:
- Turn on your Estuary pipeline and observe real-time data streaming with sub-second latency.
- Once you have verified the accuracy of the data in the destination table(s) or API(s), freeze the Fivetran syncs at a clear cutoff point (e.g., 2 AM, or a weekend when traffic is low).
- Ensure that Fivetran performs no further syncs to avoid data inconsistencies.
Last but not least, delete all your connectors in Fivetran to complete the migration.
Data Transformation After Switching from Fivetran
Estuary offers seamless native integration with dbt Cloud, similar to Fivetran, allowing users to automatically trigger dbt jobs whenever fresh data becomes available in the destination. This integration creates a smooth transition from Fivetran, making orchestration between the data ingestion and transformation layer frictionless, thereby improving the efficiency of real-time data workflows.
Estuary eliminates manual intervention by automating data transformations, ensuring the entire data pipeline becomes more streamlined and responsive. Estuary's automation accelerates the transformation process and maintains consistency and accuracy in the data, enabling organizations to leverage up-to-date, actionable insights with minimal delays.
When configuring dbt Cloud integration in Estuary Flow, several advanced parameters allow for precise orchestration:
- Job Orchestration Parameters:
- Job ID: Supports multiple job configurations, allowing for complex transformation requirements based on specific use cases.
- Account ID: Links directly to your unique dbt Cloud instance, ensuring the appropriate environment is used.
- Access URL: A configurable endpoint for seamless dbt Cloud API access, allowing flexible integrations.
- API Key: A securely stored credential that provides authentication, ensuring only authorized access to the system.
- JobTriggerMode: Defines the behavior of the job trigger. Options include:
- Skip
- Replace
- Ignore
- causeMessage: A descriptive message documenting the cause of the trigger, such as "Triggered by Estuary data sync."
- Minimum Run Interval: Specifies the customizable interval for job execution, such as 15 minutes, with options like 5 minutes, 30 minutes, or 1 hour.
- Synchronization Behavior:
- Estuary's integration supports advanced synchronization patterns between materializations and dbt jobs, offering more flexibility and control.
- Unlike Fivetran’s rigid scheduling, Estuary can trigger dbt jobs based on data availability, providing fine-tuned control over real-time job execution.
In-line SQL Transformations
To create SQL transformations in Estuary, follow these steps to set up and manage your data workflows easily:
- Start by accessing the "Collections" tab within the Estuary platform. This is where all of your data collections are listed.
- Once you're in the "Collections" tab, you'll find an option to create a new transformation. Click on "New Transformation" to start defining your data transformation.
- After clicking "New Transformation," you'll be prompted to choose which data collection you want to transform. This allows you to target specific datasets and apply transformations that suit your needs.
- Next, you'll need to provide a name for the newly derived collection resulting from the transformation. This helps with transformed dataset traceability.
- The final stage involves crafting SQL queries to process and transform your selected data collection. You’ll use SQL to design custom logic that restructures, filters, or enriches your dataset according to specific requirements.
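To illustrate the kind of filter-and-enrich logic such a transformation might contain, here is an analogous query executed locally against an in-memory SQLite database with inlined sample data (Estuary's SQL derivations are SQLite-flavored; the table and column names here are hypothetical):

```python
# Local illustration of a filtering/enrichment transformation, using an
# in-memory SQLite database so the example is fully self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.0, "paid"), (2, 15.0, "refunded"), (3, 300.0, "paid")],
)

# Derive a collection of high-value paid orders with a computed tax column.
derived = conn.execute(
    """
    SELECT id, amount, ROUND(amount * 0.08, 2) AS tax
    FROM orders
    WHERE status = 'paid' AND amount >= 100
    ORDER BY id
    """
).fetchall()

print(derived)
```

In Estuary, a query like this runs continuously over the source collection, so the derived collection stays current as new documents arrive rather than being rebuilt in batches.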
TypeScript Transformations
Unlike Fivetran, which focuses on data extraction and loading with SQL-based transformations via dbt, Estuary offers streaming SQL and TypeScript transformations, refining data in real time to prepare it for downstream analysis.
Ready to Stop Overpaying for Fivetran?
With Estuary Flow, you can build real-time pipelines in minutes, with guaranteed pricing and sub-second latency. Start your free trial today →

About the author
Dani is a data professional with a rich background in data engineering and real-time data platforms. At Estuary, Dani focuses on promoting cutting-edge streaming solutions, helping to bridge the gap between technical innovation and developer adoption. With deep expertise in cloud-native and streaming technologies, Dani has successfully supported startups and enterprises in building robust data solutions.