Fivetran vs. Striim
Read this detailed 2024 comparison of Fivetran vs Striim. Understand their key differences, core features, and pricing to choose the right platform for your data integration needs.
Introduction
Do you need to load a cloud data warehouse? Synchronize data in real-time across apps or databases? Support real-time analytics? Use generative AI?
This guide is designed to help you compare Fivetran vs Striim across nearly 40 criteria for these use cases and more, and choose the best option for you based on your current and future needs.
Comparison Matrix
Use cases

| Use case | Fivetran | Striim | Estuary Flow |
|---|---|---|---|
| Database replication (CDC) - sources | MySQL, SQL Server, Postgres, Oracle (ELT load only). Single target only. Batch CDC only. | Real-time (and batch) replication (sub-second to hours) | MySQL, SQL Server, Postgres, AlloyDB, MariaDB, MongoDB, Firestore, Salesforce. ETL and ELT, real-time and batch. |
| Replication to ODS | Requires re-extraction of sources for new destinations | Requires re-extraction for new destinations | Supported; can backfill new destinations without re-extraction |
| Operational data integration | Focus on batch, some micro-batch connectors. No in-flight transformations. | Real-time replication; transforms via TQL | Real-time ETL data flows ready for operational use cases |
| Data migration | Only lightweight data-cleaning transformations are supported. Can be slow and expensive for large-volume datasets. Automatic schema evolution. | Supported, but complex | Strong schema inference and evolution support. Support for most relational databases. Continuous replication reliability. |
| Stream processing | Only point-to-point replication. No in-flight transformations or storage. | Supported (using TQL) | Real-time ETL in TypeScript and SQL |
| Operational analytics | Higher-latency batch ELT only | Supported (TQL transforms) | Integration with real-time analytics tools. Real-time transformations in TypeScript and SQL. Kafka compatibility. |
| AI pipelines | None | In-flight vector embedding generation | Pinecone support for real-time data vectorization. Transformations can call ChatGPT and other AI APIs. |
Connectors

| Feature | Fivetran | Striim | Estuary Flow |
|---|---|---|---|
| Number of connectors | Nearly 300 native connectors, plus 300+ lite (API) connectors | 100+ | 150+ high-performance connectors built by Estuary |
| Streaming connectors | Batch only (Kafka and Kinesis as sources only) | CosmosDB, MariaDB, MongoDB, MySQL, Oracle, Postgres, SQL Server | CDC, Kafka, Kinesis, Pub/Sub |
| Support for 3rd-party connectors | | | 500+ Airbyte, Stitch, and Meltano connectors |
| Custom SDK | Lite connectors by request; cloud function connectors | | SDK for source and destination connector development |
| API (for admin) | CLI for HVR only; API generally available | | API and CLI support |
Core features

| Feature | Fivetran | Striim | Estuary Flow |
|---|---|---|---|
| Batch and streaming | Batch only | Streaming-centric, but can do incremental batch | Batch and streaming |
| Delivery guarantee | Exactly once (batch only) | At least once | Exactly once (streaming, batch, mixed) |
| Load write method | Append-only or update in place (soft deletes) | Append-only | Append-only or update in place (soft or hard deletes) |
| DataOps support | CLI for HVR; API generally available | CLI, API | API and CLI support for operations; declarative definitions for version control and CI/CD pipelines |
| ELT transforms | Yes, with tight dbt integration | dbt Cloud integration | dbt integration |
| ETL transforms | | TQL transforms | Real-time, in SQL and TypeScript |
| Schema inference and drift | Strong schema inference and evolution support | Supported, with some limits by destination | Real-time schema inference for all connectors, based on source data structures rather than sampling |
| Store and replay | Requires re-extraction of sources for new destinations | Requires re-extraction for new destinations | Can backfill multiple targets and time ranges without a new extract, using customer-supplied cheap, scalable object storage |
| Time travel | | | Can restrict a materialization to a specific date range |
| Snapshots | N/A | N/A | Full or incremental |
| Ease of use | Easy-to-use connectors; dbt transformations require some learning | Takes time to learn flows, especially TQL | Streaming transforms may take some learning |
Deployment options

| Criterion | Fivetran | Striim | Estuary Flow |
|---|---|---|---|
| Deployment options | Cloud; limited private cloud (5 sources, 4 destinations); self-hosted HVR | On-prem, private cloud, public cloud | Open source, public cloud, private cloud |
Capabilities

| Capability | Fivetran | Striim | Estuary Flow |
|---|---|---|---|
| Performance (minimum latency) | Theoretically 15 minutes (Enterprise) or 1 minute (Business Critical), but most deployments run at tens-of-minutes to hourly intervals | < 100 ms | < 100 ms in streaming mode; also supports any batch interval, and can mix streaming and batch in one pipeline |
| Reliability | Medium-high; some issues with CDC | High | High |
| Scalability | Medium-high; HVR is high scale | High (GB/sec) | High; 5-10x the scalability of others in production |
Security

| Criterion | Fivetran | Striim | Estuary Flow |
|---|---|---|---|
| Data source authentication | OAuth / HTTPS / SSH / SSL / API tokens | SAML, RBAC, SSH/SSL, VPN | OAuth 2.0 / API tokens, SSH/SSL |
| Encryption | At rest and in motion | In motion | At rest and in motion |
Support

| Criterion | Fivetran | Striim | Estuary Flow |
|---|---|---|---|
| Support | Good G2 ratings, but generally slow support | | Fast support, engagement, and time to resolution, including fixes; Slack community |
Cost

| Criterion | Fivetran | Striim | Estuary Flow |
|---|---|---|---|
| Vendor costs | Highest cost; much higher for non-relational (SaaS app) integrations | | 2-5x lower than the others, and lower still at higher data volumes; also reduces destination costs through efficient in-place writes and scheduling |
| Data engineering costs | Simplified dbt; good schema inference and evolution automation | Requires a proprietary SQL-like language (TQL) | Focus on DevEx, up-to-date docs, and an easy-to-use platform |
| Admin costs | Some admin and troubleshooting; CDC issues; frequent upgrades | | "It just works" |
Estuary Flow
Estuary was founded in 2019, but its core technology, the open source Gazette project, has been evolving for a decade in the ad-tech space, where many other real-time data technologies originated.
Estuary Flow is the only real-time and ETL data pipeline vendor in this comparison. There are some other ETL and real-time vendors in the honorable mention section, but those are not as viable a replacement for Fivetran.
While Estuary Flow is also a great option for batch sources and targets, it really shines at any combination of change data capture (CDC), real-time and batch ETL or ELT, and loading multiple destinations from the same pipeline. Estuary Flow is currently the only vendor in this comparison to offer a private cloud deployment: a dedicated data plane deployed in a private customer account but managed as SaaS by a shared control plane. It combines the security and dedicated compute of on-prem with the simplicity of SaaS.
CDC works by reading record changes from the database's write-ahead log (WAL), which records each change exactly once as part of each transaction. It is the easiest, lowest-latency, lowest-load way to extract all changes, including deletes, which are otherwise not captured by default. Unfortunately, ELT vendors like Airbyte, Fivetran, Meltano, and Hevo all rely on batch mode for CDC. Batching forces the source to retain older write-ahead log segments between runs, which is not the intended use of CDC and can put the source under distress or even lead to failures.
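To make the mechanics concrete, here is a minimal, vendor-neutral Python sketch of a CDC consumer applying WAL-style change events in commit order. The event shape and function names are illustrative, not any product's API:

```python
# Vendor-neutral illustration (not any product's API): a CDC consumer applies
# WAL-style change events in commit order. Because deletes appear in the log,
# they are replicated too -- something periodic re-queries would miss.

def apply_change(replica: dict, event: dict) -> None:
    """Apply one logical change event to a replica keyed by primary key."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["row"]
    elif op == "delete":
        replica.pop(key, None)  # the delete is explicit in the log

wal_events = [
    {"op": "insert", "key": 1, "row": {"name": "Ada"}},
    {"op": "insert", "key": 2, "row": {"name": "Bob"}},
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "delete", "key": 2},  # captured once, in transaction order
]

replica: dict = {}
for event in wal_events:
    apply_change(replica, event)

print(replica)  # {1: {'name': 'Ada L.'}}
```

A batch re-query of the source at the end would show the same surviving rows, but it could never tell you that key 2 once existed and was deleted; the log-based approach captures that for free.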
Estuary Flow has a unique architecture: it streams and stores streaming or batch data as collections, with transactional guarantees of exactly-once delivery from each source to each target. With CDC, that means each change is captured immediately and exactly once, then reused for multiple targets or later use. Collections also enable backfilling, restreaming, transforms, and other compute. The result is the lowest load and latency for any source, and the ability to reuse the same data for multiple real-time or batch targets across analytics, apps, and AI, or for other workloads such as stream processing, monitoring, and alerting.
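The collection idea can be sketched in a few lines of Python. This is a conceptual illustration only (the class and method names are invented, not Estuary's actual API): a durable log captures each change once, and a target added later backfills by replaying the log rather than re-extracting the source:

```python
# Conceptual sketch only -- class and method names are invented, not Estuary's
# API. A durable, append-only "collection" captures each change once; targets
# added later backfill by replaying the stored log, with no new source extract.

class Collection:
    def __init__(self):
        self.log = []        # durable event log (object storage in practice)
        self.targets = []    # live subscribers

    def capture(self, event):
        self.log.append(event)        # captured exactly once from the source
        for target in self.targets:
            target.append(event)      # fanned out to all targets in real time

    def backfill(self, target):
        target.extend(self.log)       # replay history instead of re-extracting
        self.targets.append(target)   # then keep the target current

collection = Collection()
warehouse: list = []
collection.backfill(warehouse)                   # first target, live from the start
collection.capture({"op": "insert", "key": 1})
collection.capture({"op": "update", "key": 1})

vector_db: list = []
collection.backfill(vector_db)                   # added later, gets full history
collection.capture({"op": "delete", "key": 1})

print(len(warehouse), len(vector_db))  # 3 3
```

Note that the second target receives the complete change history even though it subscribed late, and the source database was only read once.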
Estuary Flow also has broad packaged and custom connectivity, making it one of the top ETL tools. It has 150+ native connectors; while that number may seem low, each is built for low latency and/or scale. In addition, Estuary is the only vendor that also supports Airbyte, Meltano, and Stitch connectors, which easily adds 500+ more. Getting official support for one of these connectors is a quick "request-and-test" with Estuary to confirm it supports your use case in production. Most of these connectors are not as scalable as Estuary-native, Fivetran, or some ETL connectors, so it's important to confirm they will work for you. Flow's support for TypeScript and SQL also enables ETL.
Pros
- Modern data pipeline: Estuary Flow has the best support for schema drift, evolution, and automation, as well as modern DataOps.
- Modern transforms: Flow is also both low-code and code-friendly with support for SQL, TypeScript (and Python coming) for ETL, and dbt for ELT.
- Lowest latency: Several ETL vendors support low latency, but Estuary achieves the lowest, at sub-100 ms. ELT vendors are generally batch-only.
- High scale: Unlike most ELT vendors, leading ETL vendors do scale. Estuary is proven to scale with one production pipeline moving 7GB+/sec at sub-second latency.
- Most efficient: Estuary alone has the fastest and most efficient CDC connectors. It is also the only vendor to enable exactly-and-only-once capture, which puts the least load on a system, especially when you’re supporting multiple destinations including a data warehouse, high performance analytics database, and AI engine or vector database.
- Deployment options: Of the ETL and ELT vendors, Estuary is currently the only vendor to offer open source, private cloud, and public multi-tenant SaaS.
- Reliability: Estuary’s exactly-once transactional delivery and durable stream storage makes it very reliable.
- Ease of use: Estuary is one of the easiest to use tools. Most customers are able to get their first pipelines running in hours and generally improve productivity 4x over time.
- Lowest cost: For data at any volume, Estuary is the clear low-cost winner in this evaluation. Rivery is second.
- Great support: Customers consistently cite great support as one of the reasons for adopting Estuary.
Cons
- On-premises connectors: Estuary has 150+ native connectors and supports 500+ Airbyte, Meltano, and Stitch open source connectors. But if you need connectivity to on-premises apps or data warehouses, confirm that all the connectors you need exist.
- Graphical ETL: Estuary has been more focused on SQL and dbt than graphical transformations. While it does infer data types and convert between sources and targets, there is currently no graphical transformation UI.
Pricing
Of the various ELT and ETL vendors, Estuary is the lowest total-cost option. Estuary charges $0.50 per GB of data moved from each source or to each target, plus $100 per connector per month, so you can expect to pay a minimum of a few thousand dollars per year. Rivery, the next-lowest-cost option, is the only other vendor that publishes pricing: 1 RPU per 100 MB, which works out to $7.50 to $12.50 per GB depending on the plan. Estuary becomes the lowest-cost option by the time you reach tens of GB per month, and by 1 TB a month it is 10x lower cost than the rest.
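Using the published rates above ($0.50 per GB moved per source or target, plus $100 per connector per month), a back-of-the-envelope estimate looks like this. The traffic figures, and the assumption that both the source side and the target side are metered, are illustrative, not a quote:

```python
# Back-of-the-envelope sketch of Estuary's published pricing.
# Assumption (ours, for illustration): data moved is metered on both the
# source side and the target side, so 500 GB through one source and one
# target bills as 1,000 GB.

def estuary_monthly_cost(gb_moved: float, connectors: int) -> float:
    """$0.50 per GB moved + $100 per connector per month."""
    return 0.50 * gb_moved + 100 * connectors

# One source + one destination moving 500 GB/month:
cost = estuary_monthly_cost(gb_moved=500 * 2, connectors=2)
print(cost)  # 700.0

# Rivery's published 1 RPU per 100 MB works out to $7.50-$12.50/GB:
print(500 * 7.50, 500 * 12.50)  # 3750.0 6250.0
```

At this volume the per-GB gap already dominates the fixed connector fees, which is why the cost advantage widens as volumes grow.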
Fivetran
Fivetran was founded in 2012 by data scientists who wanted an integrated stack to capture and analyze data. The name was a play on Fortran and meant to refer to a programming language for big data. After a few years the focus shifted to providing just the data integration part because that’s what so many prospects wanted. Fivetran was designed as an ELT (Extract, Load, and Transform) architecture because in data science you don’t usually know what you’re looking for, so you want the raw data.
In 2018, Fivetran raised their series A, and then added more transformation capabilities in 2020 when it released Data Build Tool (dbt) support. That year Fivetran also started to support CDC. Fivetran has since continued to invest more in CDC with its HVR acquisition.
Fivetran’s design worked well for many companies adopting cloud data warehouses starting a decade ago. While all ETL vendors also supported “EL” and it was occasionally used that way, Fivetran was cloud-native, which helped make it much easier to use. The “EL” is mostly configured, not coded, and the transformations are built on dbt core (SQL and Jinja), which many data engineers are comfortable using.
Pros
- Ease of use: Fivetran is modern SaaS ELT with an easy-to-use UI, especially compared to more traditional ETL tools. It allows you to set up a data pipeline without coding.
- Pre-built Connectors: Fivetran offers nearly 300 native connectors and an additional 300+ “lite” connectors based on APIs.
- Scalability: Fivetran is known for scaling better than many of its competitors.
- Integration with dbt: Fivetran has done a good job of integrating dbt core into the Fivetran platform.
- Focus on replication: Fivetran is good at data extraction and loading (EL), even if it is batch only, making it a strong choice if your primary goal is to efficiently move data into your warehouse for analysis.
- Advanced schema evolution: Fivetran and Estuary are the two leading vendors with support for automating how changes in sources are passed through to destinations.
Cons
- Latency: While Fivetran uses change data capture at the source, it is batch CDC, not streaming. The Enterprise tier guarantees 15-minute latency; Business Critical guarantees 1 minute but costs more than 2x the standard edition. The ELT architecture can also be slowed by target load and transformation times.
- Costs: Another major complaint is Fivetran's high vendor cost, which customers have reported at 5x the cost of Estuary. Fivetran pricing is based on monthly active rows (MAR): rows that change at least once per month. This may sound cheap, but for several reasons (see below and the pricing section) it can quickly add up.
- Unpredictable costs: A major reason for high costs is that MARs are counted against Fivetran's internal representation of rows, not the rows as you see them in the source. For some data sources you must extract all the data across tables, which can mean many more rows. Fivetran also converts data from non-relational sources such as SaaS apps into highly normalized relational data. Both can make MARs, and costs, soar unexpectedly, and none of this accounts for the initial load, where all rows count.
- Reliability: Another reason customers replace Fivetran is reliability. Customers have struggled with a combination of load-failure alerts and subsequent support calls that stretch out time to resolution. There have been several complaints about reliability with MySQL and Postgres CDC, due in part to Fivetran's use of batch CDC. Fivetran also had a 2.5-day outage in 2022. Make sure you understand Fivetran's current SLA in detail: Fivetran has had an "allowed downtime interval" of 12 hours before connector downtime SLAs take effect, and downtime caused by their cloud provider is excluded.
- Deployment options: While Fivetran claims private cloud as an option, it is limited: the deployment requires some installation work and supports only 8 sources and 5 destinations. There is also a self-hosted option, but for HVR only.
- Support: Customers also complain about Fivetran support being slow to respond. Combined with reliability issues, this can lead to a substantial amount of data engineering time being lost to troubleshooting and administration.
- DataOps: Fivetran does not provide much control or transparency into what they do with data and schema. They alter field names and change data structures and do not allow you to rename columns. This can make it harder to migrate to other technologies. Fivetran also doesn’t always bring in all the data depending on the data structure, and does not explain why.
- Roadmap: Customers frequently comment that Fivetran does not reveal as much of its future direction or roadmap as the others in this comparison, and that the roadmap does not adequately address many of the points above.
Pricing
Fivetran's pricing is based on monthly active rows (MAR). This can be very unpredictable because MARs are based on Fivetran’s internal representation of data, not yours. Any non-relational or nested data gets turned into highly normalized rows that raise costs.
Lower latency is also very expensive. Reducing latency from 1 hour to 15 minutes can cost 33-50% more per million MAR, and reducing it to 1 minute, which is rarely deployed, costs 100% (2x) or more. Some connectors require all data to be extracted each time, which also becomes more expensive as you lower latency and increase the number of extracts.
Even then, you still have the latency of the data warehouse load and transformations. The additional costs of frequent ingestions and transformations in the data warehouse can also be expensive and take time. Companies often keep latency high to save money.
While a small deployment (2M MARs/month) can cost $700-$2,667 per month, 10M MARs/month can reach $10K a month. It is not unheard of for Fivetran costs to reach six figures annually, especially with certain high-cost connectors that generate many more MARs.
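As a rough illustration of how normalization drives MAR counts, the sketch below back-solves a per-1,000-MAR rate from the ballpark range above ($700-$2,667 for 2M MARs/month). The rates and the normalization multiplier are assumptions for illustration, not Fivetran's published price list:

```python
# Illustrative sketch only: back-solves a per-1,000-MAR rate from the
# ballpark range quoted above ($700-$2,667 per 2M MARs/month). The rates
# and the normalization multiplier are assumptions, not Fivetran's price list.

def fivetran_monthly_range(rows_changed: int, normalization_factor: float = 1.0):
    """Estimated (low, high) monthly cost in dollars."""
    mars = rows_changed * normalization_factor
    low_rate, high_rate = 0.35, 1.33          # $ per 1,000 MARs (back-solved)
    return (round(mars / 1000 * low_rate, 2), round(mars / 1000 * high_rate, 2))

# 2M changed rows from a relational source (rows map roughly 1:1 to MARs):
print(fivetran_monthly_range(2_000_000))        # (700.0, 2660.0)

# The same 2M rows from a nested SaaS source, normalized into ~5x as many rows:
print(fivetran_monthly_range(2_000_000, 5.0))   # (3500.0, 13300.0)
```

The point of the multiplier is that the same source activity can bill very differently once nested SaaS data is flattened into many normalized rows, which is exactly the unpredictability customers complain about.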
Striim
Striim is a real-time data integration and streaming platform that simplifies the movement of data from various sources, including databases, cloud services, and messaging systems. Striim offers out-of-the-box connectors for real-time data capture, replication, and stream processing, making it a competitive option for enterprise-grade streaming architectures.
Pros
- Low-Latency Streaming: Striim specializes in low-latency data movement.
- Enterprise-Grade Features: Striim offers built-in support for exactly-once processing, data transformations, in-flight processing, and scalability.
- Comprehensive Integration: Striim provides pre-built connectors to a wide array of databases (including Oracle and SQL Server), cloud storage systems, messaging platforms like Kafka, and more.
Cons
- Complex Pricing Model: Striim’s pricing model can be complex, with costs depending on factors such as data volume, number of sources, and the specific features used. It may not be as cost-effective for smaller businesses with modest data needs.
- Vendor Lock-In: Like other managed streaming solutions, Striim can create a dependency on its platform, making migration to alternative solutions or self-hosted setups more challenging.
- Limited Open Source: While Striim provides a wide range of features, it is not an open-source platform, meaning users have less flexibility and control over the code and architecture compared to open-source options like Kafka and Debezium.
Pricing
Striim operates on a subscription model with pricing tiers based on the number of data sources, targets, and data volumes. Pricing is typically custom-quoted based on the organization’s specific needs. Tiers start at $1,000/mo + Compute $0.75 /vcpu/hr & Data Transfer $0.10/GB in, $0.10/GB out.
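Using the entry-tier numbers above, a rough monthly estimate can be sketched as follows. The workload figures (vCPUs, hours, data volumes) are illustrative assumptions, not a quote:

```python
# Rough sketch from Striim's published entry tier: $1,000/mo base,
# plus $0.75 per vCPU-hour of compute, plus $0.10/GB in and $0.10/GB out.
# The workload below (4 vCPUs, ~730 h/mo, 500 GB each way) is an assumption.

def striim_monthly_cost(vcpus: int, hours: float, gb_in: float, gb_out: float) -> float:
    return round(1000 + 0.75 * vcpus * hours + 0.10 * gb_in + 0.10 * gb_out, 2)

print(striim_monthly_cost(vcpus=4, hours=730, gb_in=500, gb_out=500))  # 3290.0
```

Note that for an always-on streaming pipeline the compute term dominates, so the effective cost is driven more by provisioned vCPUs than by data volume.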
How to choose the best option
For the most part, if you are interested in a cloud option and the connectivity you need exists, Estuary is worth evaluating.
- Modern data pipeline: Estuary has the broadest support for schema evolution and modern DataOps.
- Lowest latency: If low latency matters, Estuary will be the best option, especially at scale.
- Highest data engineering productivity: Estuary is among the easiest to use, on par with the best ELT vendors, and has delivered up to 5x greater productivity than the alternatives.
- Connectivity: If you're more concerned about cloud services, Estuary or another modern ELT vendor may be your best option. If you need more on-premises connectivity, consider more traditional ETL vendors.
- Lowest cost: Estuary is the clear low-cost winner for medium and larger deployments.
- Streaming support: Estuary has a modern approach to CDC built for reliability and scale, plus strong Kafka support. Its real-time CDC is arguably the best of the options here. Some ETL vendors like Informatica and Talend also have real-time CDC; ELT-only vendors support only batch CDC.
Ultimately, the best approach is to identify your current and future needs for connectivity, key data integration features, performance, scalability, reliability, and security, and use that information to choose a good short-term and long-term solution.
GETTING STARTED WITH ESTUARY
Free account
Getting started with Estuary is simple. Sign up for a free account.
Docs
Make sure you read through the documentation, especially the get started section.
Community
I highly recommend you also join the Slack community. It's the easiest way to get support while you're getting started.