

Stream into Amazon S3 Parquet with your free account
Continuously ingest and deliver both streaming and batch change data from 100s of sources using Estuary's custom no-code connectors.
- <100ms data pipelines
- 100+ connectors
- 2-5x lower cost than batch ELT



Amazon S3 Parquet connector details
The Amazon S3 Parquet materialization connector writes delta updates from Estuary Flow collections to an Amazon S3 bucket in Apache Parquet format, providing efficient, columnar storage optimized for analytics and downstream data lake use cases.
- Data format: Outputs batched delta updates as Parquet files for compact, query-ready storage
- Upload scheduling: Configure upload intervals and file size limits to control data batching frequency
- Flexible authentication: Supports both AWS Access Keys and IAM roles for secure access
- Schema-aware typing: Automatically maps Flow collection field types to equivalent Parquet data types
- File versioning: Organizes files by path and version counters for easy traceability and reprocessing
- Scalable and compatible: Works with AWS S3 and S3-compatible APIs, such as MinIO or Wasabi
💡 Tip: Use this connector to build cost-efficient, analytics-ready data lakes by streaming Flow data to S3 in Parquet format, ready for querying in Athena, Snowflake, or Databricks.
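To show how the connector's output can be consumed downstream, here is a minimal sketch that lists and reads the Parquet files the materialization writes to S3, using boto3 and pyarrow. The bucket name `my-data-lake` and prefix `flow/orders/` are hypothetical placeholders; substitute the bucket and path prefix configured in your materialization.

```python
# Minimal sketch: read Parquet files that the materialization has written to S3.
# Assumes boto3 and pyarrow are installed and AWS credentials are available
# (an access key pair or an IAM role, matching the connector's auth options).
# "my-data-lake" and "flow/orders/" are hypothetical placeholders.
import io

import boto3
import pyarrow as pa
import pyarrow.parquet as pq

BUCKET = "my-data-lake"   # S3 bucket the connector materializes into
PREFIX = "flow/orders/"   # path prefix configured for the binding

s3 = boto3.client("s3")

# List the versioned files under the configured prefix (assumes a .parquet suffix).
tables = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if not obj["Key"].endswith(".parquet"):
            continue
        # Download each file and parse it with pyarrow.
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        tables.append(pq.read_table(io.BytesIO(body)))

if tables:
    # Batches from the same collection share a schema, so they can be concatenated.
    combined = pa.concat_tables(tables)
    print(combined.schema)
    print(f"{combined.num_rows} rows across {len(tables)} files")
```

Because the connector writes delta updates, the combined table may contain multiple rows per key; deduplicate downstream (for example in Athena, Snowflake, or Databricks) if you need a current-state view.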


HIGH THROUGHPUT
Distributed event-driven architecture enables boundless scaling with exactly-once semantics.

DURABLE REPLICATION
Cloud-storage-backed CDC with heartbeats ensures reliability, even if your destination is down.

REAL-TIME INGESTION
Capture and relay every insert, update, and delete in milliseconds.
Real-time, high throughput
Point a connector at Amazon S3 Parquet and replicate changes in under 100ms, leveraging high-availability, high-throughput change data capture. Or choose from 100s of batch and real-time connectors to move and transform data using ELT and ETL.
- Ensure your Amazon S3 Parquet insights always reflect the latest data by connecting your databases to Amazon S3 Parquet with change data capture.
- Or connect critical SaaS apps to Amazon S3 Parquet with real-time data pipelines.
Don't see a connector? Request one and our team will get back to you within 24 hours.
Pipelines as fast as Kafka, easy as managed ELT/ETL, cheaper than building it.
Feature Comparison
| Feature | Estuary | Batch ELT/ETL | DIY Python | Kafka |
| --- | --- | --- | --- | --- |
| Price | $ | $$-$$$$ | $-$$$$ | $-$$$$ |
| Speed | <100ms | 5min+ | Varies | <100ms |
| Ease | Analysts can manage | Analysts can manage | Data Engineer | Senior Data Engineer |
| Scale | | | | |

Deliver real-time and batch data from DBs, SaaS, APIs, and more
Build Free Pipeline