Estuary is building the next generation of real-time data integration solutions.

We're creating a new kind of DataOps platform that empowers engineering teams to build real-time, data-intensive pipelines and applications at scale, with minimal friction, in a UI or CLI. We aim to make real-time data accessible to the analyst, while bringing power tooling to the streaming enthusiast. Flow unifies a team's databases, pub/sub systems, and SaaS around their data, without requiring new investments in infrastructure or development.

Estuary develops in the open to produce both the runtime for our managed service and an ecosystem of open-source connectors. You can read more about our story here.

9 years of real-time innovation

about us

History

We didn't set out to make real-time data flows more accessible by abstracting away the low-level work… it's just what happened.

Our team researched streaming frameworks while working with billions of daily events, and ultimately realized we'd have to build our own (Gazette): a scalable, distributed streaming framework designed with kappa architectures in mind, requiring less continuous resource management, and capable of unifying both our batch and real-time pipelines.

We've been innovating to make real-time data more accessible to all ever since.

what’s happening

In the Media

Engineering podcast
Johnny, Dave, and Tobias discuss why we built Gazette, the growth of streaming, and the rise of the real-time data lake.
FirstMark invests in Estuary
Matt Turck of FirstMark announces the firm's $7 million Series A investment in Estuary to simplify streaming + batch unification.
Data landscape
A LinkedIn post from our founder Dave Yaffe in which he (and 50+ others) charts the evolving ecosystem of real-time data integration players.

say hello

Meet Our Team

Andrew Gale
Account Executive
Joseph Shearer
Senior Software Engineer
Dave Yaffe
CEO & Co-founder
Mahdi Dibaiee
Senior Software Engineer
Phil Fried
VP of Engineering
Travis Jenkins
Lead Front-End Engineer
Will Baker
Senior Software Engineer
Mike Danko
Lead Cloud Infrastructure Engineer
Johnny Graettinger
CTO & Co-founder
William Donnelly
Senior Software Engineer
Kiahna Tucker
Front-End Engineer
Samantha Jacobus
Recruiter & Office Manager

Our Investors

FirstMark
Operator

Come work with us

Careers

About you: You’re passionate about the complexities and potential of our data-driven world, self-motivated, curious, and adaptable.

About us: We're a rapidly growing, highly technical team built by successful repeat founders, working to take the friction out of data engineering.

Current openings

What's it like

Working at Estuary

Location

We have offices in both New York City and Columbus, Ohio, as well as the option to work remotely.

Benefits

We provide 100% employee healthcare coverage, a 401(k), competitive equity, and unlimited vacation time.

Culture

Like the product we build, our culture is forward-thinking and open. Our team operates on a foundation of trust and is resourceful, collaborative, and independent.

Community

We offer perks including team happy hours, weekly lunches, and quarterly off-sites. With our rapid growth, now’s an exciting time to come aboard.

Know somebody who would be a good fit? We offer a $2,500 referral bonus.

Apply today

Current Openings

Data Engineer and Technical Writer

Location: Remote

Estuary is looking for a data engineer who can be a leading technical voice on best practices and how to build modern data pipelines. Estuary Flow is a real-time ETL and CDC platform-as-a-service that lets data engineers build a unified data pipeline from many sources to many targets in minutes.

As a technical face of Estuary, you'll play a pivotal role in building the data engineering community, teaching fellow data engineers best practices and how to use Estuary Flow, and helping make customers successful. 

The ideal candidate has not only worked as a data engineer but has also been outspoken in sharing best practices with the data engineering community through blogs, tutorials, webinars, videos, or face-to-face events.

While many of your activities will fall under developer advocacy, and you should enjoy that work and working with customers, first and foremost you should be an experienced data engineer.

Responsibilities:

  • Explain data engineering best practices to the data engineering community through posts, articles, videos, podcasts, or webinars.
  • Create data pipelines and tutorials, and give demos to explain best practices or key features to data engineers.
  • Work with the Estuary community to help companies discover, learn, and adopt Estuary Flow.
  • Collaborate closely with engineering to ensure a high-quality product, documentation, tutorials, and articles.

What We're Looking For:

  • 5+ years of combined experience in data engineering, including several data pipeline deployments or work with customers on their deployments.
  • Experience publishing articles, giving technical webinars and presentations, or creating tutorials and implementation guides.  
  • Strong speaking and writing skills in English.
  • Experience working in startup environments is preferred.
  • A proactive, entrepreneurial mindset; experience founding companies or working on side projects is a bonus!
Get in touch to apply

Software Engineer

Location: Columbus, OH / Remote

Estuary builds connectivity infrastructure, which allows teams to integrate real-time and historical data, eliminating the need for engineering teams to manage data infrastructure.

We are looking for engineers who are passionate about building innovative technology that challenges the status quo of modern data processing tools. Our team is small but mighty. This is just the beginning of our story, and your work will have a direct impact on our product roadmap and the data industry at large.

Responsibilities:

  • Design, implement, deploy, test, and maintain substantial projects in Rust and Go
  • Participate in the design and development of a large-scale distributed system
  • Collaborate with other engineers, founders, and product management to build innovative data infrastructure solutions
  • Participate in the design and development of architectural initiatives
  • Write code and participate in reviews
  • Troubleshoot and debug complex technical issues
  • Stay up to date on current technologies 

Qualifications:

  • 3+ years of industry experience working on large-scale backend software development
  • A strong desire to work collaboratively with a team
  • A passion for continuous learning
  • Eagerness to take on new challenges independently
  • Clear and effective communication, both written and verbal

Bonus:

  • Experience working on large-scale distributed systems and in cloud environments
  • Knowledge of programming in Rust and Go
  • Contribution to open-source communities
  • A background in start-up environments 
Get in touch to apply

Let's talk about your data

Have a specific question or comment? Send us a note and a team member will reach out to you shortly.