
Oracle to Amazon RDS for PostgreSQL Migration Guide: Steps, Tools, Challenges

Migrate from Oracle to Amazon RDS for PostgreSQL with Estuary Flow. Real-time CDC, backfill, and near-zero downtime for enterprise data teams.

Introduction

Enterprises worldwide are rethinking their reliance on Oracle Database. While Oracle has long been a trusted transactional system, its high licensing costs, vendor lock-in, and complex administration make it harder to justify in today’s cloud-first, cost-sensitive environment.

A popular alternative is Amazon RDS for PostgreSQL, a fully managed relational database that combines the reliability of PostgreSQL with the scalability and automation of AWS. For organizations looking to modernize, RDS offers lower costs, reduced operational burden, and compatibility with the PostgreSQL ecosystem.

However, migrating from Oracle to Amazon RDS for PostgreSQL is not a straightforward “export and import.” Schema differences, Oracle-specific features like PL/SQL, and downtime risks during cutover often complicate the process. This is where Estuary Flow helps enterprises migrate with real-time CDC, backfill pipelines, and near-zero downtime.

This guide explains the benefits of RDS, the challenges of Oracle migrations, and a step-by-step approach using Estuary Flow to ensure a smooth transition.


🔹 Quick Start: Oracle to Amazon RDS for PostgreSQL with Estuary Flow

Migrate from Oracle to Amazon RDS for PostgreSQL in just a few guided steps. No code, no complex tooling, and near-zero downtime.

  1. Create Capture → In Estuary Flow, click + New Capture and choose Oracle Database (Real-time).
  2. Create Materialization → Click + New Materialization and select Amazon RDS for PostgreSQL.
  3. Bind & Publish → Map Oracle collections to RDS tables, then publish your pipeline.
  4. Monitor & Cut Over → Keep both databases in sync until final cutover, then switch apps with minimal downtime.

👉 Click to jump to the Step-by-Step section and read the full details.

Understanding Amazon RDS for PostgreSQL

Amazon RDS for PostgreSQL is a fully managed database service that simplifies running PostgreSQL in the AWS cloud. Unlike self-managed PostgreSQL or Oracle, RDS automates routine tasks such as backups, patching, and failover, freeing enterprises to focus on applications rather than database operations.

Key Benefits for Enterprises

  • Fully managed operations: AWS handles patching, upgrades, monitoring, and automated backups.
  • Cost efficiency: Pay-as-you-go model with no upfront Oracle-style licensing fees.
  • Reliability: High availability through Multi-AZ deployments, automated failover, and read replicas.
  • Scalability: Elastic storage scaling up to 64 TB and support for multiple read replicas.
  • Ecosystem compatibility: Access to PostgreSQL extensions (PostGIS, pg_partman, etc.), open-source tools, and developer familiarity.

Why Enterprises Choose Amazon RDS for PostgreSQL Instead of Aurora

AWS offers two PostgreSQL-compatible services: Aurora PostgreSQL and Amazon RDS for PostgreSQL. Both are cloud-native, managed solutions, but they serve slightly different enterprise needs. Choosing between them is one of the first decisions organizations face when planning a migration away from Oracle.

When Amazon RDS for PostgreSQL is the Right Choice

  • Cost-sensitive workloads: RDS is typically less expensive than Aurora because it does not use a distributed storage layer. Enterprises with departmental databases, smaller applications, or development environments often prefer RDS to control costs.
  • Simplicity over scale: RDS provides managed PostgreSQL without additional architectural complexity. If an application doesn’t require Aurora’s extreme performance or scaling, RDS is sufficient.
  • Standard PostgreSQL compatibility: RDS runs community PostgreSQL with extensions like PostGIS, pg_partman, and more, making it ideal for teams that want open-source consistency without Oracle’s proprietary constraints.

When Aurora PostgreSQL Might Be Better

  • High-performance, mission-critical apps: Aurora delivers up to 3x the throughput of standard PostgreSQL.
  • Large-scale deployments: Aurora can scale storage automatically and handle millions of transactions per second.
  • Multi-region replication: Aurora Global Database enables near real-time cross-region replication, valuable for enterprises with global footprints.

Learn: Migrate from Oracle to Amazon Aurora for PostgreSQL

Decision Framework

| Consideration | Best Fit: RDS for PostgreSQL | Best Fit: Aurora PostgreSQL |
| --- | --- | --- |
| Cost | Lower overall TCO | Higher cost, but justified by scale |
| Workload Size | Small to medium | Medium to very large |
| Complexity | Simple, straightforward deployments | Enterprise-grade, distributed systems |
| Performance Needs | Moderate | High-performance, low-latency apps |
| Migration Use Case | Departmental apps, analytics, dev/test | Global or enterprise-critical systems |

Enterprises that want a cost-effective, managed PostgreSQL service for moderate workloads often choose Amazon RDS for PostgreSQL instead of Aurora. It provides the right balance of affordability, reliability, and compatibility, especially for organizations migrating away from Oracle’s heavy licensing model.

See how other enterprises modernized their databases: Success Stories

👉 Already know the challenges? Skip straight to the Step-by-Step tutorial

Migration Challenges: Oracle to Amazon RDS for PostgreSQL

Migrating from Oracle to Amazon RDS for PostgreSQL can unlock significant cost savings and modernization benefits. However, enterprises quickly realize this is not a simple export–import exercise. The technical and operational differences between the two platforms create hurdles that must be addressed carefully.

1. Schema and Data Type Differences

  • Oracle’s NUMBER, CLOB, and BLOB types don’t always map cleanly into PostgreSQL equivalents such as NUMERIC, TEXT, and BYTEA (see the sketch after this list).
  • Case sensitivity and reserved keywords differ between Oracle and PostgreSQL, which can break queries or cause schema conflicts.
  • Large schemas with hundreds of tables and complex relationships make manual conversion time-consuming and error-prone.
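To make the mapping concrete, here is a hedged sketch of how a hypothetical Oracle table might translate to PostgreSQL; the exact choices (for example, NUMERIC versus BIGINT) depend on the actual value ranges in your data.

```sql
-- Illustrative only: a hypothetical Oracle table and one possible PostgreSQL equivalent.

-- Oracle
CREATE TABLE orders (
  order_id   NUMBER(10)   PRIMARY KEY,
  amount     NUMBER(12,2) NOT NULL,
  notes      CLOB,
  invoice    BLOB,
  created_at TIMESTAMP DEFAULT SYSTIMESTAMP
);

-- PostgreSQL
CREATE TABLE orders (
  order_id   NUMERIC(10)   PRIMARY KEY,  -- BIGINT may be preferable if values fit
  amount     NUMERIC(12,2) NOT NULL,
  notes      TEXT,                       -- CLOB maps to TEXT
  invoice    BYTEA,                      -- BLOB maps to BYTEA
  created_at TIMESTAMP DEFAULT now()     -- SYSTIMESTAMP has no direct twin
);
```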

2. PL/SQL Incompatibility

  • Oracle relies heavily on PL/SQL for stored procedures, triggers, and packages.
  • PostgreSQL uses PL/pgSQL, which is similar but not directly compatible.
  • Refactoring procedural code often becomes one of the most resource-intensive aspects of Oracle-to-Postgres migrations.
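For a sense of the refactoring involved, compare the same trivial function in both dialects. This is a minimal sketch using a hypothetical get_discount function; real Oracle packages are far more involved.

```sql
-- Oracle PL/SQL
CREATE OR REPLACE FUNCTION get_discount(p_total IN NUMBER) RETURN NUMBER IS
BEGIN
  RETURN CASE WHEN p_total > 1000 THEN p_total * 0.1 ELSE 0 END;
END;
/

-- PostgreSQL PL/pgSQL: note RETURNS, the $$ quoting, and the LANGUAGE clause
CREATE OR REPLACE FUNCTION get_discount(p_total NUMERIC) RETURNS NUMERIC AS $$
BEGIN
  RETURN CASE WHEN p_total > 1000 THEN p_total * 0.1 ELSE 0 END;
END;
$$ LANGUAGE plpgsql;
```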

3. Partitioning and Indexing Gaps

  • Oracle offers advanced partitioning strategies and indexing options that may not translate directly to PostgreSQL.
  • Enterprises often need to redesign partitioning logic to maintain performance after migration.
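As a hedged illustration of that redesign, here is a hypothetical range-partitioned table in each system. PostgreSQL (10+) requires each partition to be created explicitly, which extensions like pg_partman can automate.

```sql
-- Oracle: partitions declared inline with the table
CREATE TABLE events (
  event_id   NUMBER,
  event_date DATE
)
PARTITION BY RANGE (event_date) (
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01'),
  PARTITION p2025 VALUES LESS THAN (DATE '2026-01-01')
);

-- PostgreSQL: partitions are separate child tables
CREATE TABLE events (
  event_id   BIGINT,
  event_date DATE
) PARTITION BY RANGE (event_date);

CREATE TABLE events_2024 PARTITION OF events
  FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
CREATE TABLE events_2025 PARTITION OF events
  FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
```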

4. Data Volume and Backfill Complexity

  • Large Oracle instances can hold terabytes of transactional data.
  • Traditional batch migration tools may take hours or even days to perform an initial load.
  • During this time, changes continue to occur in Oracle, leaving the target RDS instance out of sync unless change data capture (CDC) is applied.

5. Downtime Risk for Mission-Critical Systems

  • Enterprises cannot afford hours of downtime while migrating production workloads.
  • Without real-time replication, cutover typically requires a lengthy freeze window, causing business disruption.

The primary challenge is not just moving historical data into RDS PostgreSQL, but also keeping Oracle and RDS perfectly in sync until final cutover. Without CDC and schema-aware pipelines, enterprises risk downtime, data loss, and extended project timelines.

Traditional Migration Approaches (and Their Limitations)

Enterprises that want to move from Oracle to Amazon RDS for PostgreSQL usually start with one of the traditional migration paths. These methods can work, but they often involve long timelines, unexpected downtime, and ongoing maintenance challenges. Let’s review the most common approaches.

1. AWS Schema Conversion Tool (SCT) + AWS Database Migration Service (DMS)

  • How it works:
    • SCT scans Oracle schemas, procedures, and database objects, then attempts to convert them into PostgreSQL-compatible formats.
    • DMS handles the bulk data transfer and can apply ongoing replication through CDC.
  • Limitations:
    • Complex PL/SQL rarely translates cleanly. Large portions of procedural code still need manual rewriting.
    • DMS CDC is resource-heavy and brittle under high transaction volumes, often leading to lag or errors.
    • Full data loads for terabyte-scale databases can take days to complete, increasing cutover risk.
    • Debugging failures requires specialized AWS knowledge or support tickets, slowing projects.

2. Manual Export and Import

  • How it works: Teams export data from Oracle into CSV or SQL dump files and import them into PostgreSQL.
  • Limitations:
    • Works only for small, non-critical workloads.
    • Requires significant downtime windows — often hours or days.
    • Offers no change tracking: any new inserts or updates in Oracle after the export must be reapplied manually.
    • High risk of data inconsistency if the source continues to change during migration.

3. Custom ETL Pipelines

  • How it works: Engineers build scripts or use third-party ETL platforms to extract from Oracle, transform data, and load it into PostgreSQL.
  • Limitations:
    • High upfront investment in development and ongoing maintenance.
    • Typically designed for batch processing, not continuous streaming.
    • Failures are common when schemas evolve, requiring constant developer intervention.
    • Expensive to scale as enterprise data grows.

4. Third-Party Replication Tools

  • Examples: Quest SharePlex, Attunity (Qlik Replicate), HVR, Oracle GoldenGate.
  • Benefits: Mature tools with long histories of Oracle integration and near real-time replication support.
  • Limitations:
    • Licensing costs can run into six figures annually, undermining the savings from leaving Oracle.
    • Proprietary solutions add another vendor dependency.
    • Integration with AWS-native infrastructure like RDS can be less seamless than expected.

Where Traditional Approaches Fall Short

Across all these methods, three recurring issues make Oracle-to-RDS migrations difficult:

  1. Downtime risk: Most tools rely on batch transfers or fragile CDC, which means extended cutover windows for mission-critical apps.
  2. Complexity: Schema differences, PL/SQL refactoring, and Oracle-specific features require heavy manual effort.
  3. Fragility at scale: Traditional CDC pipelines struggle with schema evolution, terabyte-scale data, and the need for exactly-once consistency.

Traditional tools can move data from Oracle into RDS PostgreSQL, but they leave enterprises with downtime, complexity, and operational risk. Organizations need a more modern, real-time solution to achieve near-zero downtime and smooth cutovers — which is exactly what Estuary Flow delivers.

Talk to us about a faster, real-time alternative. Contact Us

Estuary Flow Approach: Oracle to Amazon RDS for PostgreSQL

Stream data from Oracle Database to Amazon RDS for PostgreSQL

Unlike traditional tools, Estuary Flow unifies backfill and real-time CDC in a single pipeline, making Oracle to RDS migrations faster, safer, and easier to manage.

How It Works

  1. Oracle Capture
    • Use Estuary’s Oracle connector with LogMiner to backfill historical data and capture changes in real time.
    • Works with container and non-container Oracle instances (11g and above).
  2. RDS PostgreSQL Materialization
    • Materialize the captured collections into Amazon RDS for PostgreSQL tables; Flow auto-creates target tables and keeps them continuously updated.
  3. Sync and Cutover
    • Flow keeps Oracle and RDS in sync with exactly-once guarantees.
    • When ready, pause briefly for final sync and switch your applications to RDS — with near-zero downtime.

Why Enterprises Choose Estuary Flow

  • Near-Zero Downtime: Continuous replication eliminates long cutover windows.
  • Unified Pipeline: No need to juggle SCT, DMS, or custom ETL.
  • Schema Evolution: Automatically adapts when Oracle schemas change.
  • Enterprise-Grade Security: Encryption in transit and at rest, BYOC/private deployment, secure networking (SSH, VPC peering, PrivateLink).
  • No-Code Setup: Pipelines are configured in minutes through the Estuary UI, without writing scripts or managing servers.

With Estuary Flow, enterprises can migrate from Oracle to Amazon RDS for PostgreSQL in real time, without downtime, brittle tooling, or hidden costs.

Join our Slack to get step-by-step guidance from Estuary engineers. Slack Invite

Oracle CDC Prerequisites: What and Why

Real-time migration from Oracle to Amazon RDS for PostgreSQL depends on Change Data Capture (CDC). Oracle records every transaction in redo logs, which are then archived. Estuary Flow uses LogMiner to read these logs and continuously sync changes into RDS PostgreSQL until you’re ready to cut over.

Before connecting Oracle to Estuary Flow, make sure the following prerequisites are in place:

  1. Enable Archive Logging and Set Retention
    • Archive logs must be enabled with sufficient retention (several days recommended) so Estuary can resume after interruptions without losing events.
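    • A hedged sketch covering both environments (the 168-hour retention value is illustrative; tune it to your recovery needs):

```sql
-- Self-managed Oracle: check the current mode, then enable ARCHIVELOG as SYSDBA
SELECT log_mode FROM v$database;
-- SHUTDOWN IMMEDIATE;
-- STARTUP MOUNT;
-- ALTER DATABASE ARCHIVELOG;
-- ALTER DATABASE OPEN;

-- Amazon RDS for Oracle: set archived log retention (in hours) via rdsadmin
BEGIN
  rdsadmin.rdsadmin_util.set_configuration(
    name  => 'archivelog retention hours',
    value => '168');  -- keep roughly 7 days of archive logs
END;
/
```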
  2. Create a Dedicated Capture User
    • Create a read-only user with CREATE SESSION and SELECT on required tables.
    • CDB/PDB: for CDB setups, the user name should begin with the c## prefix and specify CONTAINER=ALL for privileges.
  3. Grant LogMiner and Catalog Privileges
    • Provide the user with:
      • LOGMINING
      • SELECT_CATALOG_ROLE
      • EXECUTE_CATALOG_ROLE
      • SELECT on views like V$DATABASE and V$LOG
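    • A minimal sketch, assuming a multitenant (CDB) instance run as SYSDBA; the user name c##flow_capture, its password, and the app_schema.orders table are hypothetical:

```sql
CREATE USER c##flow_capture IDENTIFIED BY secret_password CONTAINER=ALL;

GRANT CREATE SESSION       TO c##flow_capture CONTAINER=ALL;
GRANT LOGMINING            TO c##flow_capture CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE  TO c##flow_capture CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO c##flow_capture CONTAINER=ALL;

-- V$ views are granted through their underlying V_$ objects
GRANT SELECT ON V_$DATABASE TO c##flow_capture CONTAINER=ALL;
GRANT SELECT ON V_$LOG      TO c##flow_capture CONTAINER=ALL;

-- Read access to each table you plan to capture
GRANT SELECT ON app_schema.orders TO c##flow_capture CONTAINER=ALL;
```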
  4. Create a Watermarks Table
    • Example:

```sql
CREATE TABLE FLOW_WATERMARKS (
  SLOT      VARCHAR(1000) PRIMARY KEY,
  WATERMARK VARCHAR(4000)
);
```

    • Grant INSERT and UPDATE permissions on this table to the capture user.
  5. Enable Supplemental Logging
    • Example command:

```sql
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

    • For Amazon RDS for Oracle, use the rdsadmin.rdsadmin_util.alter_supplemental_logging procedure instead.
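    • For example, a hedged sketch of that call:

```sql
-- Enable supplemental logging for all columns on Amazon RDS for Oracle
BEGIN
  rdsadmin.rdsadmin_util.alter_supplemental_logging(
    p_action => 'ADD',
    p_type   => 'ALL');
END;
/
```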
  6. Version and Platform Notes
    • Supported: Oracle 11g and above, container and non-container.
    • Limitation: Multi-tenant Oracle on Amazon RDS without root access cannot support LogMiner capture. Use a non-CDB or a CDB where root access is permitted.

Key takeaway: These prerequisites ensure Estuary Flow can capture all Oracle transactions safely and reliably, enabling real-time CDC and minimal downtime during migration.

Want to see Oracle CDC setup in action? Watch this demo walkthrough of Oracle LogMiner with Estuary Flow.

Step-by-Step Tutorial: Oracle to Amazon RDS for PostgreSQL

With Estuary Flow, you can migrate from Oracle to Amazon RDS for PostgreSQL in just a few guided steps. No coding, no manual replication, and no downtime-heavy cutovers.

Step 1: Configure the Oracle Source Connector

  1. If you don’t already have an Estuary account, register for free to get started.
  2. In the Estuary Flow web app, click + New Capture and choose Oracle Database (Real-time) as the connector.
Real-time CDC and batch Oracle source connectors
  3. Enter your Oracle details:
    • Address: host:port of your Oracle instance.
    • Database: SID or PDB name (example: ORCL).
    • User & Password: A dedicated read-only user with LogMiner permissions.
      Oracle source connector setup
  4. (Optional) Enable History Mode if you want to capture raw change events instead of final state only.
  5. Save and publish the capture.

👉 Flow automatically backfills your Oracle tables into collections and starts streaming new changes in real time using CDC.

Step 2: Set Up the PostgreSQL Materialization for Amazon RDS

  1. Click + New Materialization in the Estuary dashboard.
  2. Choose Amazon RDS for PostgreSQL as the destination connector.
    Amazon RDS for PostgreSQL materialization connector
  3. Enter your Amazon RDS for PostgreSQL connection details:
    • Address: RDS endpoint (host:port).
    • Database: name of the RDS database (example: postgres).
    • User & Password: credentials with table creation privileges.
    • Schema: defaults to public, but can be customized.
      Amazon RDS for Postgres destination setup
  4. (Optional) Enable Delta Updates for efficient row-level updates or Hard Deletes if Oracle deletes should also remove rows in RDS.
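If you prefer a dedicated database user for the materialization rather than the RDS master credentials, a minimal sketch on RDS PostgreSQL might look like this (the flow_materialize role name and the postgres/public targets are hypothetical):

```sql
-- Create a login role Flow can use to create and write tables
CREATE ROLE flow_materialize WITH LOGIN PASSWORD 'change-me';

-- Allow connecting and creating objects in the target database and schema
GRANT CONNECT, CREATE ON DATABASE postgres TO flow_materialize;
GRANT USAGE, CREATE ON SCHEMA public TO flow_materialize;
```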

Step 3: Bind the Source to the Destination

  1. Under Source Collections, select the Oracle capture you configured in Step 1.
  2. Assign each collection to a target table in RDS PostgreSQL.
    • Estuary auto-creates tables as needed.
  3. Click Publish to deploy the pipeline.

Once published, Estuary streams historical and real-time data directly from Oracle to RDS PostgreSQL.

Step 4: Monitor and Cut Over

  1. Use the Estuary dashboard to monitor pipeline activity and logs.
  2. Flow keeps both databases in sync with exactly-once guarantees.
  3. When ready, perform a brief cutover by pausing writes to Oracle and switching applications to RDS PostgreSQL.
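Before the final switch, a quick consistency spot check can add confidence; for example, comparing row counts for the same table on both sides (table names hypothetical):

```sql
-- Run on Oracle
SELECT COUNT(*) AS oracle_rows FROM app_schema.orders;

-- Run on RDS PostgreSQL
SELECT COUNT(*) AS postgres_rows FROM public.orders;
```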

Result: You now have a real-time, fully automated migration pipeline from Oracle to Amazon RDS for PostgreSQL, with minimal downtime and zero manual coding.

Start your first Oracle to RDS PostgreSQL pipeline now: Register. If you run into any issues during sync, join the Estuary Slack community to get help directly from Estuary engineers.

Conclusion & Next Steps

Migrating from Oracle to Amazon RDS for PostgreSQL is a strategic move for enterprises that want to reduce costs, escape vendor lock-in, and modernize their data infrastructure. Traditional tools can help, but they often introduce downtime, complexity, and maintenance overhead.

With Estuary Flow, you get a single, no-code pipeline that handles both historical backfill and real-time CDC, ensuring a smooth, near-zero downtime migration. Add in enterprise-grade security, schema evolution support, and exactly-once guarantees, and you have a solution built for mission-critical workloads.

Now is the best time to explore how Estuary can help you move beyond Oracle, modernize on AWS, and simplify operations.

Ready to Get Started?

Start streaming your data for free: Build a Pipeline.

FAQs

    Why migrate from Oracle to Amazon RDS for PostgreSQL?

    Enterprises migrate to RDS PostgreSQL to reduce licensing costs, eliminate vendor lock-in, and take advantage of AWS’s managed infrastructure with built-in scalability and high availability.

    How does Estuary Flow keep Oracle and Amazon RDS for PostgreSQL in sync?

    Estuary Flow combines historical backfill and real-time CDC in one pipeline, automatically handling schema evolution and ensuring exactly-once delivery. This allows Oracle and RDS to stay in sync until final cutover, with near-zero downtime.

    Is Estuary Flow secure enough for enterprise migrations?

    Yes. Estuary Flow provides encryption in transit and at rest, audit logs, private deployment options (BYOC or VPC), and secure networking (SSH, PrivateLink, VPC Peering), making it suitable for regulated industries.

    What happens if the Oracle schema changes during migration?

    Estuary Flow supports schema evolution and will automatically adjust pipelines to reflect new fields or modified structures, avoiding pipeline failures common with traditional tools.
