
Oracle to PostgreSQL Migration Guide: 3 Methods, Steps, and Best Practices

Learn how to migrate from Oracle to PostgreSQL with ease. Discover top tools, methods, detailed steps, and solutions to challenges for a smooth, cost-effective migration.


With data volumes growing across apps, customer interactions, and digital transactions, many teams are rethinking how they store and manage operational data.

Oracle Database is a long-time enterprise standard for high-scale transactional workloads, but its licensing costs and proprietary ecosystem can make it expensive and harder to adapt over time. PostgreSQL is a popular alternative because it’s open source, flexible, and widely supported across cloud providers.

In this guide, you’ll learn three practical ways to migrate from Oracle to PostgreSQL:

  • Method 1 (Automated, low downtime): CDC with Estuary for continuous sync during cutover
  • Method 2 (Manual, transitional access): FDW for temporary cross-database querying
  • Method 3 (Manual, batch migration): CSV export/import for one-time moves

By the end, you’ll know how to choose the right approach, prepare for migration, and avoid the most common issues that slow teams down.

Choose your migration approach

Not every Oracle-to-PostgreSQL migration needs the same method. Use this quick guide to pick the right path based on downtime tolerance and whether you need ongoing synchronization.

  • Need near-zero downtime + continuous sync (recommended for production cutovers):

Use CDC (Change Data Capture) with Estuary to stream inserts/updates/deletes from Oracle into PostgreSQL while you validate and plan cutover. 

  • One-time migration and downtime is acceptable:

Use CSV export/import for a simpler, batch-style move (best for smaller datasets or non-critical systems).

  • Need temporary cross-database querying (not a true migration):

Use Foreign Data Wrappers (FDW) to query Oracle from PostgreSQL during evaluation or phased rewrites.

Tip: If you’re migrating a live application, CDC is usually the safest route because it supports phased cutover with validation before you switch traffic.

Pre-migration checklist (do this before you move any data)

Oracle-to-PostgreSQL migrations fail most often because teams underestimate hidden dependencies. Before migrating, confirm:

Inventory what you’re migrating

  • Tables, indexes, constraints
  • Sequences/identity patterns
  • Views, triggers, stored procedures/packages
  • Partitioning strategy and large objects (CLOB/BLOB)
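
A quick way to build this inventory on the Oracle side is to query the data dictionary. This is a minimal sketch using standard Oracle dictionary views; adjust the schema scope (user_* vs all_*) for your environment:

```sql
-- Count objects by type in the current schema (Oracle)
SELECT object_type, COUNT(*) AS object_count
FROM   user_objects
GROUP  BY object_type
ORDER  BY object_type;

-- Find tables with LOB columns that may need special handling
SELECT table_name, column_name, data_type
FROM   user_tab_columns
WHERE  data_type IN ('CLOB', 'BLOB', 'NCLOB');
```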

Identify Oracle-specific dependencies

  • Oracle-specific SQL syntax and functions in the application layer
  • PL/SQL logic that must be rewritten for PostgreSQL

Choose a cutover strategy

  • Big bang: migrate once and switch everything at once (higher risk, simpler process)
  • Phased: keep Oracle + PostgreSQL in sync, validate, then switch gradually (lower risk)

Create a validation plan

  • Row counts per table
  • Checksum / hash checks for key tables
  • Sampling + reconciliation queries for business-critical entities
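
As a sketch, the first two checks can be expressed as plain queries. The table and column names here (orders, id, status) are placeholders; Oracle and PostgreSQL hash functions differ, so compare normalized, ordered values rather than assuming raw hashes match across engines:

```sql
-- Row count per table (run the same query on both sides)
SELECT COUNT(*) FROM orders;

-- PostgreSQL: deterministic checksum over key columns
SELECT md5(string_agg(id::text || '|' || status, ',' ORDER BY id))
FROM   orders;

-- Oracle counterpart using LISTAGG (only for tables small enough to aggregate)
SELECT STANDARD_HASH(
         LISTAGG(TO_CHAR(id) || '|' || status, ',') WITHIN GROUP (ORDER BY id),
         'MD5')
FROM   orders;
```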

Define rollback criteria

  • What would trigger rollback?
  • How long do you keep Oracle as the fallback system?

An Overview of Oracle

Oracle logo

Oracle Database is a relational database management system widely used for OLTP workloads, data warehousing, and enterprise analytics. It’s known for strong performance, reliability, and a mature ecosystem—especially in large, complex production environments.

Oracle follows a client/server architecture, where the database server manages storage and resources while client connections handle querying and transactions.

Oracle also supports Change Data Capture (CDC), which helps track and replicate inserts, updates, and deletes for continuous synchronization. To learn more, check out our Oracle CDC guide.

In Oracle, the database refers to the physical files that store data, while the instance refers to the memory structures and background processes (including the SGA) that manage those files.

Oracle introduced a multitenant architecture in Database 12c, and starting with Oracle Database 21c it is the only supported deployment model: a container database (CDB) hosts one or more pluggable databases (PDBs). This design helps teams consolidate databases and manage them more efficiently.

To handle large-scale workloads, Oracle supports techniques like partitioning and sharding, which help distribute data and traffic across nodes for better scalability.


An Overview of PostgreSQL

postgres logo

PostgreSQL is an open-source object-relational database known for strong performance, advanced SQL support, and flexibility. It supports a wide range of data types—including numeric, boolean, date/time, and JSON—making it a solid choice for modern application workloads.

Postgres also provides several indexing options (including B-tree, GIN, GiST, and BRIN) to help optimize query performance depending on your data and access patterns.
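
For example, a B-tree index covers ordinary equality and range lookups, while a GIN index is a common choice for containment queries on JSONB columns. A minimal sketch (table and column names are illustrative, and the payload column is assumed to be jsonb):

```sql
-- B-tree index for equality/range queries
CREATE INDEX idx_events_created_at ON events (created_at);

-- GIN index for containment queries on a JSONB column
CREATE INDEX idx_events_payload ON events USING gin (payload);

-- Example query the GIN index can serve
SELECT * FROM events WHERE payload @> '{"type": "signup"}';
```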

Here are some important PostgreSQL features:

  • Extensibility: PostgreSQL supports a broad ecosystem of extensions like PostGIS (spatial data) and tools for query and performance monitoring.
  • Concurrency control (MVCC): PostgreSQL uses Multiversion Concurrency Control to support concurrent reads and writes without sacrificing data integrity.
  • ACID compliance: PostgreSQL is ACID-compliant, which helps ensure reliable transactions and strong consistency—even during failures.
  • Security: PostgreSQL includes role-based access control, encryption options, and flexible authentication methods.

While PostgreSQL is popular for cost efficiency and flexibility, some teams evaluate alternatives like cloud data warehouses (for example, Snowflake) when their primary goal is large-scale analytics rather than OLTP workloads.

Why Should You Migrate from Oracle to PostgreSQL?

Migrating data from Oracle to PostgreSQL can be helpful for organizations that prioritize cost efficiency, ease of use, and better flexibility.

Here are some reasons why you should make the move from Oracle to PostgreSQL:

  • Flexibility: Oracle is a proprietary database and offers limited room for customization. While you can expand its capabilities with Oracle’s extensions, they are expensive.

On the other hand, PostgreSQL is open-source, and you can modify its source code. The wide array of extensions in Postgres allows you to expand its functionality considerably. This makes Postgres more flexible than Oracle.

  • Cost Efficiency: Oracle's licensing fees can be expensive and vary according to features and deployment options.

By contrast, PostgreSQL is open source, involves no licensing costs, and can be used freely for both commercial and individual purposes.

  • Multi-cloud Compatibility: You can deploy Oracle databases on cloud platforms, such as AWS, Azure, or GCP. However, deploying them on external cloud platforms is complex and expensive, mainly due to licensing, the need for specific configurations, and higher operational management overhead.

PostgreSQL offers multi-cloud support with greater flexibility. As an open-source database, you can easily deploy it on several cloud provider platforms, including AWS, GCP, and Azure. You need not worry about any licensing constraints as with Oracle.

Challenges in Oracle to PostgreSQL Migration

While migrating from Oracle to PostgreSQL offers cost and flexibility benefits, the process can be technically complex and operationally risky without the right preparation. Here are some of the most common challenges organizations face:

1. Data Type Mismatches

Oracle supports proprietary data types such as NUMBER, VARCHAR2, and BLOB, which don’t always have direct equivalents in PostgreSQL. Special handling or data type casting is often required during migration.

2. PL/SQL to PL/pgSQL Differences

Oracle and PostgreSQL use different procedural languages. Converting stored procedures, triggers, and functions from PL/SQL to PL/pgSQL can be time-consuming and may require rewriting logic and exception handling patterns.

3. Application Layer Dependencies

Many enterprise applications include embedded SQL queries and Oracle-specific syntax. These need to be identified and refactored to be compatible with PostgreSQL, which adds to the overall migration effort.

4. Schema Conversion and Constraints

Oracle allows for certain schema behaviors and constraints (like deferred constraints or unique index behavior) that work differently in PostgreSQL. Ensuring schema integrity during the switch requires careful mapping and testing.
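
PostgreSQL does support deferrable constraints, but they must be declared explicitly, so spell them out wherever the application relies on end-of-transaction checking. An illustrative sketch (table and constraint names are placeholders):

```sql
-- Constraint checked at commit time instead of per statement
ALTER TABLE accounts
  ADD CONSTRAINT accounts_code_uniq UNIQUE (code)
  DEFERRABLE INITIALLY DEFERRED;

-- Within a transaction, transient duplicates are now allowed
BEGIN;
UPDATE accounts SET code = 'TMP' WHERE code = 'A';
UPDATE accounts SET code = 'A'   WHERE code = 'B';
UPDATE accounts SET code = 'B'   WHERE code = 'TMP';
COMMIT;  -- uniqueness is verified here
```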

5. Data Volume and Downtime

Migrating large Oracle databases to PostgreSQL—especially without real-time sync—can lead to significant downtime. Batch exports or network latency can also delay cutover and risk data loss.

6. Security and Compliance Risks

Oracle often integrates tightly with enterprise identity providers and auditing tools. Ensuring that similar security policies and access controls are enforced in PostgreSQL can be a challenge, particularly in regulated industries.

7. Performance Tuning Gaps

PostgreSQL and Oracle have different default query optimizers, indexing strategies, and cache mechanisms. A well-optimized Oracle query might behave differently in PostgreSQL, requiring performance tuning post-migration.

Oracle to PostgreSQL data type mapping (common conversions + gotchas)

Oracle and PostgreSQL don’t map 1:1 for all data types. Use this table as a starting point:

| Oracle type | Common PostgreSQL equivalent | Gotchas to watch |
| --- | --- | --- |
| NUMBER(p,s) | numeric(p,s) | If precision/scale varies across rows, verify actual values before choosing constraints. |
| NUMBER (no p/s) | numeric | Often used flexibly in Oracle; in Postgres, decide whether bigint, integer, or numeric fits. |
| VARCHAR2(n) | varchar(n) or text | If you don’t need strict length enforcement, text is simpler. |
| CHAR(n) | char(n) | Watch for space-padding behavior differences. |
| DATE | timestamp (often) or date | Oracle DATE includes time-of-day; Postgres date does not. Use timestamp if you need time. |
| TIMESTAMP | timestamp / timestamptz | Decide whether you need timezone-aware storage (timestamptz). |
| CLOB | text | Large text values can be huge; test performance and indexing strategy. |
| BLOB | bytea | Confirm encoding/transfer method; large objects can impact replication and storage. |

Migration tip: Don’t just “convert types”; test your most important queries and indexes after migration. PostgreSQL query plans and indexing strategies can differ significantly.
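
Put together, a converted table definition might look like this. The Oracle DDL in the comments is illustrative; the PostgreSQL version below applies the mappings above:

```sql
-- Oracle (source), for reference:
-- CREATE TABLE customers (
--   id         NUMBER(10)    PRIMARY KEY,
--   name       VARCHAR2(100) NOT NULL,
--   notes      CLOB,
--   created_at DATE          DEFAULT SYSDATE
-- );

-- PostgreSQL (target)
CREATE TABLE customers (
  id         numeric(10)  PRIMARY KEY,
  name       varchar(100) NOT NULL,
  notes      text,
  created_at timestamp    DEFAULT now()  -- Oracle DATE carries time-of-day
);
```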

Now let’s walk through each method step-by-step.

Method 1: Oracle to PostgreSQL Migration Using Estuary (CDC)

Automated migration tools can reduce downtime, limit human error, and help teams keep Oracle and PostgreSQL in sync during a phased cutover. Estuary supports Oracle change data capture (CDC) with LogMiner, so inserts, updates, and deletes can be streamed into collections and then materialized into PostgreSQL.

Here are some key features of Estuary:

  • CDC for low-downtime cutovers: Stream inserts, updates, and deletes from Oracle so PostgreSQL stays current while you validate and plan the cutover.
  • Works across common network setups: Supports enterprise-friendly connectivity patterns, including private environments and controlled network access.
  • Built-in transformations (optional): Apply lightweight transformations during migration using derivations when you need to reshape or enrich data.

You can set up both capture and materialization from the UI without writing custom pipeline code.

Real-time vs Batch Oracle connectors (important):

When you search for Oracle in the Estuary UI, you’ll see Real-time and Batch options.

  • Real-time (CDC via LogMiner) is the default choice when you need continuous updates and a low-downtime cutover.
  • Batch is typically for cases where CDC is not possible (permissions constraints, operational restrictions, or snapshot-style requirements).

Let’s look into the details of how you can use Estuary to move your data from Oracle to PostgreSQL.

Prerequisites

  • An Estuary account.
  • Oracle Database 11g or above.
  • PostgreSQL database and user credentials. 
  • Network access from Estuary to Oracle and PostgreSQL (IP allowlist or SSH tunneling depending on your hosting setup). 
  • At least one Estuary collection (the capture will create collections when you select tables). 

Before you start: Oracle CDC setup checklist (do this first)

This is the most common reason Oracle CDC migrations fail: the connector can connect, but CDC cannot read what it needs.

Make sure you complete the OracleDB connector requirements, especially:

  • Create the watermarks table used by the connector.
  • Grant the required privileges for LogMiner and the relevant system views.
  • Enable supplemental logging (required for LogMiner-based CDC to reliably reconstruct row changes).
  • Confirm archive/redo log settings required for log mining.
  • If you use multitenant (CDB/PDB) or hosted Oracle variants (for example, RDS), follow the connector’s specific notes and constraints.
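
The supplemental logging and privilege steps are Oracle-side SQL. The statements below are a hedged sketch of their usual shape (the user name estuary_cdc and table name are illustrative); follow the Estuary OracleDB connector documentation for the exact grants and watermarks table it expects:

```sql
-- Enable minimal database-level supplemental logging (run as a DBA)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Per-table logging of all columns so LogMiner can reconstruct full rows
ALTER TABLE myschema.orders ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Typical grants for a CDC user
GRANT CREATE SESSION TO estuary_cdc;
GRANT SELECT ANY TRANSACTION TO estuary_cdc;
GRANT LOGMINING TO estuary_cdc;
GRANT SELECT ON V_$DATABASE TO estuary_cdc;
```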

Step 1: Set up Oracle Database as the source (capture)

  • Sign in to your Estuary account.
  • Select Sources from the left-side pane of the dashboard. You will be redirected to the Sources page.
oracle to postgres - estuary new capture page
  • Click the +NEW CAPTURE button to proceed to the Create Capture page.
  • Search for Oracle Database and choose Real-time (CDC) for this tutorial.
oracle to postgres - oracle search connector field
  • You will be redirected to the connector’s configuration page. Enter all the required fields, including:
    • Name: Provide a unique capture name.
    • Server Address: This is the host:port at which you can connect to your database.
    • User: Enter your Oracle database user name for authentication.
    • Password: Provide the password for the specified database user.
oracle to postgres - oracle create capture page
  • Then click Next and Save and publish.

After publish, Estuary will discover the available schemas/tables and begin capturing into collections once you select what to replicate. 

Common gotcha: If discovery or CDC fails after connecting, it is almost always missing grants, missing watermarks table, or supplemental logging not enabled. That is why the checklist above matters.

Step 2: Set up PostgreSQL as the destination (materialization)

  • After your capture is created, click Materialize collections on the success prompt, or go to Destinations and click +New Materialization.
  • Search for PostgreSQL and select the Materialization connector.
oracle to postgres - postgresql search connector page
  • On the configuration page, fill in:
    • Name: A unique materialization name
    • Address: host:port (5432 is default)
    • User / Password: Postgres credentials
Oracle to PostgreSQL - postgresql Materialization page

Then:

  • In Source collections, ensure your Oracle capture collections are selected (or use Source from capture).
  • Click Next and Save and publish.

Important: how tables are created in PostgreSQL

The PostgreSQL materialization connector creates the destination tables based on your bindings and the collection schemas. Tables created manually in advance are not supported, so plan to let the connector create new tables (and then cut over your application or views). 

Ready to simplify your Oracle to PostgreSQL migration? Get started with Estuary today, and also join the Slack community. It’s the easiest way to get support!

Method 2: Oracle to PostgreSQL Migration Using Foreign Data Wrappers

A Foreign Data Wrapper (FDW) lets PostgreSQL query data that lives in an external system—like Oracle—without moving it first. In practice, FDW is best for temporary access, reporting, or a phased rewrite (for example, when you want Postgres apps to read Oracle tables during a transition). It’s usually not the best option for a full migration, unless you also copy data into native PostgreSQL tables.

To connect PostgreSQL to Oracle, you can use oracle_fdw, a PostgreSQL extension that enables Oracle tables to appear as foreign tables in Postgres.

When FDW makes sense

  • You want Postgres to query Oracle data during evaluation.
  • You’re doing a phased cutover and need temporary cross-db reads.
  • You want to validate results in Postgres before moving data permanently.

Step 1: Install and enable oracle_fdw

Log in to PostgreSQL and enable the extension:

```sql
CREATE EXTENSION oracle_fdw;
```

You’ll also need the Oracle client libraries installed and configured on the PostgreSQL host.

Step 2: Create the Oracle server connection

Create a foreign server pointing to Oracle:

```sql
CREATE SERVER oradb
  FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//dbserver.mydomain.com:1521/ORADB');
```

Step 3: Create a user mapping

Map your Postgres user to an Oracle user:

```sql
CREATE USER MAPPING FOR pguser
  SERVER oradb
  OPTIONS (user 'orauser', password 'orapwd');
```

Step 4: Create a foreign table in PostgreSQL

Define a foreign table that references the Oracle table:

```sql
CREATE FOREIGN TABLE oratab (
  id       integer OPTIONS (key 'true') NOT NULL,
  text     character varying(30),
  floating double precision NOT NULL
) SERVER oradb
  OPTIONS (schema 'ORAUSER', table 'ORATAB');
```

Now you can query it like a normal Postgres table:

```sql
SELECT * FROM oratab;
```

If you want FDW to support an actual migration

FDW alone doesn’t “move” data. To migrate, you typically copy data into local Postgres tables:

```sql
CREATE TABLE oratab_local AS SELECT * FROM oratab;
```

You can then point applications to oratab_local while keeping FDW for comparison and validation.
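
Before pointing applications at the local copy, you can use the foreign table to validate the copy directly from Postgres. A simple sketch based on the oratab example above:

```sql
-- Row counts should match
SELECT (SELECT COUNT(*) FROM oratab)       AS oracle_rows,
       (SELECT COUNT(*) FROM oratab_local) AS local_rows;

-- Rows present on one side but not the other (should return zero rows)
(SELECT * FROM oratab       EXCEPT SELECT * FROM oratab_local)
UNION ALL
(SELECT * FROM oratab_local EXCEPT SELECT * FROM oratab);
```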

Limitations of FDW for Oracle to PostgreSQL migration

  • Not real-time sync: FDW doesn’t provide CDC-style replication by itself.
  • Performance can vary: Complex queries may run slowly because data is still fetched from Oracle.
  • Operational complexity: Requires Oracle client setup, connection management, and performance tuning.

Method 3: Oracle to PostgreSQL Migration Using CSV Method

A CSV-based migration is a simple one-time, batch-style way to move data from Oracle to PostgreSQL. It works best when you can tolerate downtime and don’t need ongoing synchronization (for example, smaller databases, non-critical systems, or initial proof-of-concept migrations).

When to use the CSV method

  • You’re doing a one-time migration and downtime is acceptable.
  • Your dataset is small to medium, or you’re migrating only a few tables.
  • You don’t need real-time updates during cutover.

Step 1: Export Oracle tables to CSV (SQL Developer)

Oracle SQL Developer is the most common option for exporting tables quickly.

  1. Open Oracle SQL Developer and connect to your Oracle database.
  2. In the schema browser, right-click the table you want to export and select Export.
  3. In the Export Wizard:
    • Set Format to CSV
    • Set the output file path
    • Select the columns you want to export
  4. Click Finish to generate the CSV.
Oracle SQL Developer export wizard showing CSV format selection and export file path
In Oracle SQL Developer, set Format to CSV and choose an output path in the Export Wizard.
Oracle SQL Developer export wizard showing column selection for CSV export
Select the columns to export, then continue to generate the CSV file.

Tip: If you have many tables, export them in a consistent naming format (for example, schema_table.csv) to keep imports organized.

Alternative: Export using SQLcl (CLI)

If you prefer command-line export or want to script repeatable exports, SQLcl works well:

```sql
SET SQLFORMAT csv
SET TERMOUT OFF
SET FEEDBACK OFF
SPOOL /path/to/table1.csv
SELECT * FROM table1;
SPOOL OFF
```
oracle to postgres - SQLcl Method
SQLcl can export query results to CSV for repeatable, scriptable migrations.

Step 2: Import CSV into PostgreSQL using COPY

Create the destination table in Postgres and then load the CSV file using COPY:

```sql
COPY table_name (column1, column2, column3)
FROM '/file_path/file_name.csv'
DELIMITER ',' CSV HEADER;
```

Common import tips:

  • Ensure the Postgres table column order matches the CSV column order.
  • Watch for quoting/delimiter issues (especially if text fields contain commas).
  • If the CSV contains special characters or newlines, test with a small subset first.
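
If the Postgres server can’t read the file directly (managed Postgres, filesystem permissions), psql’s client-side \copy reads the file from your machine instead; explicit CSV options also make quoting and NULL handling predictable. A sketch using the same placeholder names as above:

```sql
-- Server-side COPY with explicit CSV options
COPY table_name (column1, column2, column3)
FROM '/file_path/file_name.csv'
WITH (FORMAT csv, HEADER true, NULL '');

-- Client-side alternative from psql (file lives on your workstation)
\copy table_name FROM 'file_name.csv' WITH (FORMAT csv, HEADER true)
```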

Limitations of the CSV method

  • No real-time sync: Changes in Oracle after export won’t be reflected unless you export again.
  • Downtime risk: Large datasets can take time to export/import, increasing cutover downtime.
  • Manual at scale: You must repeat this per table and handle schema + type differences yourself.

If you need minimal downtime or want Oracle and PostgreSQL to stay in sync during migration, a CDC-based approach (like Estuary) is usually a better fit.


Summing It Up

Oracle and PostgreSQL are two high-performing databases with distinct strengths. Migrating from Oracle to PostgreSQL, however, can bring greater flexibility and significant cost savings.

For an Oracle to PostgreSQL data migration, you can use a manual approach, such as data transfer via a foreign data wrapper or export/import via CSV. However, both techniques come with challenges, including a lack of real-time sync, significant time and effort, and security concerns.

The alternative is an automated approach using Oracle to PostgreSQL migration tools like Estuary. These tools offer pre-built connectors that let you transfer data quickly and in real time, improving the efficiency of your organization’s data workflows.

Sign up for Estuary today to build real-time data pipelines for effective data analytics!


FAQs

    What is the best way to migrate from Oracle to PostgreSQL?

    The best approach depends on your technical requirements, data volume, and downtime tolerance. For real-time and low-latency needs, tools like Estuary provide an automated, CDC-based migration. For smaller or less time-sensitive projects, manual methods like Foreign Data Wrappers (FDW) or CSV exports can work.

    Can I migrate from Oracle to PostgreSQL with zero downtime?

    Yes, using Change Data Capture (CDC)-based tools like Estuary allows for real-time syncing of Oracle to PostgreSQL, enabling near-zero-downtime migration. This is especially useful for production environments that cannot afford service interruptions.

    How long does an Oracle to PostgreSQL migration take?

    The time required depends on the dataset size, chosen method (manual or automated), network latency, and complexity of the schema. Manual methods may take days or weeks, while real-time migration tools can sync large datasets incrementally within hours.


About the author

Jeffrey Richman

With over 15 years in data engineering, Jeffrey is a seasoned expert in driving growth for early-stage data companies, focusing on strategies that attract customers and users. His extensive writing provides insights to help companies scale efficiently and effectively in an evolving data landscape.

