
Introduction
If you want to migrate PostgreSQL to SQL Server, you need to do more than export a few tables. A complete migration includes converting the schema, moving data, updating applications, and planning a clean cutover. You can do this with Microsoft SQL Server Migration Assistant (SSMA), manual export and import using CSV plus SQL Server Management Studio, or a right-time data platform such as Estuary.
For small databases with flexible downtime windows, a simple one-time offline migration often works. For more complex schemas, SSMA helps assess compatibility, convert objects, and copy data. When you need minimal downtime or want PostgreSQL and SQL Server to run in parallel for a while, a right-time platform such as Estuary uses change data capture to stream changes from PostgreSQL into SQL Server on your schedule.
This guide gives you a concise playbook that works for all three options, then shows how each method looks in practice so you can choose the approach that matches your data size, complexity, and uptime requirements.
At a high level, the way to migrate PostgreSQL to SQL Server is to assess your current database, create an equivalent schema in SQL Server, move your data with a tool such as SQL Server Migration Assistant or CSV plus SQL Server Management Studio, then keep systems in sync or run a planned cutover. You finish by switching applications over to SQL Server and validating that data, queries, and performance meet your requirements. The rest of this guide follows that sequence and shows where a right-time platform such as Estuary fits when you want low downtime and continuous sync.
Key takeaways
- A real PostgreSQL to SQL Server migration includes schema conversion, data movement, application changes, cutover planning, and validation, not just copying tables.
- You have three main paths: SQL Server Migration Assistant, CSV plus SQL Server Management Studio, or a right-time data platform such as Estuary.
- SSMA is ideal when you want Microsoft-native tooling to assess and convert schemas for small to medium databases.
- CSV plus SQL Server Management Studio works best for simple one-time offline migrations where downtime is acceptable.
- Estuary is the best fit when you need low downtime or ongoing sync, because it uses change data capture and streaming to move data at the right time for your workload, from sub-second streaming to scheduled batch windows.
Why teams move from PostgreSQL to SQL Server
PostgreSQL and SQL Server are both powerful relational databases, but they often sit inside very different ecosystems. In practice, teams usually move from PostgreSQL to SQL Server for business and platform reasons rather than pure technical performance.
Common motivations include:
- Standardizing on the Microsoft stack: Your organization may already rely heavily on Windows Server, Active Directory, and other Microsoft tools. Running SQL Server alongside that stack can simplify operations, support, and licensing.
- Existing SQL Server enterprise agreements: Many enterprises already pay for SQL Server through volume or enterprise licensing. Moving PostgreSQL workloads into SQL Server can help consolidate databases under those licenses and reduce the number of technologies to manage.
- Closer integration with analytics tools: SQL Server works tightly with Power BI, Excel, Azure Synapse, and other Microsoft analytics and reporting tools. If your analytics and BI teams live in those tools, putting data in SQL Server can reduce friction.
- Cloud or infrastructure strategy: Organizations consolidating workloads into Azure or a Microsoft-focused cloud strategy often prefer SQL Server to keep networking, security, and monitoring consistent.
- Operational maturity and support expectations: Some teams prefer vendor-backed support and the long history of SQL Server expertise in the market, from DBAs, consultancies, and managed service providers.
The important point is that a PostgreSQL to SQL Server migration is usually about aligning with a broader platform direction. That also means you cannot treat it as a simple “export and import” exercise.
PostgreSQL to SQL Server migration strategies and tools
Whatever your reason for migrating, the work always covers more than just data export and import. You must translate PostgreSQL features to SQL Server, move data safely, and keep applications working.
Three migration strategies
- Offline one-time migration - Stop writes to PostgreSQL, copy data, point applications to SQL Server, and accept a maintenance window. This is the simplest option and works well for small databases where downtime is acceptable.
- Phased migration - Use tools to preload schema and data into SQL Server, test applications against the new database, then perform a final sync and cutover during a shorter maintenance window. This reduces risk at the cost of more preparation.
- Right-time migration - Create the schema in SQL Server, perform an initial bulk load, then keep PostgreSQL and SQL Server in sync using change data capture. After a testing period, cut over during a very short window. This is the preferred strategy when downtime must be tightly controlled.
Tool options at a glance
You can implement these strategies with three primary choices:
- SQL Server Migration Assistant for PostgreSQL (SSMA)
Microsoft’s free utility that:
- Assesses PostgreSQL schemas for compatibility.
- Converts supported objects to SQL Server.
- Helps migrate data into the target database.
Best for teams that prefer Microsoft tooling and want a guided conversion process.
- CSV plus SQL Server Management Studio (SSMS)
Manual export and import where you:
- Export tables from PostgreSQL as CSV files.
- Create the target schema in SQL Server.
- Use the SSMS Import and Export Wizard to load the CSV data.
This is simple and transparent, and suits small schemas and one-time offline migrations.
- Right-time replication with Estuary
Estuary is a right-time data platform that connects to PostgreSQL with change data capture and materializes changes into SQL Server. Right-time means you choose when data moves, from sub-second streaming to near-real-time to scheduled batch. Estuary is the best choice when you want:
- Minimal downtime and continuous sync during testing.
- A managed way to handle change data capture and streaming without custom scripts.
- Pipelines you can reuse for analytics and other downstream systems after migration.
You can also combine these options, for example using SSMA for an initial schema and data load, then Estuary for ongoing right-time replication and a safe cutover.
Next, we will walk through a step-by-step migration playbook that applies no matter which tool you choose. After that, we will come back to each method and show what it looks like in practice, including a detailed right-time PostgreSQL to SQL Server pipeline with Estuary.
Seven step PostgreSQL to SQL Server migration playbook
This playbook is the backbone of your PostgreSQL to SQL Server migration. The tools you choose will change some details, but the overall flow stays the same and applies no matter which approach you use.
- Assess your database
Inventory schemas, tables, data sizes, sequences, views, functions, and triggers. Document which objects are in use and how much downtime the business can tolerate.
- Choose an approach and tools
Map your requirements to SSMA, CSV plus SSMS, Estuary, or a combination. Consider data size, schema complexity, and whether you want an offline, phased, or right-time migration.
- Create the schema in SQL Server
Use SSMA or manual scripts to create equivalent objects in SQL Server. Handle important mappings such as serial to identity, boolean to bit, and jsonb to nvarchar. Recreate keys, constraints, and indexes.
- Move the data
For one time migrations, use SSMA or CSV plus SSMS to bulk load data. For large or active databases, perform an initial backfill, then add continuous sync for changes that happen after the backfill.
- Keep systems in sync (optional but recommended)
If you are not doing a strict offline cut, use a right-time platform such as Estuary to capture inserts, updates, and deletes from PostgreSQL and apply them to SQL Server until both are aligned.
- Cut over applications
Follow a runbook for switching connection strings from PostgreSQL to SQL Server (see the example after this playbook), deploying any SQL Server-specific code changes, running smoke tests, and having a clear rollback plan in case of problems.
- Validate and tune
Compare row counts, run critical business queries, and ensure results match expectations. Tune indexes and queries on SQL Server, confirm backups and monitoring are in place, and decide the long term role of the old PostgreSQL database.
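For step 6, the core change is often just the connection string. A hypothetical before and after is shown below; the hosts, database name, and credentials are placeholders, and the exact format varies by driver.

```
# Before cutover: PostgreSQL (libpq-style URL)
postgresql://app_user:secret@pg.example.com:5432/appdb

# After cutover: SQL Server (ADO.NET-style connection string)
Server=sql.example.com,1433;Database=appdb;User Id=app_user;Password=secret;Encrypt=True;
```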
PostgreSQL to SQL Server differences cheat sheet
Getting a few key differences right will prevent subtle bugs.
Data types and identity
- serial and bigserial in PostgreSQL → INT IDENTITY or BIGINT IDENTITY in SQL Server, with correct seed values.
- boolean in PostgreSQL (true, false) → BIT in SQL Server (1, 0).
- text in PostgreSQL → VARCHAR(MAX) or NVARCHAR(MAX) in SQL Server.
- json and jsonb in PostgreSQL → NVARCHAR(MAX) in SQL Server, using SQL Server JSON functions for querying.
- timestamp with time zone in PostgreSQL → usually DATETIMEOFFSET in SQL Server, or DATETIME2 with a clear UTC convention.
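To make these mappings concrete, here is a hypothetical orders table translated from PostgreSQL DDL to T-SQL. The table and column names are illustrative; adjust the types to your own data.

```sql
-- PostgreSQL original
CREATE TABLE orders (
    id         bigserial PRIMARY KEY,
    is_paid    boolean NOT NULL DEFAULT false,
    notes      text,
    payload    jsonb,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- SQL Server equivalent
CREATE TABLE dbo.orders (
    id         BIGINT IDENTITY(1,1) PRIMARY KEY,
    is_paid    BIT NOT NULL DEFAULT 0,
    notes      NVARCHAR(MAX),
    payload    NVARCHAR(MAX),  -- query with JSON_VALUE / OPENJSON
    created_at DATETIMEOFFSET NOT NULL DEFAULT SYSDATETIMEOFFSET()
);
```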
Sequences and identity columns
- PostgreSQL uses separate sequence objects and default expressions such as nextval.
- SQL Server typically uses identity columns or sequence objects with NEXT VALUE FOR.
- When migrating, either:
- Convert sequence backed primary keys to identity columns, or
- Use SQL Server sequences if you must share a generator across tables.
Ensure identity or sequence values are set above the current maximum to avoid key collisions.
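For example, a minimal sketch of aligning generators after the load, assuming the dbo.orders table above and a hypothetical sequence dbo.order_id_seq:

```sql
-- Find the highest migrated key; suppose it returns 100000
SELECT MAX(id) FROM dbo.orders;

-- Identity column: reseed so the next insert gets 100001
DBCC CHECKIDENT ('dbo.orders', RESEED, 100000);

-- Sequence object: restart above the migrated maximum
ALTER SEQUENCE dbo.order_id_seq RESTART WITH 100001;
```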
Constraints and indexes
- Primary keys and foreign keys map cleanly, but confirm cascading rules are preserved.
- PostgreSQL partial indexes map to SQL Server filtered indexes. For example, indexes on active rows only become filtered indexes with a WHERE active = 1 condition.
- PostgreSQL full text search does not migrate directly. If you use it, plan to enable SQL Server full text indexing and adjust queries.
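For instance, a partial index on active rows translates roughly like this (index and column names are illustrative):

```sql
-- PostgreSQL partial index
CREATE INDEX idx_orders_active ON orders (customer_id) WHERE active;

-- SQL Server filtered index (the boolean column becomes BIT)
CREATE INDEX idx_orders_active ON dbo.orders (customer_id) WHERE active = 1;
```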
Functions, triggers, and views
- Functions and triggers written in PL/pgSQL must be rewritten in T-SQL or moved into application code. Complex logic rarely ports automatically.
- Regular views generally translate well with minor syntax adjustments.
- PostgreSQL materialized views may need to become indexed views or scheduled ETL into tables, depending on your refresh needs.
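As a small example of the kind of rewrite involved, a PL/pgSQL trigger that stamps updated_at on every change might become a T-SQL trigger like the sketch below (the table and column names are assumptions):

```sql
CREATE TRIGGER trg_orders_touch ON dbo.orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- "inserted" holds the new row versions for this statement
    UPDATE o
    SET updated_at = SYSUTCDATETIME()
    FROM dbo.orders AS o
    INNER JOIN inserted AS i ON o.id = i.id;
END;
```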
Keep this cheat sheet next to you while you convert schemas or assess SSMA’s mapping suggestions.
PostgreSQL to SQL Server migration methods in practice
Method 1: SQL Server Migration Assistant for PostgreSQL
SQL Server Migration Assistant is Microsoft’s tool for assessing and converting PostgreSQL databases. Use it when you want a guided project style migration using familiar SQL Server tooling.
Typical workflow:
- Install SSMA and create a new project that targets your SQL Server.
- Connect SSMA to PostgreSQL and run an assessment of the schema.
- Review the report, fix or accept mappings, then have SSMA create the translated schema in SQL Server.
- Use SSMA to migrate table data from PostgreSQL into the new SQL Server database, then validate row counts and sample data.
SSMA does a good job with common tables, constraints, and indexes. It is less automatic for complex functions, triggers, and advanced PostgreSQL features, which often need manual redesign. For large, constantly changing databases, SSMA works best when combined with a continuous sync approach for the final cutover.
Method 2: CSV plus SQL Server Management Studio
Manual export and import is a simple way to move smaller or straightforward databases.
Basic steps:
- Prepare the schema in SQL Server
Create the database, schemas, and tables using the mappings in the cheat sheet. Set primary keys, foreign keys, and identity columns.
- Export data from PostgreSQL to CSV
Use COPY or similar commands to write each table to a CSV file; a sketch follows these steps. Pay attention to delimiters, null representation, and encoding.
- Import CSV files into SQL Server
In SQL Server Management Studio, use the Import and Export Wizard to load each CSV into the matching table. Map columns carefully and handle any data type warnings.
- Validate
Compare row counts and spot check key tables and columns for correctness.
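A minimal sketch of the export and import, assuming a table named orders; the file paths, null handling, and line endings are assumptions you should adapt:

```sql
-- In psql: export one table to CSV with a header row
-- (in CSV format, empty unquoted fields represent NULL)
\copy public.orders TO 'orders.csv' WITH (FORMAT csv, HEADER true)

-- In SQL Server (2017+): BULK INSERT as a scriptable alternative to the wizard;
-- KEEPNULLS loads empty fields as NULL instead of column defaults
BULK INSERT dbo.orders
FROM 'C:\migration\orders.csv'
WITH (FORMAT = 'CSV', FIRSTROW = 2, KEEPNULLS,
      FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a');
```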
Limitations:
- No automatic schema conversion or compatibility assessment.
- No ongoing sync, so you must plan a downtime window while you migrate and cut over.
- Manual effort grows quickly as the number of tables and the data volume increase.
For small databases and one time offline migrations, this method is still perfectly acceptable.
Method 3: Right-time PostgreSQL to SQL Server migration with Estuary
Now that you have seen the general migration playbook and the SSMA and CSV options, this section walks through the Estuary method in more detail.
Estuary is a right-time data platform that uses change data capture (CDC) to continuously replicate changes from PostgreSQL into SQL Server. Right-time means you choose when data moves, from sub-second streaming to near-real-time to scheduled batch windows, depending on your workload and cost requirements.
Under the hood, Estuary uses:
- A PostgreSQL capture connector that reads from the database’s write-ahead log using logical replication and writes changes into Flow collections.
- A SQL Server materialization connector that turns those collections into tables in a Microsoft SQL Server database.
Prerequisites
Before you set up the pipeline in Estuary, make sure both ends are ready.
PostgreSQL
At a high level you need:
- PostgreSQL 10.0 or later, on a supported platform (self-hosted, RDS, Aurora, Cloud SQL, Azure Database for PostgreSQL).
- Logical replication enabled (wal_level = logical).
- A user with the REPLICATION attribute and permission to read the tables you want to capture.
- A publication that includes the tables you want to capture.
- A watermarks table such as public.flow_watermarks for backfills, unless you are using read-only capture mode.
Estuary’s docs walk through the exact SQL for different hosting types (self-hosted, RDS, Aurora, Cloud SQL, Azure) if you need step-by-step setup for those environments.
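For a self-hosted instance, the setup is roughly the sketch below. The user name, publication name, and password are illustrative, and managed platforms such as RDS or Cloud SQL differ, so follow Estuary’s docs for your environment.

```sql
-- Ensure logical replication is on (requires a server restart)
ALTER SYSTEM SET wal_level = logical;

-- Replication user for the capture connector
CREATE USER flow_capture WITH REPLICATION PASSWORD 'secret';
GRANT pg_read_all_data TO flow_capture;  -- PostgreSQL 14+; older versions need per-schema grants

-- Watermarks table used during backfills
CREATE TABLE IF NOT EXISTS public.flow_watermarks (slot TEXT PRIMARY KEY, watermark TEXT);
GRANT ALL PRIVILEGES ON TABLE public.flow_watermarks TO flow_capture;

-- Publication covering the tables to capture
CREATE PUBLICATION flow_publication FOR ALL TABLES;
```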
SQL Server
You’ll need:
- A SQL Server 2017 or later instance (self-hosted, Azure SQL, RDS for SQL Server, etc.).
- A database user that can connect to the target database and create tables.
- Network access from Estuary to SQL Server, either via allowlisted IP addresses or SSH tunneling, depending on your security model.
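A minimal sketch of the target-side user, assuming the hypothetical login name flow_materialize and db_owner for simplicity; a narrower role needs at least CREATE TABLE plus INSERT, UPDATE, and DELETE on the target schema.

```sql
CREATE LOGIN flow_materialize WITH PASSWORD = 'secret';
GO
USE appdb;  -- your target database
CREATE USER flow_materialize FOR LOGIN flow_materialize;
ALTER ROLE db_owner ADD MEMBER flow_materialize;
```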
Once those basic requirements are in place, you can build the pipeline in the Estuary UI.
Step 1: Capture changes from PostgreSQL
- Sign in to Estuary
- Create a free Estuary account or log in to your existing one.
- Create a new PostgreSQL capture
- From the dashboard, go to Sources and click + New Capture.
- Search for PostgreSQL in the connector list and select the PostgreSQL capture connector.
- Configure the PostgreSQL endpoint
On the PostgreSQL Create Capture page, fill in:
- Name: a unique name for this capture.
- Address: the PostgreSQL host and port (for example, mydb.example.com:5432).
- Database: the logical database name to capture from.
- User and Password: the replication user you prepared.
- If needed, expand advanced options to set things like publication name, replication slot name, watermarks table, SSL mode, or to skip backfills for very large tables.
- Select tables to capture
- Run discovery if prompted so Estuary can list schemas and tables.
- Choose the tables or schemas you want to capture into Flow collections.
- Save and publish the capture
- Click Next and review the bindings (table to collection mappings).
- Click Save and publish to start the capture. Estuary will backfill existing rows by default, then switch to streaming CDC events from PostgreSQL.
Step 2: Materialize collections into SQL Server
- Create a new SQL Server materialization
- From the dashboard, go to Destinations and click + New Materialization.
- Search for SQL Server in the connector list and select the Microsoft SQL Server materialization connector.
- Configure the SQL Server endpoint
On the SQL Server Create Materialization page, provide:
- Name: a unique name for this materialization.
- Address: the SQL Server host and port (for example, sql.example.com:1433).
- Database: the name of the database where tables should be created.
- User and Password: the user you created with permissions to create and update tables.
- Map collections to tables
- In the Source collections section, select the collections that come from your PostgreSQL capture.
- For each collection, confirm or adjust the target table name and options such as whether to use delta updates.
- Save and publish the materialization
- Click Next to review your mappings.
- Click Save and publish. Estuary will create the target tables if they do not exist and begin applying changes from the collections into SQL Server.
Step 3: Run migration, test, and cut over
Once both ends are configured and published, you have a live right-time pipeline:
- Initial backfill and CDC
- Estuary backfills existing data from PostgreSQL into your collections and SQL Server tables.
- After backfill, the PostgreSQL connector continues streaming inserts, updates, and deletes from the write-ahead log, and the SQL Server materialization keeps the destination tables updated.
- Monitor and validate
- Use Estuary’s UI to monitor capture and materialization health, lag, and error status.
- In SQL Server, run row count checks and sample queries to validate that data matches PostgreSQL (see the sample checks after this list).
- Test in parallel
- Keep PostgreSQL as the system of record while SQL Server stays closely synchronized.
- Point staging or test environments at SQL Server to exercise application behavior and performance without impacting production.
- Cut over with a short maintenance window
- When you are ready, schedule a small window.
- Pause writes to PostgreSQL or put the application into maintenance mode.
- Let Estuary flush any remaining changes so SQL Server is fully caught up.
- Update application connection strings to use SQL Server and bring the system back online.
- Keep the pipeline running if needed
- After cutover, you can keep Estuary running to continue feeding SQL Server from PostgreSQL or other sources, or to fan the same collections out to additional destinations such as warehouses or analytics systems.
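For the validation step, simple row-count comparisons go a long way. A sketch, assuming an orders table on both sides:

```sql
-- PostgreSQL side
SELECT count(*) FROM public.orders;

-- SQL Server side
SELECT COUNT(*) FROM dbo.orders;

-- Or a per-table summary on SQL Server to compare in bulk
SELECT t.name, SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.partitions AS p
  ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY t.name
ORDER BY t.name;
```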
This approach lets you treat “migrate PostgreSQL to SQL Server” as a right-time replication problem instead of a one-time script. You get a low-downtime migration path, continuous validation, and a reusable pipeline you can keep using long after the cutover.
Summary
In summary, the way to migrate PostgreSQL to SQL Server is to follow a simple sequence: assess your current database, choose the right tools, create the target schema, move the data, keep systems in sync if needed, cut over with a runbook, and validate the result.
SQL Server Migration Assistant and CSV plus SQL Server Management Studio are good options for traditional one time migrations, while Estuary provides a right time path that keeps PostgreSQL and SQL Server synchronized and reduces downtime risk.
- Start a right-time migration with Estuary
Sign up for Estuary or request a demo to see how PostgreSQL to SQL Server replication works in practice and test it with your own schema and data.
- Explore related migration guides
Continue with our other step-by-step migration content, such as moving PostgreSQL to Snowflake or other destinations, and learn how to reuse the same right-time pipelines across your data stack.
FAQs
What is the best way to migrate Postgres to SQL Server with minimal downtime?
Use a change data capture pipeline such as Estuary: backfill the existing data, stream ongoing changes so SQL Server stays current, then cut over during a short maintenance window once the destination is fully caught up.
Can I keep PostgreSQL and SQL Server in sync during the migration?
Yes. A CDC-based right-time platform captures inserts, updates, and deletes from PostgreSQL and applies them to SQL Server continuously, so you can test against SQL Server while PostgreSQL remains the system of record.

About the author
With over 15 years in data engineering, the author is a seasoned expert in driving growth for early-stage data companies, focusing on strategies that attract customers and users. Their writing provides insights that help companies scale efficiently and effectively in an evolving data landscape.