You can create a data migration task that migrates data from a PostgreSQL database to an OceanBase database in MySQL or Oracle compatible mode. This task supports schema migration, full data migration, and incremental synchronization. It seamlessly transfers the existing business data and incremental data of the source database to the target database.
Notice
If a data migration task remains inactive for a long time (in the Failed, Paused, or Completed state), it may not be recoverable due to factors such as the retention period of incremental logs. The data migration service automatically releases tasks that have been inactive for more than 7 days to reclaim resources. We recommend that you configure alerts for your tasks and promptly handle any task exceptions.
Prerequisites
You have created a source database instance.
You have created an instance and a tenant for the target OceanBase Database. For more information, see Create an instance and Create a tenant.
You have created dedicated database users for data migration in both the source and target databases and granted them the required privileges. For more information, see User privileges.
If you want to perform incremental synchronization, complete the following prerequisite operations:
During incremental synchronization, the data transmission service does not automatically synchronize DDL statements. If the tables to be migrated require DDL changes, execute the DDL statements manually on the target database first, and then on the source PostgreSQL instance. To ensure that incremental DML operations performed after the DDL statements can be correctly parsed, create the corresponding triggers and record the DDL statements in a table. For more information, see Create a trigger.
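As an illustration of the trigger-based approach, PostgreSQL event triggers can record executed DDL into a tracking table. The object names below (`ddl_history`, `log_ddl`, `track_ddl`) are hypothetical; follow Create a trigger for the exact objects that the data migration service expects:

```sql
-- Hypothetical table for recording executed DDL statements.
CREATE TABLE ddl_history (
    id          BIGSERIAL PRIMARY KEY,
    ddl_text    TEXT,
    executed_at TIMESTAMPTZ DEFAULT now()
);

-- Hypothetical event trigger function that records the DDL text.
CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
BEGIN
    INSERT INTO ddl_history (ddl_text) VALUES (current_query());
END;
$$ LANGUAGE plpgsql;

-- Fires after every DDL command completes.
CREATE EVENT TRIGGER track_ddl ON ddl_command_end EXECUTE PROCEDURE log_ddl();
```

`EXECUTE PROCEDURE` is used here because it is accepted by all supported versions (PostgreSQL 10 through 13); PostgreSQL 11 and later also accept `EXECUTE FUNCTION`.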
If you select incremental synchronization, the `wal_level` parameter must be set to `logical`. For self-managed PostgreSQL instances, see Modify the log level of a self-managed PostgreSQL instance.
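On a self-managed PostgreSQL instance, you can check and change the logging level with statements like the following (a minimal sketch; changing `wal_level` requires a server restart to take effect):

```sql
-- Check the current WAL level.
SHOW wal_level;

-- Set the WAL level to logical (takes effect after a server restart).
ALTER SYSTEM SET wal_level = logical;
```

`ALTER SYSTEM` writes the setting to `postgresql.auto.conf`; on managed services such as RDS, set the parameter through the vendor's parameter group instead.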
Limitations
Only users with the Project Owner, Project Administrator, or Data Service Administrator project role can create data migration tasks.
Limitations on operations on the source database
Do not perform DDL operations that change the schema during schema migration or full data migration. Otherwise, the data migration task may fail.
The supported PostgreSQL database versions are V10.x, V11.x, V12.x, and V13.x.
Data Transmission Service does not support migrating partitioned tables, unlogged tables, or temporary tables from a PostgreSQL database.
Data Transmission Service supports migrating tables with primary keys or NOT NULL unique keys from a PostgreSQL database to OceanBase Database.
Data Transmission Service only supports migrating databases, tables, and columns with ASCII-compliant names that do not contain special characters (., |, ", ', (, ), =, ;, /, &, or line breaks).
Data Transmission Service does not support migrating data when triggers exist on the target database. The presence of triggers may cause the data migration to fail.
Considerations
After you enable incremental synchronization, the requirements for the table-level replication identifier `REPLICA IDENTITY` are as follows:

- If you select migration objects through the Specify Objects tab, the specified tables must have primary keys, or their table-level replication identifier `REPLICA IDENTITY` must be set to `FULL`. Otherwise, update and delete operations on business data will fail.
- If you select migration objects through the Match by Rule tab, all tables in the subscribed databases (including selected, unselected, and newly added tables) must have primary keys, or their table-level replication identifier `REPLICA IDENTITY` must be set to `FULL`. Otherwise, update and delete operations on business data will fail.
- If the primary keys or unique keys of the source and target tables do not fully align, the `REPLICA IDENTITY` of the corresponding tables must be set to `FULL`.
- In PostgreSQL's default mode, full before-images are not returned. To ensure data quality during data migration, the corresponding tables are processed sequentially, which may reduce the efficiency of incremental synchronization. We therefore recommend that you set the `REPLICA IDENTITY` of all tables to `FULL`.

The following command sets the table-level replication identifier `REPLICA IDENTITY` to `FULL`:

```sql
ALTER TABLE table_name REPLICA IDENTITY FULL;
```

Notice
If row filtering conditions are set for the migrated table objects, the corresponding tables must have `REPLICA IDENTITY` set to `FULL`.

When migrating data from a PostgreSQL database to an OceanBase database in Oracle compatible mode, table and field names are converted to uppercase based on the default strategy of the data migration service. For example, if the source table name is "a", it is converted to "A" by default. You can reference table or field names as unquoted lowercase (a), unquoted uppercase (A), or quoted uppercase ("A"), but not as quoted lowercase ("a").
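To verify the current replication identity of a table, you can query `pg_class`, where `relreplident` is `d` (default, primary key), `f` (FULL), `n` (nothing), or `i` (index):

```sql
SELECT relname, relreplident
FROM pg_class
WHERE relname = 'table_name';
```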
The incremental component of a PostgreSQL database automatically creates publications and slots. However, you need to monitor the disk usage of the PostgreSQL database log files. By default, the data migration service updates the confirmed_flush_lsn of the slots every 10 minutes, so each incremental component will retain PostgreSQL database log files for at least 10 minutes.
Note
If you want to modify the update interval or the duration for which PostgreSQL retains log files, contact technical support.
During data migration, if the presence of slots prevents the cleanup of PostgreSQL database log files, you need to completely delete the data migration task before cleaning up the log files. Whether PostgreSQL database log files can be recycled depends on whether the earliest slot restart_lsn is within the log file range.
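To monitor the replication slots created by the incremental component, and the earliest `restart_lsn` that still pins WAL files, you can query the `pg_replication_slots` view, for example:

```sql
SELECT slot_name, restart_lsn, confirmed_flush_lsn, active
FROM pg_replication_slots
ORDER BY restart_lsn;
```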
If a table has neither a primary key nor a unique key on NOT NULL columns, duplicate data may appear after the data is migrated to the target.
If the character set of the source database is UTF-8, it is recommended to use a compatible character set (such as UTF-8 or UTF-16) for the target database to avoid issues like garbled characters caused by incompatible character sets.
Verify that the precision of column types such as DECIMAL, FLOAT, and DATETIME during migration meets your expectations. If the precision of the target column type is less than that of the source column type, truncation may occur, leading to inconsistencies between the source and target data.
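Before migration, you can inspect column precision on the source side by querying `information_schema.columns` and comparing the results with the target column definitions, for example:

```sql
SELECT column_name, data_type, numeric_precision, numeric_scale, datetime_precision
FROM information_schema.columns
WHERE table_name = 'your_table';
```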
If you want to modify the unique index of the target database, you need to restart the data migration task. Otherwise, data inconsistencies may occur.
Clock desynchronization between nodes or between the client terminal and the server can lead to inaccurate incremental synchronization latency.
For example, if the clock is earlier than the standard time, the latency may be negative. If the clock is later than the standard time, the latency may be positive.
In scenarios involving table aggregation:
It is recommended to map the relationships between the source and target databases using matching rules.
It is recommended to create the table structure manually in the target database. If you use the data migration service to create the table structure, skip failed objects in the schema migration step.
If the table structures of the source and target databases are not fully consistent, data inconsistencies may occur. Known scenarios include:
- When users manually create table structures, implicit conversion issues may arise due to exceeding the supported scope of data migration, leading to type mismatches between columns in the source and target databases.
- When the length of data at the target is shorter than that at the source, data truncation may occur, leading to inconsistencies between the data at the source and target.
If you only select Incremental Synchronization when creating a data migration task, the local incremental logs of the source database must be retained for more than 48 hours.
If you select Full Migration + Incremental Synchronization when creating a data migration task, the local incremental logs of the source database must be retained for at least 7 days. Otherwise, the data migration task may fail due to the inability to obtain incremental logs, or the data at the source and target may become inconsistent.
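How long WAL is retained on a self-managed source depends on settings such as `wal_keep_size` (PostgreSQL 13) or `wal_keep_segments` (PostgreSQL 10–12), in addition to any replication slots that pin WAL. A quick check might look like:

```sql
-- PostgreSQL 13:
SHOW wal_keep_size;
-- PostgreSQL 10-12 (uncomment on those versions):
-- SHOW wal_keep_segments;
```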
If there are table objects with different cases at the source or target, the case sensitivity of the source or target database may cause the data migration results to be inconsistent with expectations.
If the unique constraint column allows NULL values, data loss may occur. In PostgreSQL databases, when multiple NULL values are synchronized to OceanBase Database, only the first NULL value is successfully inserted, while subsequent NULL values are discarded due to conflicts with the unique constraint column.
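The following sketch illustrates source-side data that can trigger this behavior: PostgreSQL treats NULLs as distinct in a unique index, so both rows are valid at the source, but only one may survive at the target:

```sql
CREATE TABLE demo (id INT PRIMARY KEY, code INT UNIQUE);

-- Both inserts succeed in PostgreSQL because NULL values do not conflict
-- in a unique index; after migration, the target may keep only one row.
INSERT INTO demo VALUES (1, NULL);
INSERT INTO demo VALUES (2, NULL);
```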
Supported source and target instance types
| Cloud vendor | Source | Target |
|---|---|---|
| AWS | Self-managed PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| AWS | Self-managed PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| AWS | RDS PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| AWS | RDS PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| AWS | Aurora PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| AWS | Aurora PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| AWS | Self-managed PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| AWS | Self-managed PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
| AWS | RDS PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| AWS | RDS PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
| AWS | Aurora PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| AWS | Aurora PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
| Huawei Cloud | Self-managed PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| Huawei Cloud | Self-managed PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| Huawei Cloud | RDS PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| Huawei Cloud | RDS PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| Huawei Cloud | Self-managed PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| Huawei Cloud | Self-managed PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
| Huawei Cloud | RDS PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| Huawei Cloud | RDS PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
| Google Cloud | Self-managed PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| Google Cloud | Self-managed PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| Google Cloud | Cloud PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| Google Cloud | Cloud PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| Google Cloud | Self-managed PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| Google Cloud | Self-managed PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
| Google Cloud | Cloud PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| Google Cloud | Cloud PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
| Alibaba Cloud | Self-managed PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| Alibaba Cloud | Self-managed PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| Alibaba Cloud | RDS PostgreSQL | OceanBase MySQL Compatible (Transactional) |
| Alibaba Cloud | RDS PostgreSQL | OceanBase MySQL Compatible (Self-managed database) |
| Alibaba Cloud | Self-managed PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| Alibaba Cloud | Self-managed PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
| Alibaba Cloud | RDS PostgreSQL | OceanBase Oracle Compatible (Transactional) |
| Alibaba Cloud | RDS PostgreSQL | OceanBase Oracle Compatible (Self-managed database) |
Data type mappings
Data type mappings from PostgreSQL Database to OceanBase Database in MySQL compatible mode
| PostgreSQL database | OceanBase Database in MySQL compatible mode |
|---|---|
| INT | INTEGER |
| INT2 | SMALLINT |
| INT4 | INTEGER |
| INT8 | BIGINT |
| SMALLINT | SMALLINT |
| INTEGER | INTEGER |
| BIGINT | BIGINT |
| DECIMAL(M, D) | DECIMAL |
| NUMERIC(M, D) | NUMERIC<br>M can be at most 65 and D can be at most 30. If you omit D, it defaults to 0. If you omit M, it defaults to 10. |
| SMALLSERIAL | SMALLINT |
| SERIAL | INTEGER |
| BIGSERIAL | BIGINT |
| REAL | FLOAT |
| FLOAT | FLOAT/DOUBLE |
| FLOAT4 | FLOAT |
| FLOAT8 | DOUBLE |
| DOUBLE PRECISION | DOUBLE |
| CHAR<br>The length of a CHAR column in PostgreSQL cannot exceed 10,485,760. If you do not specify the length, the default value is 1. | CHAR/LONGTEXT<br>The length of a CHAR column in OceanBase Database can be specified as a value ranging from 0 to 255. |
| VARCHAR<br>The length of a VARCHAR column in PostgreSQL cannot exceed 10,485,760. If you do not specify the length, the column accepts values of any length. | VARCHAR/LONGTEXT<br>The length of a VARCHAR column in OceanBase Database can be specified as a value ranging from 0 to 65,535. |
| CHARACTER VARYING<br>The length of a CHARACTER VARYING column in PostgreSQL cannot exceed 10,485,760. If you do not specify the length, the column accepts values of any length. | VARCHAR/LONGTEXT |
| CHAR VARYING<br>The length of a CHAR VARYING column in PostgreSQL cannot exceed 10,485,760. If you do not specify the length, the column accepts values of any length. | VARCHAR/LONGTEXT |
| DATE | DATE |
| TIME [(p)] [WITHOUT TIME ZONE] | TIME |
| TIME [(p)] [WITH TIME ZONE]<br>p indicates the fractional seconds precision and ranges from 0 to 6. | TIME |
| TIMESTAMP [(p)] [WITHOUT TIME ZONE] | DATETIME |
| TIMESTAMP [(p)] WITH TIME ZONE | TIMESTAMP |
| INTERVAL [ fields ] [ (p) ] | TIME |
| BOOLEAN | BOOLEAN |
| UUID | VARCHAR(36) |
| MONEY | DECIMAL(19,2) |
| CIDR | VARCHAR(43) |
| INET | VARCHAR(43) |
| MACADDR | VARCHAR(17) |
| MACADDR8 | VARCHAR(23) |
| BYTEA | LONGBLOB |
| BIT | BIT |
| TEXT | LONGTEXT |
| TSVECTOR | LONGTEXT |
| TSQUERY | LONGTEXT |
| XML | LONGTEXT |
| JSON | TEXT/JSON<br>In OceanBase Database in MySQL compatible mode V3.2.2 and later, JSON is used. |
| POINT | POINT<br>Only supported in OceanBase Database in MySQL compatible mode V3.2.4 and V4.1.0. |
| LINE | LINESTRING<br>Only supported in OceanBase Database in MySQL compatible mode V3.2.4 and V4.1.0. |
| LSEG | LINESTRING<br>Only supported in OceanBase Database in MySQL compatible mode V3.2.4 and V4.1.0. |
| BOX | POLYGON<br>Only supported in OceanBase Database in MySQL compatible mode V3.2.4 and V4.1.0. |
| PATH | LINESTRING<br>Only supported in OceanBase Database in MySQL compatible mode V3.2.4 and V4.1.0. |
| POLYGON | POLYGON<br>Only supported in OceanBase Database in MySQL compatible mode V3.2.4 and V4.1.0. |
| CIRCLE | POLYGON<br>Only supported in OceanBase Database in MySQL compatible mode V3.2.4 and V4.1.0. |
Data type mappings from PostgreSQL Database to OceanBase Database in Oracle compatible mode
| PostgreSQL database | OceanBase Database in Oracle compatible mode |
|---|---|
| INT | NUMBER(11,0) |
| INT2 | NUMBER(6,0) |
| INT4 | NUMBER(11,0) |
| INT8 | NUMBER(20,0) |
| SMALLINT | NUMBER(6,0) |
| INTEGER | NUMBER(11,0) |
| BIGINT | NUMBER(20,0) |
| DECIMAL(M, D) | NUMBER(M,D)<br>M ranges from 1 to 38, and D ranges from -84 to 127. |
| NUMERIC(M, D) | NUMBER(M,D)<ul><li>If M exceeds 38 on the source side, it is set to 38 on the target side.</li><li>If D exceeds 38 on the source side, it is set to 19 on the target side.</li><li>If D is less than -84 on the source side, it is set to -84 on the target side.</li></ul> |
| SMALLSERIAL | NUMBER(6,0) |
| SERIAL | NUMBER(11,0) |
| BIGSERIAL | NUMBER(20,0) |
| REAL | BINARY_FLOAT |
| FLOAT | FLOAT/BINARY_DOUBLE |
| FLOAT4 | FLOAT |
| FLOAT8 | BINARY_DOUBLE |
| DOUBLE PRECISION | BINARY_DOUBLE |
| CHAR \| CHARACTER<br>The length of a CHAR or CHARACTER column in PostgreSQL cannot exceed 10,485,760 bytes. If no length is specified, the default length is 1. | CHAR/CLOB |
| VARCHAR<br>The length of a VARCHAR column in PostgreSQL cannot exceed 10,485,760 bytes. If no length is specified, the column accepts any length and is converted to a CLOB on the target side. | VARCHAR2/CLOB |
| CHARACTER VARYING<br>The length of a CHARACTER VARYING column in PostgreSQL cannot exceed 10,485,760 bytes. If no length is specified, the column accepts any length. | VARCHAR2/CLOB |
| CHAR VARYING<br>The length of a CHAR VARYING column in PostgreSQL cannot exceed 10,485,760 bytes. If no length is specified, the column accepts any length. | VARCHAR2/CLOB |
| DATE | DATE |
| TIME [(p)] [WITHOUT TIME ZONE] | TIMESTAMP(p) |
| TIME [(p)] [WITH TIME ZONE] | TIMESTAMP(p) WITH TIME ZONE |
| TIMESTAMP [(p)] [WITHOUT TIME ZONE] | TIMESTAMP(p)<br>If p exceeds 9 on the source side, it is set to 9 on the target side. |
| TIMESTAMP [(p)] WITH TIME ZONE | TIMESTAMP(p) WITH TIME ZONE<br>If p exceeds 9 on the source side, it is set to 9 on the target side. |
| INTERVAL [ fields ] [ (p) ] | DATE |
| BOOLEAN | NUMBER(1) |
| UUID | VARCHAR2(36) |
| MONEY | NUMBER(19,2) |
| CIDR | VARCHAR2(43) |
| INET | VARCHAR2(43) |
| MACADDR | VARCHAR2(17) |
| MACADDR8 | VARCHAR2(23) |
| BYTEA | BLOB |
| BIT(n) | RAW(n) |
| TEXT | CLOB |
| TSVECTOR | CLOB |
| TSQUERY | CLOB |
| XML | CLOB |
| JSON | TEXT/JSON<br>In OceanBase Database in Oracle compatible mode V4.1.0 and later, JSON is used. |
| POINT | SDO_GEOMETRY<br>Supported in OceanBase Database in Oracle compatible mode V4.2.2 and later. |
| LINE | SDO_GEOMETRY<br>Supported in OceanBase Database in Oracle compatible mode V4.2.2 and later. |
| LSEG | SDO_GEOMETRY<br>Supported in OceanBase Database in Oracle compatible mode V4.2.2 and later. |
| BOX | SDO_GEOMETRY<br>Supported in OceanBase Database in Oracle compatible mode V4.2.2 and later. |
| PATH | SDO_GEOMETRY<br>Supported in OceanBase Database in Oracle compatible mode V4.2.2 and later. |
| POLYGON | SDO_GEOMETRY<br>Supported in OceanBase Database in Oracle compatible mode V4.2.2 and later. |
| CIRCLE | SDO_GEOMETRY<br>Supported in OceanBase Database in Oracle compatible mode V4.2.2 and later. |
Procedure
- Create a data migration task.

Log in to the OceanBase Cloud console.
In the left-side navigation pane, choose Services > Migrations.
On the Migrations page, click the Migrate Data tab.
In the upper-right corner of the Migrate Data tab, click Create Task.
Enter a custom name for the migration task in the Edit Task Name field.
We recommend that you use a combination of Chinese characters, numbers, and letters. The name cannot contain spaces and must be no longer than 64 characters.
On the Configure Source and Target page, configure the parameters.
- In the Source section, configure the parameters.
If you need to reference an existing data source, click Quick Fill next to Source and select the desired data source from the drop-down list. After selection, the configurations in the Source section will be automatically filled. If you want to save the current configuration as a new data source, click the Save icon in the upper-right corner of the Source section.
You can also click Manage Data Sources in the Quick Fill drop-down list to go to the Data Sources page, where you can view and manage data sources. This page provides unified management of different types of data sources. For more information, see Data Sources.
| Parameter | Description |
|---|---|
| Cloud vendor | Currently supports **AWS**, **Huawei Cloud**, **Google Cloud**, and **Alibaba Cloud**. |
| Database type | Select **PostgreSQL** as the source database type. |
| Instance type | <ul><li>If you select AWS as the cloud vendor, the instance type can be **RDS PostgreSQL**, **Aurora PostgreSQL**, or **Self-managed PostgreSQL**.</li><li>If you select Huawei Cloud as the cloud vendor, the instance type can be **RDS PostgreSQL** or **Self-managed PostgreSQL**.</li><li>If you select Google Cloud as the cloud vendor, the instance type can be **Cloud PostgreSQL** or **Self-managed PostgreSQL**.</li><li>If you select Alibaba Cloud as the cloud vendor, the instance type can be **RDS PostgreSQL** or **Self-managed PostgreSQL**.</li></ul> |
| Region | Select the region where the source database is located. |
| Connection type | Includes **Endpoint** and **Public IP**.<ul><li>If you select **Endpoint**, add the displayed account ID to the allowlist of your endpoint service so that the endpoint can connect to the endpoint service. For more information, see Select Private Connection.<ul><li>When the cloud vendor is AWS and you selected **Acceptance required** for the **Require acceptance for endpoint** parameter when creating the endpoint service, a prompt appears when the data migration service first connects over the private link, asking you to go to the AWS console and perform the **Accept Endpoint Connection Request** action.</li><li>When the cloud vendor is Google Cloud, add the authorized project to **Published Services**. After the authorization is added, manual authorization is no longer required during data source testing.</li></ul></li><li>If you select **Public IP**, add the displayed data source IP address to the allowlist of your PostgreSQL database instance to ensure connectivity. For more information, see Select Public Connection.</li></ul><main id="notice" type='explain'><h4>Note</h4><p>The page displays the IP address to be added to the allowlist only after you have selected the regions for both the source and target.</p></main> |
| Connection information | <ul><li>If you select **Endpoint** as the connection type, enter the endpoint service name.</li><li>If you select **Public IP** as the connection type, enter the IP address and port number of the database host.</li></ul> |
| Database Name | The name of the PostgreSQL database. |
| Database account | The username of the PostgreSQL database user for data migration. |
| Password | The password of the database user. |

- In the Target section, configure the parameters.
If you need to reference an existing data source, click **<UI-TERM>Quick Fill</UI-TERM>** on the right side of **<UI-TERM>Target</UI-TERM>**, and select the desired data source from the drop-down list. After you select the data source, the configuration items in the **<UI-TERM>Target</UI-TERM>** section will be automatically filled. If you want to save the current configuration as a new data source, click **<UI-TERM>Save</UI-TERM>** in the upper-right corner of the **<UI-TERM>Target</UI-TERM>** section.
You can also click **<UI-TERM>Manage Data Sources</UI-TERM>** in the **<UI-TERM>Quick Fill</UI-TERM>** drop-down list to go to the **<UI-TERM>Data Sources</UI-TERM>** page, where you can view and manage data sources. This page provides unified management for different types of data sources. For more information, see [Data Sources](../../300.tp-instance/700.data-sources-management-tp/100.create-data-sources.md).
| Parameter | Description |
|-----------|---------------|
| Cloud vendor | Currently supports **AWS**, **Huawei Cloud**, **Google Cloud**, and **Alibaba Cloud**. You can choose the same cloud vendor as the source or migrate data across cloud vendors.<main id="notice" type='explain'><h4>Note</h4><p>By default, cross-cloud data migration is not enabled. If you need to use this feature, contact OceanBase Cloud technical support.</p></main>|
| Database type | Select **OceanBase MySQL Compatible** or **OceanBase Oracle Compatible** as needed. |
| Instance type | Currently supports **<UI-TERM>Dedicated (Transactional)</UI-TERM>** and **<UI-TERM>Self-managed Database</UI-TERM>**.|
| Region | Select the region where the target database is located.|
| Connection type | Includes **Endpoint** and **Public IP**.<ul><li>If you select **Endpoint**, add the displayed account ID to the allowlist of your endpoint service to enable the endpoint connection. For more information, see [Select private network connection](../../300.tp-instance/700.data-sources-management-tp/400.select-endpoint/200.aws-endpoint.md).<li>If you select **Public IP**, add the displayed data source IP address to the allowlist of your OceanBase database instance to ensure connectivity. For more information, see [Select public network connection](../../300.tp-instance/700.data-sources-management-tp/500.select-public-ip/200.aws-public.md).<main id="notice" type='explain'><h4>Note</h4><p>This parameter is only displayed if the instance type is set to Self-managed database. The data source IP address to be added to the allowlist will be displayed after you select the source and target regions.</p></main></li></ul>|
| Connection information | This parameter is only displayed if the instance type is set to Self-managed database.<ul><li>If you select **<UI-TERM>Connection type</UI-TERM>** as **Endpoint**, enter the endpoint service name.</li><li>If you select **<UI-TERM>Connection type</UI-TERM>** as **Public IP**, enter the IP address and port number of the database host.</li></ul> |
| Instance | The ID or name of the OceanBase database instance. You can view the ID or name of the target instance on the **<UI-TERM>Instances</UI-TERM>** page.<main id="notice" type='explain'><h4>Note</h4><p>If the cloud vendor is Alibaba Cloud, you can also select an Alibaba Cloud primary account instance with cross-account authorization. For more information, see <a href="../300.migrate-data/410.cross-account-authorization.md">Alibaba Cloud account authorization</a>.</p></main>|
| Tenant | The ID or name of the OceanBase database tenant. You can expand the target instance on the **<UI-TERM>Instances</UI-TERM>** page to view the ID or name of the target tenant.|
|Database account|The username of the OceanBase database user for data migration.|
|Password|The password of the database user.|
When you select **<UI-TERM>Instance type</UI-TERM>** as **<UI-TERM>Self-managed database</UI-TERM>**, you can decide whether to enable advanced settings as needed.
<main id="notice" type='notice'>
<h4>Notice</h4>
<p>If your new migration task requires incremental synchronization, make sure to enable the sys tenant account and OBLogProxy.</p>
</main>
| Parameter | Description |
|--------------|--------------|
| Sys Tenant Account | If you enable the sys tenant account, you need to enter the sys account and password.<ul><li>**<UI-TERM>Sys Account</UI-TERM>**: the name of the sys user. This user is mainly used to read incremental logs and database object structure information from OceanBase Database. Create this user under the sys tenant of the business cluster.<li>**<UI-TERM>Password</UI-TERM>**: the password of the sys user.</ul>|
| OBLogProxy | If you enable the incremental log proxy service, fill in **<UI-TERM>OBLogProxy Connection Information</UI-TERM>**. OBLogProxy is the incremental log proxy service of OceanBase Database. It provides access to and management of incremental logs as a service, making it convenient for applications to consume OceanBase Database incremental logs, and it supports incremental log subscription across network isolation. The format is `OBLogProxy IP:OBLogProxy Port`. |
Click Test and Continue.
On the Select Type & Objects page, configure the parameters.
Note
Currently, only one-way synchronization is supported when migrating data from a PostgreSQL database to an OceanBase Database in MySQL compatible mode.
In the Migration Type section, select the migration type for the current data migration task.
Migration Type includes Schema Migration, Full Migration, and Incremental Synchronization.
| Parameter | Description |
|---|---|
| Schema Migration | Schema migration requires you to define the character set mapping yourself. The data migration service copies only the schema of the source database to the target database, without affecting the source schema. |
| Full Migration | After a full migration task starts, the data migration service migrates the existing data of the source tables to the corresponding tables in the target database. |
| Incremental Synchronization | After an incremental synchronization task starts, the data migration service synchronizes changed data (insertions, updates, and deletions) from the source database to the corresponding tables in the target database. Incremental Synchronization includes DML Synchronization and DDL Synchronization, which you can customize as needed. For more information, see Customize DML/DDL. |

In the Select Migration Objects section, configure the method for selecting the migration objects.
You can select the migration objects by using either Specify Objects or Match by Rule.
In the Select Migration Scope section, select the objects to be migrated.
If you choose Specify Objects, data migration supports Table-level and Database-level. Table-level migration allows you to select one or more tables or views from one or more databases as migration objects. Database-level migration allows you to select an entire database as a migration object. If you select table-level migration for a database, database-level migration is no longer supported for that database. Conversely, if you select database-level migration for a database, table-level migration is no longer supported for that database.
After selecting Table-level or Database-level, select the objects to be migrated in the left pane and click > to add them to the right pane.
Data migration supports importing objects through text and allows you to rename target objects, set row filters, view column information, and remove individual or all migrated objects.

Note
If you select Database-level, only the database name is displayed in the right-side list, and no specific object can be displayed.
| Operation | Description |
|---|---|
| Import objects | Click **Import Objects** in the upper-right corner of the right-side list in the selection area. For more information, see Import migration objects. |
| Rename | The data migration service allows you to rename migrated objects. For more information, see Rename databases and tables. |
| Row filtering | The data migration service allows you to filter rows by using a `WHERE` clause. For more information, see Filter data by using SQL conditions. You can also view the column information of migrated objects in the **View Columns** section. |
| Remove/Clear All | During data mapping, you can remove one or all of the objects temporarily selected to the target side.<ul><li>Remove an individual migration object: click the **Remove** icon next to the target object in the right-side list of the selection area.</li><li>Remove all migration objects: click **Clear All** in the upper-right corner of the right-side list of the selection area. In the dialog box that appears, click **OK**.</li></ul> |
Select Match by Rule. For more information, see Configure matching rules for migrating databases.
Click Next. On the Migration Options page, configure the parameters.
Full migration
On the Select Type & Objects step, select One-Way Synchronization > Full Migration to display the following parameters.

Parameter Description Read Concurrency This parameter specifies the number of concurrent threads for reading data from the source during full migration. The maximum number of concurrent threads is 512. A high number of concurrent threads may cause high pressure on the source and affect business operations. Write Concurrency This parameter specifies the number of concurrent threads for writing data to the target during full migration. The maximum number of concurrent threads is 512. A high number of concurrent threads may cause high pressure on the target and affect business operations. Rate Limiting for Full Migration You can decide whether to limit the full migration rate as needed. If you enable this option, you must also set the RPS (maximum number of data rows that can be migrated to the target per second during full migration) and BPS (maximum amount of data that can be migrated to the target per second during full migration). Note
The RPS and BPS values specified here are only for throttling and limiting capabilities. The actual performance of full migration is limited by factors such as the source, target, and instance specifications.
- Handle Non-empty Tables in Target Database: The strategy for handling target table objects that already contain records. Valid values: Stop Migration and Ignore.
  - Stop Migration: Data migration reports an error when a target table object contains data, indicating that migration is not allowed. Handle the data in the target database before you resume the migration.
Notice
If you click Restore after the error occurs, data migration ignores this setting and continues to migrate the table data. Proceed with caution.
  - Ignore: When a target table object contains data, data migration records the conflicting data in logs and retains the original data in the target table.
Notice
If you select Ignore, full verification pulls data in IN mode and therefore cannot verify scenarios where the target contains data that does not exist in the source. Verification performance also degrades to some extent.
- Post-Indexing: Specifies whether index creation can be postponed until full migration is completed. If you select this option, note the following items.
Notice
Before you select this option, make sure that you have selected both Schema Migration and Full Migration in the Select Type & Objects step.
  - Only non-unique key indexes can be created after the migration.
  - If post-indexing is allowed, we recommend that you adjust the following tenant parameters of the OceanBase database based on its hardware conditions and the current business traffic.

```sql
-- Limit the size of the temporary file buffer.
ALTER SYSTEM SET _temporary_file_io_area_size = '10' tenant = 'xxx';
-- Disable throttling for OceanBase Database V4.x.
ALTER SYSTEM SET sys_bkgd_net_percentage = 100;
```
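To see why the Ignore option limits full verification, consider the shape of the queries it issues. The following is a hypothetical illustration (the table and column names are made up, not part of the product): verification reads a batch of primary keys from the source and checks only those keys against the target, so rows that exist only in the target are never selected.

```sql
-- Hypothetical illustration of IN-mode verification.
-- The verifier reads a batch of primary keys from the source table,
-- then checks only those keys against the target:
SELECT id, col1, col2
FROM target_table
WHERE id IN (1001, 1002, 1003);
-- A row such as id = 9999 that exists only in the target is never
-- matched by the IN list, so it goes unverified.
```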
Incremental synchronization
In the Select Type & Objects step, select One-Way Synchronization > Incremental Synchronization to display the following parameters.
- Write Concurrency: The maximum number of concurrent threads that write data to the target during incremental synchronization. The maximum value is 512. High concurrency may put heavy pressure on the target and affect business operations.
- Rate Limiting for Incremental Synchronization: Specifies whether to limit the incremental synchronization rate. If you enable this option, you must also set the RPS (the maximum number of rows that can be synchronized to the target per second during incremental synchronization) and the BPS (the maximum amount of data that can be synchronized to the target per second during incremental synchronization).
Note
The RPS and BPS values specified here are only upper limits for throttling. The actual incremental synchronization performance is also constrained by factors such as the source, the target, and the instance specifications.
- Incremental Synchronization Start Timestamp:
  - If you selected Full Migration as a migration type, this parameter is not displayed.
  - If you selected Incremental Synchronization but not Full Migration, this parameter is set to the incremental synchronization start time by default and cannot be modified.
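As noted in the prerequisites, incremental synchronization requires the `wal_level` parameter to be set to `logical` on the source PostgreSQL database. A minimal sketch for checking and changing it on a self-managed instance (standard PostgreSQL commands; superuser privileges and a server restart are required for the change to take effect):

```sql
-- Check the current WAL level on the source PostgreSQL database.
SHOW wal_level;
-- Set it to logical; the change takes effect after a server restart.
ALTER SYSTEM SET wal_level = logical;
```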
Advanced options
The parameters in this section are displayed only when the target OceanBase database is V4.3.0 or later and Schema Migration is selected in the Select Type & Objects step.

The storage type of table objects in the target database can be Default, Rowstore, Columnstore, or Mixed Row and Column Storage. This setting determines the storage type used for table objects during schema migration or incremental synchronization.
Note
The Default option adapts to one of the other options based on the parameters of the target database, and writes the corresponding schema to the table objects during schema migration according to the resolved storage type.
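As a sketch of what these storage types translate to, OceanBase Database V4.3.0 and later express them through a column group clause in the table DDL. The table and column names below are hypothetical, and the exact clause should be verified against your OceanBase version:

```sql
-- Columnstore: each column is stored in its own column group.
CREATE TABLE demo_columnstore (id BIGINT PRIMARY KEY, val VARCHAR(64))
    WITH COLUMN GROUP (each column);

-- Mixed row and column storage: keep a rowstore copy alongside
-- the per-column groups.
CREATE TABLE demo_mixed (id BIGINT PRIMARY KEY, val VARCHAR(64))
    WITH COLUMN GROUP (all columns, each column);
```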
Click Pre-check to perform a pre-check on the data migration task.
In the Pre-check step, the system checks whether the read and write permissions of the database user and the network connection meet the requirements. You can only start the data migration task after all checks pass. If an error occurs during the pre-check:
You can troubleshoot and fix the issue, then rerun the pre-check until it succeeds.
Alternatively, you can click Skip in the Actions column of the failed pre-check item. A dialog box will appear, informing you of the specific impact of skipping this operation. After confirming that it is acceptable, click OK in the dialog box.
After the pre-check succeeds, click Purchase to go to the Purchase Data Migration Instance page.
After the purchase is successful, you can start the data migration task. For more information about how to purchase a data migration instance, see Purchase a data migration instance. If you do not need to purchase a data migration instance at this time, click Save to go to the details page of the data migration task. You can manually purchase a data migration instance later as needed.
You can click Configure Validation Task in the upper-right corner of the data migration details page to compare the data differences between the source and target databases. For more information, see the "Create a data validation task" topic.
The data migration service allows you to modify the migration objects when the task is running. For more information, see View and modify migration objects. After the data migration task is started, it is executed based on the selected migration types. For more information, see the "View migration details" section in View details of a data migration task.