You can create a data migration task to migrate incremental data from an OceanBase Database (in either MySQL or Oracle compatible mode) to a PostgreSQL database.
Notice
If a data migration task remains inactive for a long time (that is, the task is in the Failed, Paused, or Completed state), it may not be recoverable due to factors such as the retention period of incremental logs. The data migration service automatically releases tasks that have been inactive for more than 7 days to reclaim resources. We recommend that you configure alerts for your tasks and promptly handle any task exceptions.
Prerequisites
You have created an instance and a tenant for the source OceanBase Database. For more information, see Create an instance and Create a tenant.
You have created a database instance at the target.
You have created database users for data migration at both the source and target ends and granted the necessary privileges to these users. For more information, see User privileges.
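For the PostgreSQL side, the migration account typically needs to connect to the target database and write to the migrated schemas. The following is a minimal sketch, assuming a hypothetical user ob_migrator, database app_db, and schema public; the exact privileges required are listed in User privileges.

```sql
-- Minimal sketch of preparing a migration account on the target PostgreSQL
-- database. All names and the privilege list are illustrative; see the
-- User privileges topic for the authoritative requirements.
CREATE USER ob_migrator WITH PASSWORD 'change_me';
GRANT CONNECT ON DATABASE app_db TO ob_migrator;
GRANT USAGE, CREATE ON SCHEMA public TO ob_migrator;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO ob_migrator;
```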
Limitations
Only users with the Project Owner, Project Administrator, or Data Service Administrator project role can create a data migration task.
Limitations on operations of the source database
Do not perform DDL operations that change the schema during schema migration or full migration. Otherwise, the data migration task may fail.
The supported PostgreSQL database versions are V10.x, V11.x, V12.x, and V13.x.
Data Migration supports only migrating objects whose names consist of ASCII characters and do not contain special characters (including .|"'`()=;/& and line breaks).
Data Migration does not support triggers at the target. If triggers exist at the target, data migration may fail.
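To check the last two limitations against an existing target instance, you can run a quick query in PostgreSQL; this is an illustrative sketch and not part of the migration workflow.

```sql
-- Confirm the server version is within the supported 10.x-13.x range.
SELECT version();

-- List user-defined (non-internal) triggers, which may cause migration to fail.
SELECT tgname, tgrelid::regclass AS table_name
FROM pg_trigger
WHERE NOT tgisinternal;
```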
Considerations
In incremental synchronization scenarios for tables without primary keys, when you migrate data by using full-column matching for UPDATE and DELETE operations, the following issues may occur:
Performance issues.
Without a primary key index, each UPDATE and DELETE operation requires a full table scan.
Data consistency issues.
In PostgreSQL, the LIMIT clause is not supported for UPDATE and DELETE operations. When multiple rows match the full-column conditions, the number of rows updated or deleted at the target may be greater than that at the source. For example, suppose table t1 has two columns, c1 and c2, and contains two rows where c1=1 and c2=2. When one of these rows is deleted at the source, the condition WHERE c1 = 1 AND c2 = 2 deletes both rows at the target, causing inconsistency between the source and target.
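The following illustrative SQL reproduces the example above with the hypothetical table t1 and shows why the full-column condition removes more rows at the target.

```sql
-- Table without a primary key, containing two identical rows.
CREATE TABLE t1 (c1 INT, c2 INT);
INSERT INTO t1 VALUES (1, 2), (1, 2);

-- At the source, only one of the two rows is deleted. Because no primary key
-- identifies that row, the change is replayed at the target with a
-- full-column condition, which matches (and deletes) both rows in PostgreSQL:
DELETE FROM t1 WHERE c1 = 1 AND c2 = 2;
```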
When you migrate the tsvector type field to a PostgreSQL database, the data written to the corresponding field in OceanBase Database must conform to the tsvector format. For example:
If you write 'a b c' to an RDS PostgreSQL instance from OceanBase Database, it will be converted to "'a' 'b' 'c'".
If you write 'a:1 b:2 c:3' to an RDS PostgreSQL instance from OceanBase Database, it will be converted to "'a':1 'b':2 'c':3".
If you write non-tsvector data such as "'a':cccc" to an RDS PostgreSQL instance from OceanBase Database, the write will fail. For more information about the tsvector format, see PostgreSQL official documentation.
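As an illustration, the statements below (against a hypothetical target table docs with a tsvector column) show the conversions described above; the table and column names are examples.

```sql
-- Hypothetical target table with a tsvector column.
CREATE TABLE docs (id INT PRIMARY KEY, body TSVECTOR);

INSERT INTO docs VALUES (1, 'a b c');        -- stored as 'a' 'b' 'c'
INSERT INTO docs VALUES (2, 'a:1 b:2 c:3');  -- stored as 'a':1 'b':2 'c':3
-- INSERT INTO docs VALUES (3, '''a'':cccc'); -- fails: not a valid tsvector value
```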
Supported source and target instance types
| Cloud vendor | Source | Target |
|---|---|---|
| AWS | OceanBase MySQL Compatible (Transactional) | Self-managed PostgreSQL |
| AWS | OceanBase MySQL Compatible (Self-managed database) | Self-managed PostgreSQL |
| AWS | OceanBase MySQL Compatible (Transactional) | RDS PostgreSQL |
| AWS | OceanBase MySQL Compatible (Self-managed database) | RDS PostgreSQL |
| AWS | OceanBase MySQL Compatible (Transactional) | Aurora PostgreSQL |
| AWS | OceanBase MySQL Compatible (Self-managed database) | Aurora PostgreSQL |
| AWS | OceanBase Oracle Compatible (Transactional) | Self-managed PostgreSQL |
| AWS | OceanBase Oracle Compatible (Self-managed database) | Self-managed PostgreSQL |
| AWS | OceanBase Oracle Compatible (Transactional) | RDS PostgreSQL |
| AWS | OceanBase Oracle Compatible (Self-managed database) | RDS PostgreSQL |
| AWS | OceanBase Oracle Compatible (Transactional) | Aurora PostgreSQL |
| AWS | OceanBase Oracle Compatible (Self-managed database) | Aurora PostgreSQL |
| Huawei Cloud | OceanBase MySQL Compatible (Transactional) | Self-managed PostgreSQL |
| Huawei Cloud | OceanBase MySQL Compatible (Self-managed database) | Self-managed PostgreSQL |
| Huawei Cloud | OceanBase MySQL Compatible (Transactional) | RDS PostgreSQL |
| Huawei Cloud | OceanBase MySQL Compatible (Self-managed database) | RDS PostgreSQL |
| Huawei Cloud | OceanBase Oracle Compatible (Transactional) | Self-managed PostgreSQL |
| Huawei Cloud | OceanBase Oracle Compatible (Self-managed database) | Self-managed PostgreSQL |
| Huawei Cloud | OceanBase Oracle Compatible (Transactional) | RDS PostgreSQL |
| Huawei Cloud | OceanBase Oracle Compatible (Self-managed database) | RDS PostgreSQL |
| Google Cloud | OceanBase MySQL Compatible (Transactional) | Self-managed PostgreSQL |
| Google Cloud | OceanBase MySQL Compatible (Self-managed database) | Self-managed PostgreSQL |
| Google Cloud | OceanBase MySQL Compatible (Transactional) | Cloud PostgreSQL |
| Google Cloud | OceanBase MySQL Compatible (Self-managed database) | Cloud PostgreSQL |
| Google Cloud | OceanBase Oracle Compatible (Transactional) | Self-managed PostgreSQL |
| Google Cloud | OceanBase Oracle Compatible (Self-managed database) | Self-managed PostgreSQL |
| Google Cloud | OceanBase Oracle Compatible (Transactional) | Cloud PostgreSQL |
| Google Cloud | OceanBase Oracle Compatible (Self-managed database) | Cloud PostgreSQL |
| Alibaba Cloud | OceanBase MySQL Compatible (Transactional) | Self-managed PostgreSQL |
| Alibaba Cloud | OceanBase MySQL Compatible (Self-managed database) | Self-managed PostgreSQL |
| Alibaba Cloud | OceanBase MySQL Compatible (Transactional) | RDS PostgreSQL |
| Alibaba Cloud | OceanBase MySQL Compatible (Self-managed database) | RDS PostgreSQL |
| Alibaba Cloud | OceanBase Oracle Compatible (Transactional) | Self-managed PostgreSQL |
| Alibaba Cloud | OceanBase Oracle Compatible (Self-managed database) | Self-managed PostgreSQL |
| Alibaba Cloud | OceanBase Oracle Compatible (Transactional) | RDS PostgreSQL |
| Alibaba Cloud | OceanBase Oracle Compatible (Self-managed database) | RDS PostgreSQL |
Data type mappings
Data type mappings from OceanBase Database in MySQL compatible mode to PostgreSQL Database
| OceanBase MySQL Compatible | PostgreSQL Database |
|---|---|
| TINYINT | SMALLINT |
| BOOLEAN | BOOLEAN |
| SMALLINT | SMALLINT |
| SMALLINT UNSIGNED | INTEGER |
| MEDIUMINT | INTEGER |
| INTEGER/INT | INTEGER |
| INTEGER UNSIGNED | BIGINT |
| BIGINT | BIGINT |
| DECIMAL(M,D)<br>DECIMAL | NUMERIC(M,D)<br>NUMERIC(10,0)<br>The maximum value of M is 65, and the maximum value of D is 30. If you omit D, it defaults to 0. If you omit M, it defaults to 10. |
| NUMERIC(M,D) | NUMERIC(M,D) |
| NUMERIC | NUMERIC(10,0) |
| FLOAT(p) | - When p is less than 25, it is converted to FLOAT.<br>- When p is greater than or equal to 25, it is converted to DOUBLE PRECISION. |
| DOUBLE | DOUBLE PRECISION |
| BIT | BIT VARYING(1) |
| BIT(n) | BIT VARYING(n) |
| DATE | DATE |
| DATETIME<br>DATETIME(6)<br>Default value: 0. Range: [0,6] | TIMESTAMP<br>TIMESTAMP(6) |
| TIMESTAMP | TIMESTAMP WITH TIME ZONE |
| TIMESTAMP(p) | TIMESTAMP(p) WITH TIME ZONE |
| TIME(p) | TIME(p) |
| YEAR | INTEGER |
| CHAR | CHAR(n CHAR) |
| VARCHAR | VARCHAR(n) |
| BINARY | BYTEA |
| VARBINARY | BYTEA |
| TINYBLOB | TEXT |
| BLOB | TEXT |
| MEDIUMBLOB | TEXT |
| LONGBLOB | TEXT |
| TINYTEXT | TEXT |
| TEXT | TEXT |
| MEDIUMTEXT | TEXT |
| LONGTEXT | TEXT |
| ENUM | VARCHAR |
| SET | VARCHAR |
| JSON | JSON |
Data type mappings from OceanBase Database in Oracle compatible mode to PostgreSQL Database
| Category | OceanBase Database in Oracle compatible mode | PostgreSQL Database |
|---|---|---|
| Numeric types | NUMBER(p,s) | NUMBER(p,s) |
| Numeric types | NUMBER | NUMERIC |
| Numeric types | FLOAT(P) | FLOAT(P) |
| Numeric types | BINARY_FLOAT | REAL |
| Numeric types | BINARY_DOUBLE | DOUBLE PRECISION |
| Character types | CHAR(n) | CHAR(n) |
| Character types | NCHAR(n) | CHAR(n) |
| Character types | NVARCHAR2(n) | VARCHAR(n) |
| Character types | VARCHAR2(n) | VARCHAR(n) |
| Character types | VARCHAR(n) | VARCHAR(n) |
| Time types | DATE | TIMESTAMP |
| Time types | TIMESTAMP | TIMESTAMP(6) |
| Time types | TIMESTAMP(n) | When n is greater than or equal to 6, it is converted to TIMESTAMP(6). Otherwise, it is converted to TIMESTAMP(n). |
| Time types | TIMESTAMP WITH TIME ZONE | TIMESTAMP(6) WITH TIME ZONE |
| Time types | TIMESTAMP(n) WITH TIME ZONE | When n is greater than or equal to 6, it is converted to TIMESTAMP(6) WITH TIME ZONE. Otherwise, it is converted to TIMESTAMP(n) WITH TIME ZONE. |
| Time types | TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP(6) WITH TIME ZONE |
| Time types | TIMESTAMP(n) WITH LOCAL TIME ZONE | When n is greater than or equal to 6, it is converted to TIMESTAMP(6) WITH TIME ZONE. Otherwise, it is converted to TIMESTAMP(n) WITH TIME ZONE. |
| Binary types | RAW(n) | VARCHAR(n) |
| Row identifier types | ROWID | CHAR(18) |
| Row identifier types | UROWID | VARCHAR(18) |
| Large object types | BLOB | BYTEA |
| Large object types | CLOB | TEXT |
| JSON type | JSON | JSON |
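As a hedged illustration of the MySQL-mode mappings above, a hypothetical source table and the approximate PostgreSQL definition it would map to are shown below; only types listed in the mapping table are used, and all names are examples.

```sql
-- OceanBase (MySQL compatible mode) source table, for illustration:
-- CREATE TABLE orders (
--   id        BIGINT,
--   qty       INT,
--   price     DECIMAL(10,2),
--   weight    DOUBLE,
--   order_day DATE,
--   remark    VARCHAR(255),
--   payload   JSON
-- );

-- Approximate PostgreSQL definition under the mappings above:
CREATE TABLE orders (
  id        BIGINT,
  qty       INTEGER,
  price     NUMERIC(10,2),
  weight    DOUBLE PRECISION,
  order_day DATE,
  remark    VARCHAR(255),
  payload   JSON
);
```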
Procedure
Create a data migration task.

Log in to the OceanBase Cloud console.
In the left-side navigation pane, choose Services > Migrations.
On the Migrations page, click the Migrate Data tab.
In the upper-right corner of the Migrate Data tab, click Create Task.
In the text box for editing the task name, enter a custom name for the migration task.
We recommend that you use a combination of Chinese characters, digits, and letters. The name must not contain spaces, and its length cannot exceed 64 characters.
On the Configure Source and Target page, configure the parameters.
In the Source section, configure the parameters.
If you want to reference an existing data source, click Quick Fill next to Source and select the desired data source from the drop-down list. After you make your selection, the configurations in the Source section will be automatically filled in. If you want to save the current configuration as a new data source, click the Save icon in the upper-right corner of the Source section.
You can also click Manage Data Sources in the Quick Fill drop-down list to go to the Data Sources page, where you can view and manage data sources. This page provides centralized management for different types of data sources. For more information, see Data Sources.
| Parameter | Description |
|---|---|
| Cloud vendor | Currently supports AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud. |
| Database type | Select OceanBase MySQL Compatible or OceanBase Oracle Compatible as needed. |
| Instance type | Currently supports Dedicated (Transactional) and Self-managed Database. |
| Region | Select the region where the source database is located. |
| Connection type | Includes Endpoint and Public IP.<br>- If you select Endpoint, add the displayed account ID to the allowlist of your endpoint service to enable the endpoint connection. For more information, see Select private network connection.<br>- If you select Public IP, add the displayed data source IP address to the allowlist of your OceanBase database instance to ensure connectivity. For more information, see Select public network connection.<br>Note: This parameter is displayed only if the instance type is set to Self-managed Database. The data source IP address to be added to the allowlist is displayed after you select the source and target regions. |
| Connection information | This parameter is displayed only if the instance type is set to Self-managed Database.<br>- If you set Connection type to Endpoint, enter the endpoint service name.<br>- If you set Connection type to Public IP, enter the IP address and port number of the database host. |
| Instance | The ID or name of the OceanBase database instance. You can view the ID or name of the desired instance on the Instances page.<br>Note: If the cloud vendor is Alibaba Cloud, you can also select an Alibaba Cloud primary account instance with cross-account authorization. For more information, see Alibaba Cloud account authorization. |
| Tenant | The ID or name of the OceanBase database tenant. You can expand the instance on the Instances page to view the ID or name of the desired tenant. |
| Database account | The username of the OceanBase database user used for data migration. |
| Password | The password of the database user. |

When the instance type is set to Self-managed Database, you can decide whether to enable advanced settings as needed.
Notice
If your new migration task requires incremental synchronization, make sure to enable the sys tenant account and OBLogProxy.
| Parameter | Description |
|---|---|
| Sys Tenant Account | If you enable the sys tenant account, you need to enter the sys account and password.<br>- Sys Account: the name of the sys user. This user is mainly used to read incremental logs and database object schema information from OceanBase Database. Create this user under the sys tenant of the business cluster.<br>- Password: the password of the sys user. |
| OBLogProxy | If you enable the incremental log proxy service, you need to enter the OBLogProxy connection information. OBLogProxy is the incremental log proxy service of OceanBase Database. It provides incremental log access and management capabilities as a service, so that applications can conveniently consume OceanBase Database incremental logs, and it also supports incremental log subscription across isolated networks. The format is OBLogProxy IP:OBLogProxy Port. |

In the Configure Source and Target step, configure the parameters in the Target section.
If you want to reference an existing data source that has been created and saved, click Quick Fill on the right side of the Target section and select the target data source from the drop-down list. After selection, the configurations in the Target section will be automatically filled. If you want to save the current configuration as a new data source, click the Save icon in the upper-right corner of the Target section.
You can also click Manage Data Sources in the Quick Fill drop-down list to go to the Data Sources page, where you can view and manage data sources. This page allows you to centrally manage different types of data sources. For more information, see Data Sources.
| Parameter | Description |
|---|---|
| Cloud vendor | Currently supports AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud.<br>Note: By default, cross-cloud data migration is not enabled. To enable this feature, contact OceanBase Cloud technical support. |
| Database type | Select PostgreSQL as the target database type. |
| Instance type | - If you select AWS as the cloud vendor, the instance type supports RDS PostgreSQL, Aurora PostgreSQL, and Self-managed PostgreSQL.<br>- If you select Huawei Cloud as the cloud vendor, the instance type supports RDS PostgreSQL and Self-managed PostgreSQL.<br>- If you select Google Cloud as the cloud vendor, the instance type supports Cloud PostgreSQL and Self-managed PostgreSQL.<br>- If you select Alibaba Cloud as the cloud vendor, the instance type supports RDS PostgreSQL and Self-managed PostgreSQL. |
| Region | Select the region where the target database is located. |
| Connection type | Includes Endpoint and Public IP.<br>- If you select Endpoint as the connection type, add the displayed account ID to the allowlist of your endpoint service so that the endpoint can connect to the endpoint service. For more information, see Select Private Connection.<br>  - When the cloud vendor is AWS, if you selected Acceptance required for the Require acceptance for endpoint parameter when creating the endpoint service, a prompt appears the first time the data migration service connects over the private link, asking you to go to the AWS console and perform the Accept Endpoint Connection Request action.<br>  - When the cloud vendor is Google Cloud, you need to add the authorized project to Published Services. After the authorization is added, manual authorization is no longer required during data source testing.<br>- If you select Public IP as the connection type, add the displayed data source IP address to the allowlist of your PostgreSQL database instance to ensure connectivity. For more information, see Select Public Connection.<br>Note: The page displays the IP address to be added to the allowlist only after you have selected the regions for both the source and the target. |
| Connection information | - If you select Endpoint as the connection type, enter the endpoint service name.<br>- If you select Public IP as the connection type, enter the IP address and port number of the database host. |
| Database Name | The name of the PostgreSQL database. |
| Database account | The username of the PostgreSQL database user used for data migration. |
| Password | The password of the database user. |
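Optionally, before you click Test and Continue, you can verify from a SQL client that the configured account can reach the target database; this is a hedged sketch that uses built-in PostgreSQL functions, with an example schema name.

```sql
-- Run as the migration account against the target PostgreSQL database.
SELECT current_user, current_database(), version();

-- Check that the account can create objects in the schema that will receive
-- the migrated tables (schema name is an example).
SELECT has_schema_privilege(current_user, 'public', 'CREATE');
```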
Click Test and Continue.
On the Select Type & Objects page, configure the parameters.
Note
Currently, when migrating data from an OceanBase Database to a PostgreSQL database, only one-way synchronization is supported.
In the Migration Type section, select the migration type for the current data migration task.
Currently, only Incremental Synchronization is supported, which includes DML Synchronization and DDL Synchronization. You can customize the configuration as needed. For more information, see Customize DML/DDL settings.
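For orientation, the sketch below shows the kind of source-side statements each option covers; whether a specific DDL statement is replicated depends on the supported list in Customize DML/DDL settings, and the database and table names are hypothetical.

```sql
-- With DML Synchronization enabled, data changes at the source such as the
-- following are replicated to the target:
UPDATE app_db.orders SET qty = qty + 1 WHERE id = 42;

-- With DDL Synchronization enabled, schema changes such as the following may
-- also be replicated, subject to the supported DDL list:
ALTER TABLE app_db.orders ADD COLUMN remark VARCHAR(255);
```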
In the Select Migration Objects section, configure the method for selecting migration objects.
You can select migration objects by using Specify Objects or Match by Rule.

In the Select Migration Scope section, select the objects to be migrated.
If you choose Specify Objects, data migration supports Table-level and Database-level. Table-level migration allows you to select one or more tables or views from one or more databases as migration objects. Database-level migration allows you to select an entire database as a migration object. If you select table-level migration for a database, database-level migration is no longer supported for that database. Conversely, if you select database-level migration for a database, table-level migration is no longer supported for that database.
After selecting Table-level or Database-level, select the objects to be migrated in the left pane and click > to add them to the right pane.
Data migration supports importing objects via text and allows renaming target objects, setting row filters, viewing column information, and removing individual or all migration objects.

Note
If you select Database-level, the right pane displays only the database names and does not show specific objects.
| Action | Description |
|---|---|
| Import objects | In the right-side list of the selection area, click Import Objects in the upper-right corner. For more information, see Import migration objects. |
| Rename | Data migration supports renaming migration objects. For more information, see Rename databases and tables. |
| Row filtering | Data migration supports row filtering by using the WHERE clause. For more information, see Filter data by using SQL conditions. You can also view the column information of a migration object in the View Columns section. |
| Remove/Clear All | Data migration supports removing one or more of the objects currently selected for migration to the target.<br>- Remove a single migration object: in the right-side list of the selection area, click the Remove icon next to the object.<br>- Remove all migration objects: in the right-side list of the selection area, click Clear All in the upper-right corner. In the dialog box that appears, click OK to remove all migration objects. |
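The Row filtering action above accepts a standard SQL condition; as a hypothetical illustration, a filter such as order_date >= '2024-01-01' migrates only the source rows that the following query would return (table and column names are examples). For more information, see Filter data by using SQL conditions.

```sql
-- Rows migrated for table orders under the hypothetical row filter
--   order_date >= '2024-01-01' AND status <> 'CANCELLED'
SELECT *
FROM orders
WHERE order_date >= '2024-01-01' AND status <> 'CANCELLED';
```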
Select Match by Rule. For more information, see Configure matching rules for migration objects.
Click Next. On the Migration Options page, configure the parameters for incremental synchronization.
| Parameter | Description |
|---|---|
| Write Concurrency | The maximum number of concurrent writes during incremental synchronization, with an upper limit of 512. Excessive concurrency may overload the target system and affect business operations. |
| Rate Limiting for Incremental Migration | You can enable or disable rate limiting for incremental synchronization. If you enable it, set the RPS (the maximum number of rows that can be synchronized per second during incremental synchronization) and the BPS (the maximum data volume that can be synchronized per second during incremental synchronization).<br>Note: The RPS and BPS values set here are throttling limits. The actual performance of incremental synchronization is also affected by factors such as the source and target configurations and instance specifications. |
| Incremental Synchronization Start Timestamp | - If you selected Full Migration when choosing the migration type, this parameter is not displayed.<br>- If you selected Incremental Synchronization but not Full Migration, specify the point in time after which data is migrated. By default, it is set to the current system time. For more information, see Set the incremental synchronization start time. |
Click Pre-check to perform a pre-check on the data migration task.
In the Pre-check step, the system checks whether the read and write permissions of the database user and the network connection meet the requirements. You can only start the data migration task after all checks pass. If an error occurs during the pre-check:
You can troubleshoot and fix the issue, then rerun the pre-check until it succeeds.
Alternatively, you can click Skip in the Actions column of the failed pre-check item. A dialog box will appear, informing you of the specific impact of skipping this operation. After confirming that it is acceptable, click OK in the dialog box.
After the pre-check succeeds, click Purchase to go to the Purchase Data Migration Instance page.
After the purchase is successful, you can start the data migration task. For more information about how to purchase a data migration instance, see Purchase a data migration instance. If you do not need to purchase a data migration instance at this time, click Save to go to the details page of the data migration task. You can manually purchase a data migration instance later as needed.
You can click Configure Validation Task in the upper-right corner of the data migration details page to compare the data differences between the source and target databases. For more information, see the "Create a data validation task" topic.
The data migration service allows you to modify the migration objects when the task is running. For more information, see View and modify migration objects. After the data migration task is started, it is executed based on the selected migration types. For more information, see the "View migration details" section in View details of a data migration task.