You can create a data migration task to migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database. By performing schema migration, full migration, and incremental synchronization, you can seamlessly migrate existing business data and incremental data from the source database to the Oracle database.
Notice
If a data migration task is inactive for a long time (its status is Failed, Paused, or Completed), it may not be recoverable due to factors such as the retention period of incremental logs. The data migration service automatically releases tasks that have been inactive for more than 7 days to free up resources. We recommend that you configure alerts for the task and address any related issues promptly.
Background information
Data Migration Service also allows you to aggregate data from multiple tables in an Oracle-compatible tenant of OceanBase Database into the same table in an Oracle database. This feature does not include schema migration; only full migration and incremental synchronization are performed. The following limitations apply (see the sketch after this list for an example):
For full migration and incremental synchronization, the source database must have the corresponding columns. If this requirement is not met, an error will be returned during data migration.
The source table must have a primary key column.
The target table can have columns that do not exist in the source table.
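For example, here is a minimal sketch of the aggregation scenario. All table and column names are hypothetical; the point is that each source table has a primary key and the target table may carry extra columns.

```sql
-- Source (Oracle-compatible tenant of OceanBase Database): each table has a primary key.
CREATE TABLE ORDERS_2023 (ORDER_ID NUMBER PRIMARY KEY, AMOUNT NUMBER(10,2), GMT_CREATE DATE);
CREATE TABLE ORDERS_2024 (ORDER_ID NUMBER PRIMARY KEY, AMOUNT NUMBER(10,2), GMT_CREATE DATE);

-- Target (Oracle database): both source tables are aggregated into this table.
-- The extra REGION column is allowed because the target table may contain
-- columns that do not exist in the source tables.
CREATE TABLE ORDERS_ALL (
  ORDER_ID   NUMBER        PRIMARY KEY,
  AMOUNT     NUMBER(10,2),
  GMT_CREATE DATE,
  REGION     VARCHAR2(32)
);
```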
Prerequisites
The corresponding schema has been created in the Oracle database.
You have created the source OceanBase instance and tenant. For more information, see Create an instance and Create a tenant.
You have created dedicated database users for data migration in the source and the target, and granted required privileges to the users. For more information, see User privileges.
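The referenced User privileges topic lists the exact privileges required. As an illustration only, a hedged sketch of creating a dedicated migration user on the target Oracle database might look like the following; the user name, password, and schema name are placeholders, and the actual grants depend on the migration types you select.

```sql
-- Hypothetical dedicated migration user on the target Oracle database.
CREATE USER oms_migrator IDENTIFIED BY "Your_password_1";
GRANT CREATE SESSION TO oms_migrator;
-- Write access to the target schema objects is also required, for example:
GRANT SELECT, INSERT, UPDATE, DELETE ON target_schema.orders_all TO oms_migrator;
```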
Limitations
Only users with the Project Owner, Project Admin, or Data Services Admin role can create data migration tasks.
Limitations on the source database
Do not perform DDL operations to modify the schema or database structure during schema migration and full migration. Otherwise, the data migration task may be interrupted.
At present, the data migration service supports OceanBase Database (Oracle-compatible mode) V2.x, V3.x, and V4.x as the source, and Oracle Database 10g, 11g, 12c, 18c, and 19c as the target. For Oracle Database 12c and later versions, both container databases (CDBs) and pluggable databases (PDBs) are supported.
The data migration service does not support incremental data migration for a table whose data in all columns is of any of the following large object (LOB) types: BLOB, CLOB, and NCLOB.
Data Migration Service does not support migrating non-partitioned secondary tables from an Oracle-compatible tenant of OceanBase Database to an Oracle database.
Data Migration Service does not support expression-based indexes.
Data Migration Service supports multi-table aggregation only for tables with primary keys.
Data Migration Service supports migrating databases, tables, and column objects with ASCII-compliant names that do not contain special characters (spaces, line breaks, or .|"'`()=;/&).
Data Migration Service does not support triggers in the target database. If triggers exist in the target database, data migration may fail.
Data source identifiers and user accounts must be globally unique in the data migration service system.
You cannot create a database object with a name exceeding 30 bytes in an Oracle database version 11g or earlier. When you migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database, make sure that object names in the source database do not exceed 30 bytes.
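To check this in advance, you can list source object names longer than 30 bytes with a query like the following, run in the Oracle-compatible tenant of OceanBase Database. This is only a sketch that assumes the standard USER_OBJECTS dictionary view and the LENGTHB function are available.

```sql
-- List objects of the current user whose names exceed 30 bytes.
SELECT OBJECT_NAME, OBJECT_TYPE
FROM   USER_OBJECTS
WHERE  LENGTHB(OBJECT_NAME) > 30;
```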
Considerations
If you use OceanBase Database V4.x, we recommend that you enable log archiving. If you enable log archiving, the data migration service implements incremental synchronization by consuming archive logs after clogs are recycled.
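A minimal sketch of enabling log archiving for an OceanBase Database V4.x tenant is shown below. The archive destination path is a placeholder, and the exact statements and supported destinations depend on your OceanBase version and deployment; see the OceanBase documentation on log archiving for the authoritative procedure.

```sql
-- Set the archive destination (placeholder path), then enable archiving for the tenant.
ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'LOCATION=file:///data/ob_archive';
ALTER SYSTEM ARCHIVELOG;
```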
If the UTF-8 character set is used in the source, we recommend that you use a compatible character set, such as UTF-8 or UTF-16, in the target to avoid garbled characters.
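To confirm the character sets on both sides before migration, a query sketch like the following can be used, assuming the standard NLS_DATABASE_PARAMETERS view is available in both the Oracle database and the Oracle-compatible tenant.

```sql
-- Run on both the source tenant and the target Oracle database and compare the results.
SELECT VALUE
FROM   NLS_DATABASE_PARAMETERS
WHERE  PARAMETER = 'NLS_CHARACTERSET';
```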
If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization.
For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.
Field lengths may differ between the Oracle-compatible mode of OceanBase Database and the Oracle database, depending on the character sets used on each side.
If the source table contains the field name OMS_PK_INCRMT, the incremental synchronization task cannot be recovered after it is interrupted.
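To check for this in advance, you can look for the column name in the source tenant, assuming the standard USER_TAB_COLUMNS dictionary view is available:

```sql
-- List tables of the current user that already contain a column named OMS_PK_INCRMT.
SELECT TABLE_NAME, COLUMN_NAME
FROM   USER_TAB_COLUMNS
WHERE  COLUMN_NAME = 'OMS_PK_INCRMT';
```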
If you change the unique index at the target without enabling DDL synchronization, you must restart the data migration task. Otherwise, data may be inconsistent.
Check the objects in the recycle bin of the Oracle-compatible mode of OceanBase Database. If the number of objects exceeds 100, internal table queries may time out. Therefore, you must clean up the objects in the recycle bin.
Check the objects in the recycle bin of the Oracle database. If the number of objects exceeds 100, internal table queries may time out. Therefore, you must clean up the objects in the recycle bin.
Check whether the recycle bin is enabled:

```sql
SELECT Value FROM V$parameter WHERE Name = 'recyclebin';
```

Check the number of objects in the recycle bin:

```sql
SELECT COUNT(*) FROM RECYCLEBIN;
```
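If the count is large, the recycle bin can be purged. A hedged example follows; PURGE RECYCLEBIN clears only the current user's recycle bin, and purging all recycle bins on the Oracle database requires administrative privileges.

```sql
-- Clear the current user's recycle bin (supported by Oracle and the
-- Oracle-compatible mode of OceanBase Database).
PURGE RECYCLEBIN;

-- On the Oracle database, clear all recycle bins (requires the SYSDBA privilege).
PURGE DBA_RECYCLEBIN;
```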
If the source is OceanBase Database and DDL synchronization is enabled, we recommend that you restart the data migration task if the source database or table is renamed (RENAME) to avoid data loss during incremental synchronization.
If you select only Incremental Synchronization when you create a data migration task, make sure that the local incremental logs of the source database are retained for more than 48 hours.
If you select Full Migration and Incremental Synchronization when you create the data migration task, the data migration service requires that the local incremental logs in the source database be retained for at least 7 days. Otherwise, the data migration task may fail or the data in the source and target databases may be inconsistent because the data migration service cannot obtain incremental logs.
Supported source and target instance types
| Cloud vendor | Source | Target |
|---|---|---|
| AWS | OceanBase Database (Oracle compatible, transactional) | Oracle RDS |
| AWS | OceanBase Database (Oracle compatible, transactional) | Self-managed Oracle |
| Huawei Cloud | OceanBase Database (Oracle compatible, transactional) | Self-managed Oracle |
| Google Cloud | OceanBase Database (Oracle compatible, transactional) | Self-managed Oracle |
| Alibaba Cloud | OceanBase Database (Oracle compatible, transactional) | Self-managed Oracle |
Data type mappings
| OceanBase Database (Oracle Compatible Mode) | Oracle database |
|---|---|
| CHAR(n CHAR) | CHAR(n CHAR) |
| CHAR(n BYTE) | CHAR(n BYTE) |
| NCHAR(n) | NCHAR(n) |
| NCHAR(n BYTE) | NCHAR(n) |
| VARCHAR2(n) | VARCHAR2(n) |
| NVARCHAR2(n) | NVARCHAR2(n) |
| NVARCHAR2(n BYTE) | NVARCHAR2(n) |
| NUMBER(n) | NUMBER(n) |
| NUMBER(p, s) | NUMBER(p,s) |
| RAW | RAW |
| CLOB | CLOB |
| BLOB | BLOB |
| FLOAT(n) | FLOAT(n) |
| BINARY_FLOAT | BINARY_FLOAT |
| BINARY_DOUBLE | BINARY_DOUBLE |
| DATE | DATE |
| TIMESTAMP | TIMESTAMP |
| TIMESTAMP WITH TIME ZONE | TIMESTAMP WITH TIME ZONE |
| TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP WITH LOCAL TIME ZONE |
| INTERVAL YEAR(p) TO MONTH | INTERVAL YEAR(p) TO MONTH |
| INTERVAL DAY(p) TO SECOND | INTERVAL DAY(p) TO SECOND |
| ROWID | ROWID |
| UROWID | UROWID |
Procedure
Create a data migration task.

Log in to the OceanBase Cloud console.
In the left-side navigation pane, select Data Services > Migrations.
On the Migrations page, click the Migrate Data tab.
On the Migrate Data tab, click Create Task in the upper-right corner.
In the task name field, enter a custom migration task name.
We recommend that you use a combination of Chinese characters, numbers, and English letters. The name cannot contain spaces and must be less than 64 characters in length.
On the Configure Source & Target page, configure the parameters.
In the Source Profile section, configure the parameters.
If you want to reference the configurations of an existing data source, click Quick Fill next to Source Profile and select the required data source from the drop-down list. The parameters in the Source Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon to the right of Quick Fill.
You can also click Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage data sources of different types. For more information, see Data Source.
| Parameter | Description |
|---|---|
| Cloud Vendor | At present, the supported cloud vendors are AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud. |
| Database Type | The type of the source. Select OceanBase Oracle Compatible. |
| Instance Type | Includes Dedicated (Transactional) and Self-managed Database. Currently, only migration of data from Dedicated (Transactional) to Oracle is supported. |
| Region | The region of the source database. |
| Instance | The ID or name of the instance to which the Oracle-compatible tenant of OceanBase Database belongs. You can view the ID or name of the instance on the Instances page. <br>Note: When your cloud vendor is Alibaba Cloud, you can also select a cross-account authorized instance of an Alibaba Cloud primary account. For more information, see Alibaba Cloud account authorization. |
| Tenant | The ID or name of the Oracle-compatible tenant of OceanBase Database. You can expand the information about the corresponding instance on the Instances page and view the ID or name of the tenant. |
| Database Account | The name of the migration user in the Oracle-compatible tenant of OceanBase Database. |
| Password | The password of the database user. |

In the Target Profile section, configure the parameters.
If you want to reference the configurations of an existing data source, click Quick Fill next to Target Profile and select the required data source from the drop-down list. The parameters in the Target Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon to the right of Quick Fill.
You can also click Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage data sources of different types. For more information, see Data Source.
| Parameter | Description |
|---|---|
| Cloud Vendor | AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud are supported. You can choose the same cloud vendor as the source or perform cross-cloud data migration. <br>Notice: Cross-cloud vendor data migration is disabled by default. To use this feature, contact our technical support. |
| Database Type | The type of the target. Select Oracle. |
| Instance Type | When the cloud vendor is AWS, the supported instance types are RDS Oracle and Self-managed Oracle. When the cloud vendor is Huawei Cloud or Google Cloud, the supported instance type is Self-managed Oracle. |
| Region | The region of the target database. |
| Connection Type | Available connection types are Endpoint and Public IP. <br>- If you select the Endpoint connection type, first add the account ID displayed on the page to the whitelist of your endpoint service so that the endpoint from that account can connect to the endpoint service. For more information, see the corresponding topic under Connect via private network. <br>- When the cloud vendor is AWS and you selected Acceptance required for the Require acceptance for endpoint parameter when creating the endpoint service, the data migration service prompts you to accept the endpoint connection request in the AWS console the first time it connects to the PrivateLink. <br>- When the cloud vendor is Google Cloud, add authorized projects to Published Services. After authorization, no manual authorization is needed when you test the data source connection. <br>Note: You must select the source and target regions before the page displays the data source IP addresses to add to the whitelist. |
| Connection Details | When Connection Type is Endpoint, enter the endpoint service name. When Connection Type is Public IP, enter the IP address and port number of the database host. |
| Service Name | The service name of the Oracle database (see the lookup example after this table). |
| Database Account | The name of the Oracle database user for data migration. |
| Password | The password of the database user. |
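If you are unsure of the Oracle service name, a lookup sketch such as the following can be run on the target Oracle database; V$PARAMETER and V$SERVICES are standard Oracle dynamic views, and your account needs the privileges to query them.

```sql
-- List the service names configured for the instance.
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'service_names';
-- Or list the services currently registered with the database.
SELECT NAME FROM V$SERVICES;
```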
Click Test and Continue.
On the Select Type & Objects page, configure the parameters.
In the Sync Topology section, select One-way Sync.
Data migration supports One-way Sync and Two-way Sync. This topic describes how to perform one-way synchronization. For more information about two-way synchronization, see Configure a two-way synchronization task.
In the Select Type & Objects step, select the migration type of the current data migration task.
Options of Migration Type are Schema Migration, Full Migration, and Incremental Synchronization.
| Parameter | Description |
|---|---|
| Schema Migration | For schema migration, you must define the character set mapping. Data migration only copies the schema of the source database to the target database and does not affect the source data. |
| Full Migration | After a full migration task is created, the data migration service migrates existing data from the source database to the corresponding tables in the target database. |
| Incremental Synchronization | After an incremental synchronization task is created, the data migration service synchronizes changed data (inserts, updates, and deletes) in the source database to the corresponding tables in the target database. Incremental Synchronization includes DML Synchronization and DDL Synchronization, which you can customize based on your needs. For more information, see Customize DML/DDL settings. |

In the Select Migration Objects section, specify how to select migration objects.
You can select migration objects in two ways: Specify Objects and Match by Rule.
In the Select Migration Scope section, select migration objects.
If you select Specify Objects, data migration supports Table-level and Database-level. Table-level migration allows you to select one or more tables or views from one or more databases as migration objects. Database-level migration allows you to select an entire database as a migration object. If you select table-level migration for a database, database-level migration is no longer supported for that database. Conversely, if you select database-level migration for a database, table-level migration is no longer supported for that database.
After selecting Table-level or Database-level, select the objects to be migrated in the left pane and click > to add them to the right pane.
Data migration allows you to import migration objects and supports renaming objects, setting row filters, viewing column information, and removing objects from the list on the right.

Note
If you select Database-level:
The list on the right displays only database names and does not show specific objects.
If you select Incremental Synchronization > DDL Synchronization as the migration type, new tables in the source database can be synchronized to the target database.
| Action | Description |
|---|---|
| Import Objects | In the right-side list, click Import in the upper-right corner. For more information, see Import migration objects. |
| Rename | Data migration allows you to rename migration objects. For more information, see Rename database tables. |
| Row Filtering | Data migration allows you to filter rows based on a WHERE condition (see the example after this table). For more information, see Filter data based on SQL conditions. You can also view the column information of migration objects in the View Column section. |
| Remove/Clear All | Data migration allows you to remove migration objects that have been added to the right-side list. To remove a single migration object, click the Remove icon next to the object in the right-side list. To remove all migration objects, click Clear All in the upper-right corner of the right-side list, and then click OK in the dialog box that appears. |
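For example, a hypothetical row filter for the Row Filtering action could restrict migration to recent rows. The column name below is an assumption, and the exact input format (for example, whether the WHERE keyword is omitted) is described in Filter data based on SQL conditions.

```sql
GMT_CREATE > TO_DATE('2024-01-01', 'YYYY-MM-DD')
```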
If you select Match by Rule, for more information, see Configure database-to-database matching rules.
Click Next. On the Migration Options page, configure the parameters.
Full migration
In the Select Type & Objects step, One-way Sync > Full Migration must be selected to display the following parameters.

| Parameter | Description |
|---|---|
| Read Concurrency | The number of concurrent threads for reading data from the source during full migration. The maximum value is 512. A high number of concurrent threads may put heavy pressure on the source and affect business operations. |
| Write Concurrency | The number of concurrent threads for writing data to the target during full migration. The maximum value is 512. A high number of concurrent threads may put heavy pressure on the target and affect business operations. |
| Rate Limiting for Full Migration | Specifies whether to limit the full migration rate. If you enable this option, you must also set the RPS (the maximum number of data rows migrated to the target per second during full migration) and BPS (the maximum amount of data migrated to the target per second during full migration). <br>Note: The RPS and BPS values specified here only provide throttling. The actual full migration performance is subject to factors such as the source, the target, and the instance specifications. |
| Handle Non-empty Tables in Target Database | The strategy for handling existing records in target tables. Valid values: Stop Migration and Ignore. <br>- If you select Stop Migration, data migration reports an error when a target table contains data, indicating that migration is not allowed. Handle the data in the target database before resuming migration. <br>Notice: If you click Restore after the error occurs, data migration ignores this setting and continues to migrate table data. Proceed with caution. <br>- If you select Ignore, when a target table contains data, data migration records conflicting data in logs and retains the original data. |
| Post-Indexing | Specifies whether index creation can be deferred until full migration is completed. If you select this option, note the following items. <br>Notice: Before you select this option, make sure that you have selected both Schema Migration and Full Migration on the Select Migration Type page. <br>- Only non-unique key indexes support post-indexing. <br>- If post-indexing is enabled, we recommend that you adjust the following tenant parameters of OceanBase Database based on the hardware conditions and current business traffic (see the statements below this table). |

```sql
-- Limit the size of the temporary file buffer.
ALTER SYSTEM SET _temporary_file_io_area_size = '10' tenant = 'xxx';
-- Disable throttling for OceanBase Database V4.x.
ALTER SYSTEM SET sys_bkgd_net_percentage = 100;
```
Incremental synchronization
In the Select Type & Objects step, One-way Sync > Incremental Synchronization must be selected to display the following parameters.

| Parameter | Description |
|---|---|
| Write Concurrency | The number of concurrent threads for writing data to the target during incremental synchronization. The maximum value is 512. A high number of concurrent threads may put heavy pressure on the target and affect business operations. |
| Rate Limiting for Incremental Migration | Specifies whether to limit the incremental synchronization rate. If you enable this option, you must also set the RPS (the maximum number of data rows synchronized to the target per second) and BPS (the maximum amount of data synchronized to the target per second). <br>Note: The RPS and BPS values specified here only provide throttling. The actual incremental synchronization performance is subject to factors such as the source, the target, and the instance specifications. |
| Incremental Synchronization Start Timestamp | If you selected Full Migration as a migration type, this parameter is not displayed. If you selected Incremental Synchronization but not Full Migration, specify the point in time after which data is to be synchronized. The default value is the current system time. For more information, see Set the incremental synchronization timestamp. |
Click Next to perform a precheck on the data migration task.
In the Pre-check step, data migration checks whether the database user has the read and write permissions and whether the database is connected to the network. You can start the data migration task only after all checks pass. If the precheck fails:
You can troubleshoot and fix the issue and then perform the precheck again until the precheck succeeds.
You can also click Skip in the Actions column of the failed precheck item. A dialog box appears, prompting you to confirm whether you want to skip this operation. Click OK in the dialog box to confirm.
After the pre-check succeeds, click Purchase to go to the Purchase Data Migration Instance page.
After the purchase succeeds, you can start the data migration task. For more information about how to purchase a data migration instance, see Purchase a data migration instance. If you do not need to purchase a data migration instance at this time, click Save to go to the details page of the data migration task. You can manually purchase a data migration instance later as needed.
You can click Configure Validation Task in the upper-right corner of the details page to compare the data differences between the source database and the target database. For more information, see Create a data validation task.
The data migration service allows you to modify the migration objects when the task is running. For more information, see View and modify migration objects. After the data migration task is started, it is executed based on the selected migration types. For more information, see the "View migration details" section in View details of a data migration task.