This topic describes how to use the data migration service to migrate data from OceanBase Database in Oracle compatible mode to an Oracle database.
Notice
If a data migration task is inactive for a long time (that is, its status is Failed, Paused, or Completed), it may become unrecoverable due to factors such as the retention period of incremental logs. The data migration service automatically releases tasks that have been inactive for more than 7 days to free up resources. We recommend that you configure alerts for the task and address any related issues promptly.
Background information
When you create a data migration task in the OceanBase Cloud console to migrate data from an OceanBase database in the Oracle compatible mode to an Oracle database, you can seamlessly migrate existing business data and incremental data from the source database to the Oracle database through schema migration, full migration, and incremental synchronization.
Data Migration Service also allows you to aggregate data from multiple tables in an OceanBase database in Oracle compatible mode into the same table in an Oracle database. In this scenario, schema migration is not supported; only full migration and incremental synchronization are supported. The following limitations apply to this feature:
During full migration and incremental synchronization, the corresponding columns must exist in the source tables. Otherwise, an error is returned during data migration.
The source table must have a primary key column.
The target table can have columns that do not exist in the source table.
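Because multi-table aggregation requires primary keys on the source tables, you may want to verify this in advance. The following query is a sketch run in the source OceanBase tenant (Oracle compatible mode); the schema name APP_USER is a hypothetical placeholder, not a value from this topic.

```sql
-- List tables under the hypothetical APP_USER schema that have no primary key constraint.
SELECT t.table_name
FROM all_tables t
WHERE t.owner = 'APP_USER'
  AND NOT EXISTS (
    SELECT 1
    FROM all_constraints c
    WHERE c.owner = t.owner
      AND c.table_name = t.table_name
      AND c.constraint_type = 'P'
  );
```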
Prerequisites
The corresponding schema has been created in the Oracle database.
You have created the source OceanBase instance and tenant. For more information, see Create an instance and Create a tenant.
Dedicated database users for data migration tasks have been created in the OceanBase database in the Oracle compatible mode and the Oracle database, and the users have been granted the corresponding privileges. For more information, see User privileges.
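As a reference only, the statements below sketch how a dedicated migration user might be created in the source OceanBase tenant (Oracle compatible mode). The user name, password, and privileges shown are illustrative assumptions; the Oracle-side migration user also needs additional (for example, write) privileges, so always follow the grants listed in User privileges.

```sql
-- Illustrative only: create a dedicated migration user in the source tenant.
-- Replace the user name and password, and grant the exact privileges listed in User privileges.
CREATE USER oms_migrator IDENTIFIED BY StrongPassw0rd;
GRANT CREATE SESSION TO oms_migrator;
GRANT SELECT ANY TABLE TO oms_migrator;
```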
Limitations
Only users with the Project Owner, Project Admin, or Data Services Admin role can create data migration tasks.
Limitations on the source database
Do not perform DDL operations that modify database or table schemas during schema migration or full migration. Otherwise, the data migration task may be interrupted.
At present, the data migration service supports Oracle Database 11g and 19c and OceanBase Database V3.x and V4.x in the Oracle compatible mode.
The data migration service does not support incremental data migration for a table whose data in all columns is of any of the following large object (LOB) types: BLOB, CLOB, and NCLOB.
Data Migration Service does not support migrating non-partitioned secondary tables from an OceanBase database in the Oracle compatible mode to an Oracle database.
Data Migration Service does not support expression-based indexes.
Data Migration Service supports multi-table aggregation only for tables with primary keys.
Data Migration Service supports migrating databases, tables, and column objects with ASCII-compliant names that do not contain special characters (spaces, line breaks, or .|"'`()=;/&).
Data Migration Service does not support triggers in the target database. If triggers exist, data migration may fail.
Data source identifiers and user accounts must be globally unique in the data migration service system.
You cannot create a database object with a name exceeding 30 bytes in an Oracle database version 11g or earlier. When you migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database, make sure that object names in the source database do not exceed 30 bytes.
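To find such objects ahead of time, you can run a query similar to the following in the source tenant. This is a sketch that uses LENGTHB to measure the byte length of object names; scanning all schemas visible to the current user is an assumption, so adjust the scope as needed.

```sql
-- Find source object names longer than 30 bytes, which cannot be created in Oracle 11g or earlier.
SELECT owner, object_type, object_name
FROM all_objects
WHERE LENGTHB(object_name) > 30;
```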
Considerations
If you use OceanBase Database V4.x, we recommend that you enable log archiving. If you enable log archiving, the data migration service implements incremental synchronization by consuming archive logs after clogs are recycled.
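As a hedged example, log archiving for an OceanBase Database V4.x tenant can typically be enabled with statements similar to the following. The archive destination path is an assumption; follow the OceanBase log archiving documentation for the exact configuration in your deployment.

```sql
-- Illustrative only: set an archive destination (the path is an assumption) and enable archiving for the tenant.
ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'LOCATION=file:///data/oceanbase/archive';
ALTER SYSTEM ARCHIVELOG;
```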
If the UTF-8 character set is used in the source, we recommend that you use a compatible character set, such as UTF-8 or UTF-16, in the target to avoid garbled characters.
If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization.
For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.
Field lengths may differ between the source and the target depending on the character sets used by the Oracle-compatible mode of OceanBase Database and the Oracle database.
If the source table contains the field name OMS_PK_INCRMT, the incremental synchronization task cannot be recovered after it is interrupted.
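You can check for this reserved column name in advance with a query like the following sketch, run in the source tenant (scanning all schemas visible to the current user is an assumption):

```sql
-- Find any source table that already uses the reserved column name OMS_PK_INCRMT.
SELECT owner, table_name, column_name
FROM all_tab_columns
WHERE column_name = 'OMS_PK_INCRMT';
```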
If you change the unique index at the destination without enabling DDL synchronization, you must restart the data migration task. Otherwise, data may be inconsistent.
Check the objects in the recycle bins of both the Oracle-compatible mode of OceanBase Database and the Oracle database. If the number of objects in a recycle bin exceeds 100, internal table queries may time out. In this case, clean up the objects in the recycle bin.
Check whether the recycle bin is enabled:

```sql
SELECT Value FROM V$parameter WHERE Name = 'recyclebin';
```

Check the number of objects in the recycle bin:

```sql
SELECT COUNT(*) FROM RECYCLEBIN;
```
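If the count is large, you can clean up the recycle bin, for example with the following statement. Purging is irreversible, so confirm that the objects are no longer needed before running it.

```sql
-- Remove all objects from the current user's recycle bin. This cannot be undone.
PURGE RECYCLEBIN;
```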
If the source is OceanBase Database and DDL synchronization is enabled, we recommend that you restart the data migration task if the source database or table is renamed (RENAME) to avoid data loss during incremental synchronization.
If you select only Incremental Synchronization when you create a data migration task, make sure that the local incremental logs of the source database are retained for more than 48 hours.
If you select Full Migration and Incremental Synchronization when you create the data migration task, the data migration service requires that the local incremental logs in the source database be retained for at least 7 days. Otherwise, the data migration task may fail or the data in the source and target databases may be inconsistent because the data migration service cannot obtain incremental logs.
Supported instance types for the source and destination
| Cloud vendor | Source | Destination |
|---|---|---|
| AWS | OceanBase Database (Oracle compatible, transactional) | Oracle RDS |
| AWS | OceanBase Database (Oracle compatible, transactional) | Self-managed Oracle |
| Huawei Cloud | OceanBase Database (Oracle compatible, transactional) | Self-managed Oracle |
| Google Cloud | OceanBase Database (Oracle compatible, transactional) | Self-managed Oracle |
| Alibaba Cloud | OceanBase Database (Oracle compatible, transactional) | Self-managed Oracle |
Data type mappings
| OceanBase Database in Oracle compatible mode | Oracle database |
|---|---|
| CHAR(n CHAR) | CHAR(n CHAR) |
| CHAR(n BYTE) | CHAR(n BYTE) |
| NCHAR(n) | NCHAR(n) |
| NCHAR(n BYTE) | NCHAR(n) |
| VARCHAR2(n) | VARCHAR2(n) |
| NVARCHAR2(n) | NVARCHAR2(n) |
| NVARCHAR2(n BYTE) | NVARCHAR2(n) |
| NUMBER(n) | NUMBER(n) |
| NUMBER(p, s) | NUMBER(p,s) |
| RAW | RAW |
| CLOB | CLOB |
| BLOB | BLOB |
| FLOAT(n) | FLOAT(n) |
| BINARY_FLOAT | BINARY_FLOAT |
| BINARY_DOUBLE | BINARY_DOUBLE |
| DATE | DATE |
| TIMESTAMP | TIMESTAMP |
| TIMESTAMP WITH TIME ZONE | TIMESTAMP WITH TIME ZONE |
| TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP WITH LOCAL TIME ZONE |
| INTERVAL YEAR(p) TO MONTH | INTERVAL YEAR(p) TO MONTH |
| INTERVAL DAY(p) TO SECOND | INTERVAL DAY(p) TO SECOND |
| ROWID | ROWID |
| UROWID | UROWID |
Procedure
Create a data migration task.

Log in to the OceanBase Cloud console.
In the left-side navigation pane, choose Data Services > Data Migration.
On the Data Migration page, click the Data Migration tab.
On the Data Migration tab, click Create Task in the upper-right corner.
On the Configure Source & Target page, configure the parameters.
In the task name field, enter a custom migration task name.
We recommend a combination of Chinese characters, digits, and letters. The name cannot contain spaces and must be less than 64 characters in length.
If you want to reuse the configuration of an existing data source, click Quick Fill next to Source Profile and select the data source from the drop-down list. The parameters in the Source Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon on the right side of Quick Fill.
You can also choose Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage data sources of different types. For more information, see Data sources.
| Parameter | Description |
|---|---|
| Cloud Vendor | At present, the supported cloud vendors are Amazon Cloud, Huawei Cloud, Google Cloud, and Alibaba Cloud. |
| Database Type | The type of the source. Select OceanBase Oracle Compatible. |
| Instance Type | The options are Dedicated (Transactional) and Self-managed database. Currently, only migration from Dedicated (Transactional) to Oracle is supported. |
| Region | The region of the source database. |
| Instance | The ID or name of the instance to which the Oracle-compatible tenant of OceanBase Database belongs. You can view the ID or name of the instance on the Instances page. Note: If your cloud vendor is Alibaba Cloud, you can also select a cross-account authorized instance of an Alibaba Cloud primary account. For more information, see Alibaba Cloud account authorization. |
| Tenant | The ID or name of the Oracle-compatible tenant of OceanBase Database. You can expand the information about the corresponding instance on the Instances page and view the ID or name of the tenant. |
| Database Account | The name of the migration user in the Oracle-compatible tenant of OceanBase Database. |
| Password | The password of the database user. |

In the Target section, configure the parameters.
If you want to reuse the configuration of an existing data source, click Quick Fill next to Target Profile and select the data source from the drop-down list. The parameters in the Target Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon on the right side of Quick Fill.
You can also choose Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage data sources of different types. For more information, see Data sources.
| Parameter | Description |
|---|---|
| Cloud Vendor | The supported cloud vendors are Amazon Cloud, Huawei Cloud, Google Cloud, and Alibaba Cloud. You can choose the same cloud vendor as the source or perform cross-cloud data migration. Notice: Cross-cloud vendor data migration is disabled by default. To use this feature, contact technical support. |
| Database Type | The type of the target. Select Oracle. |
| Instance Type | If you select Amazon Cloud as the cloud vendor, the supported instance types are RDS Oracle and Self-managed Oracle. If you select Huawei Cloud or Google Cloud as the cloud vendor, the supported instance type is Self-managed Oracle. |
| Region | The region of the target database. |
| Connection Type | The available connection types are Endpoint and Public IP. If you choose the Endpoint connection type, first add the authorized account ID displayed on the page to the whitelist of your endpoint service so that the endpoint from that account can connect to the endpoint service. For more information, see Adding private network whitelist. If your cloud vendor is Amazon Cloud and the Acceptance required for endpoint connections parameter was set to Enabled when you created the endpoint service, the data migration service prompts you to accept the endpoint connection request in the Amazon Cloud console the first time it connects over the private connection. If your cloud vendor is Google Cloud, add authorized projects to Published Services. After authorization, no manual authorization is needed when you test the data source connection. |
| Connection Details | If you set Connection Type to Endpoint, enter the endpoint service name. If you set Connection Type to Public IP, enter the IP address and port number of the database host. |
| Service Name | The service name of the Oracle database (see the example query after this table). |
| Database Account | The name of the Oracle database user for data migration. |
| Password | The password of the database user. |
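If you are unsure of the service name to enter, the following query, run on the target Oracle database, is one way to look it up. It is shown as a sketch; your environment may expose additional dynamically registered services.

```sql
-- Show the service names configured on the target Oracle database.
SELECT value FROM v$parameter WHERE name = 'service_names';
```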
Click Test and Continue.
On the Select Type & Objects page, configure the parameters.
In the Sync Topology section, select One-way Sync.
Data migration supports One-way Sync and Two-way Sync. This topic describes how to perform one-way synchronization. For more information about two-way synchronization, see Configure a two-way synchronization task.
In the Select Type & Objects step, select the migration type of the current data migration task.
Options of Migration Type are Schema Migration, Full Migration, Incremental Synchronization, and Full Verification.

| Migration Type | Description |
|---|---|
| Schema Migration | For schema migration, you must define the character set mapping. Data migration only copies the schema of the source database to the destination database and does not affect the source data. |
| Full Migration | After a full migration task is created, data migration migrates the existing data in the source database to the corresponding tables in the destination database. |
| Incremental Synchronization | After an incremental synchronization task is created, data migration synchronizes data that has changed in the source database (new, modified, or deleted data) to the corresponding tables in the destination database. Incremental Synchronization includes DML Synchronization and DDL Synchronization, which you can customize based on your needs. For more information, see Customize DML/DDL settings. |
| Full Verification | After full migration is completed and incremental data is synchronized to the destination database so that the destination data is basically in sync with the source, data migration automatically initiates a full data verification task for the source and destination tables. Note: Data migration supports full data verification only for tables with unique keys (tables with primary keys or non-null unique keys). |

In the Select Migration Objects section, specify how you want to select migration objects.
You can select migration objects in two ways: Specify Objects and Match by Rule.
In the Select Migration Scope section, select migration objects.
If you select Specify Objects, data migration supports two modes: Database Object (table-level migration) and Entire Database (full database migration). Table-level migration allows you to select one or more tables or views in one or more databases as migration objects. Full database migration allows you to select an entire database as a migration object. For a given database, you can use only one of the two modes: a database migrated at the table level cannot also be migrated as an entire database, and vice versa.
After you select Database Object or Entire Database, select the migration objects on the left and click > to add them to the list on the right.
Data migration allows you to import migration objects and supports renaming them, setting row filters, viewing column information, and removing selected migration objects.

Note
If you select Full Database Migration:
The list on the right displays only database names and does not show specific objects.
If you select Incremental Synchronization > DDL Synchronization as the migration type, new tables in the source database can be synchronized to the destination database.
| Action | Description |
|---|---|
| Import Objects | In the right-side list, click Import Objects in the upper-right corner. For more information, see Import migration objects. |
| Rename | Data migration allows you to rename migration objects. For more information, see Rename database tables. |
| Row Filtering | Data migration allows you to filter rows based on a WHERE condition (see the example condition after this table). For more information, see Filter data based on SQL conditions. You can also view the column information of migration objects in the View Column section. |
| Remove/Remove All | Data migration allows you to remove migration objects that you have temporarily selected. To remove a single migration object, click the Remove icon next to the object in the right-side list. To remove all migration objects, click Clear All in the upper-right corner of the right-side list, and then click OK in the dialog box that appears. |
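As an illustration of row filtering, the condition below uses hypothetical columns (ORDER_DATE and STATUS) that are not part of this topic; adapt it to your own schema before applying it.

```sql
-- Hypothetical row filter: migrate only paid orders created in 2024 or later.
ORDER_DATE >= DATE '2024-01-01' AND STATUS = 'PAID'
```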
If you select Match by Rule, see Configure database-to-database matching rules for more information.
Click Next. On the Migration Options page, configure the parameters.
Full migration
The following parameters are displayed only if you selected One-way Sync > Full Migration in the Select Type & Objects step.

| Parameter | Description |
|---|---|
| Read Concurrency Configuration | The number of concurrent threads used to read data from the source during full migration. The maximum is 512. A high concurrency may put heavy pressure on the source and affect business operations. |
| Write Concurrency Configuration | The number of concurrent threads used to write data to the destination during full migration. The maximum is 512. A high concurrency may put heavy pressure on the destination and affect business operations. |
| Limit Full Migration Rate | Specifies whether to limit the full migration rate. If you enable this option, you must also set the RPS (the maximum number of rows migrated to the destination per second during full migration) and BPS (the maximum amount of data migrated to the destination per second during full migration). Note: The RPS and BPS values specified here are used only for throttling. The actual full migration performance is limited by factors such as the source, the destination, and the instance specifications. |
| Handling Strategy for Records in Destination Table Objects | The strategy for handling existing records in destination table objects. Valid values: Stop Migration and Ignore. If you select Stop Migration, data migration reports an error when destination table objects contain data, indicating that migration is not allowed; handle the data in the destination database before resuming migration. Notice: If you click Restore after the error occurs, data migration ignores this setting and continues to migrate table data. Proceed with caution. If you select Ignore, when destination table objects contain data, data migration records conflicting data in logs and retains the existing data. Notice: If you select Ignore, full verification pulls data in IN mode, which means it cannot verify scenarios where the destination contains data that does not exist in the source, and performance degrades to some extent. |
| Allow Index Postponement | Specifies whether index creation can be postponed until full migration is completed. If you select this option, note the following items. Notice: Before you select this option, make sure that you have selected both Schema Migration and Full Migration in the Select Type & Objects step. Only non-unique key indexes can be created after migration. If post-migration index creation is allowed, we recommend that you adjust the following tenant parameters based on the hardware conditions and current business traffic of the OceanBase database (see the statements after this table). |

```sql
-- Limit the size of the temporary file buffer.
ALTER SYSTEM SET _temporary_file_io_area_size = '10' tenant = 'xxx';
-- Disable throttling for OceanBase Database V4.x.
ALTER SYSTEM SET sys_bkgd_net_percentage = 100;
```
Incremental synchronization
The following parameters are displayed only if you selected One-way Sync > Incremental Synchronization in the Select Type & Objects step.

| Parameter | Description |
|---|---|
| Write Concurrency Configuration | The number of concurrent threads used to write data to the destination during incremental synchronization. The maximum is 512. A high concurrency may put heavy pressure on the destination and affect business operations. |
| Limit Incremental Migration Rate | Specifies whether to limit the incremental migration rate. If you enable this option, you must also set the RPS (the maximum number of rows synchronized to the destination per second during incremental synchronization) and BPS (the maximum amount of data synchronized to the destination per second during incremental synchronization). Note: The RPS and BPS values specified here are used only for throttling. The actual incremental synchronization performance is limited by factors such as the source, the destination, and the instance specifications. |
| Incremental Synchronization Start Timestamp | If you selected Full Migration as the migration type, this parameter is not displayed. If you did not select Full Migration but selected Incremental Synchronization, specify the point in time after which data is migrated. The default value is the current system time. For more information, see Set the incremental synchronization timestamp. |
Click Next to perform a precheck on the data migration task.
In the Pre-check step, data migration checks whether the database users have the required read and write privileges and whether network connectivity is normal. You can start the data migration task only after all check items pass. If the precheck fails:
You can troubleshoot and fix the issue and then perform the precheck again until the precheck succeeds.
You can also click Skip in the Actions column of the failed precheck item. A dialog box appears, prompting you to confirm whether you want to skip this operation. Click OK in the dialog box to confirm.
After the precheck succeeds, click Purchase to go to the Purchase a data migration instance page.
After you purchase the data migration instance, you can start the data migration task. For more information, see Purchase a data migration instance. If you do not need to purchase a data migration instance, click Save to go to the details page of the data migration task. You can purchase a data migration instance later as needed.
Data migration services allow you to modify the migration objects during a data migration task. For more information, see View and modify migration objects. After a data migration task is started, it is executed based on the selected migration type. For more information, see the "View migration details" section in View details of a data migration task.