This topic describes how to use OceanBase Migration Service (OMS) to migrate data from the Oracle compatible mode of OceanBase Database (including a physical data source, an ApsaraDB for OceanBase data source, or a standalone data source) to an Oracle database.
Background information
You can create a data migration task in the OMS console to seamlessly migrate the existing business data and incremental data from the Oracle compatible mode of OceanBase Database to an Oracle database through schema migration, full migration, and incremental synchronization.
OMS allows you to aggregate data from multiple tables in the Oracle compatible mode of OceanBase Database to a single table in the Oracle database. No schema migration is required for this process; only full migration and incremental synchronization are needed. Note the following limits on this feature:
For full migration and incremental synchronization, every column in the source table must exist in the target table. If this requirement is not met, OMS returns an error.
The primary key column must exist in the source table.
The target table can have columns that do not exist in the source.
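The three column constraints above can be checked before a task is created. The following is an illustrative Python sketch (not an OMS tool; the column names are hypothetical) that validates a source/target pair against these rules:

```python
def check_aggregation_columns(source_cols, target_cols, pk_cols):
    """Validate the multi-table aggregation limits described above:
    - every source column must exist in the target (otherwise OMS errors out);
    - the primary key columns must exist in the source table;
    - extra target-only columns are allowed."""
    errors = []
    missing_in_target = set(source_cols) - set(target_cols)
    if missing_in_target:
        errors.append(f"columns missing in target: {sorted(missing_in_target)}")
    missing_pk = set(pk_cols) - set(source_cols)
    if missing_pk:
        errors.append(f"primary key columns missing in source: {sorted(missing_pk)}")
    return errors

# Hypothetical example: the target has an extra ETL_TIME column, which is allowed.
print(check_aggregation_columns(
    source_cols=["ID", "NAME"],
    target_cols=["ID", "NAME", "ETL_TIME"],
    pk_cols=["ID"],
))  # []
```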
Prerequisites
You have created the corresponding schema in the target Oracle database.
You have created dedicated database users for the source Oracle compatible mode of OceanBase Database and the target Oracle database, and granted relevant privileges to these users. For more information, see Create a database user.
If you want to migrate tables without primary keys, you must create the `__OCEANBASE_INNER_DRC_USER` user in the corresponding tenant and grant privileges to this user before you run the data migration task. For more information, see Create the `__OCEANBASE_INNER_DRC_USER` user.
Limitations
Limitations on operations in the source database
Do not perform DDL operations that change the database or table schema during schema migration and full migration. Otherwise, the data migration task may be interrupted.
The Oracle database version must be 10g, 11g, 12c, 18c, or 19c. A database of version 12c or later contains a container database (CDB) and pluggable databases (PDB).
Incremental data migration is not supported for a table where all columns are of the LOB type (BLOB, CLOB, or NCLOB).
Data migration from a non-template secondary partitioned table in the Oracle compatible mode of OceanBase Database to an Oracle database is not supported.
Only tables with primary keys can be aggregated.
OMS does not support expression-based indexes.
OMS does not support triggers in the target database. If triggers exist in the target database, the data migration may fail.
OMS does not support a floating-point or double-precision primary key.
Data source identifiers and user accounts must be globally unique in OMS.
Oracle databases of a version earlier than 11g do not support creating database objects with names longer than 30 bytes. Make sure that you do not create database objects in the Oracle compatible mode of OceanBase Database whose names exceed this limit when you migrate the objects to an Oracle database.
OMS only supports migrating databases, tables, and column objects with ASCII-compliant names that do not contain special characters (spaces, line breaks, or .|"'`()=;/&\).
OMS does not support a standby OceanBase database as the source.
Considerations
Take note of the following items when you perform reverse incremental synchronization from an Oracle database to the Oracle compatible mode of OceanBase Database:
If the source Oracle database runs in primary/standby mode or has only a standby database, and the primary and standby databases run on different numbers of instances, some instances may fail to pull incremental logs. In this case, you must manually set the parameters of the Store component to specify the instances from which incremental logs are pulled when logs are pulled from the standby database. The procedure is as follows:
Stop the Store component immediately after it is started.
On the Update Configuration page of the Store component, add the `deliver2store.logminer.instance_threads` parameter to specify the instances from which incremental logs are pulled. Separate multiple instance threads with the `|` character, for example, `1|2|3`. For more information about how to update the component, see Why do I lose archive logs when I perform incremental synchronization from an Oracle standby database to OceanBase Database by using OMS?
After you set the parameters, restart the Store component.
Five minutes later, run `grep 'log entries from' connector/connector.log` to view which instances' logs have been pulled. The thread field in the output indicates the instance.
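The check in the last step can be sketched as follows. This is an illustrative simulation only: the log lines below are fabricated, and on a real OMS deployment you would run the grep against `connector/connector.log` in the Store component's working directory.

```shell
# Simulated connector log (the real file is written by the Store component).
mkdir -p connector
printf "pulled 120 log entries from thread:1\npulled 98 log entries from thread:2\n" > connector/connector.log

# The thread field in each matching line shows which instance's logs were pulled.
grep 'log entries from' connector/connector.log
```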
When incremental synchronization is performed from an Oracle database, we recommend that you keep each archived file smaller than 2 GB. Otherwise, the following risks exist:
The larger an archived file, the more time is required to pull it.
When the source Oracle database runs in primary/standby mode, incremental data is pulled from the standby database, where only archived files are available. An archived file can be pulled only after it has been fully generated. Therefore, the larger the file, the longer the delay before it can be pulled and the more time is required to process it.
The larger the archived file, the more memory the Store component requires at the same pull concurrency.
Retain the archived files of the source Oracle database for at least 2 days. Otherwise, when the incremental data of a specific period increases sharply or the Store component encounters a processing exception, the required archived files may already have been deleted, and the task cannot resume because no data is available to pull.
The parsing of incremental logs of the Oracle database supports a maximum of 5 TB per day.
For data migration tasks from the Oracle compatible mode of OceanBase Database V4.x to Oracle databases of 12c or later, OMS does not support the migration of database objects (including schemas, tables, and columns) whose names are longer than 30 bytes. In this case, perform data migration by using the procedure described in How to migrate an Oracle database object larger than 30 bytes.
OMS does not support the execution of certain `UPDATE` statements on Oracle databases. The following is an example of an unsupported `UPDATE` statement:

```sql
UPDATE TABLE_NAME SET KEY = KEY + 1;
```

In the preceding statement, `TABLE_NAME` indicates the name of a table, and `KEY` indicates a NUMERIC column defined as the primary key.

If the source table has no primary key and contains LOB columns, the data quality of the table may be poor after reverse incremental synchronization.
We recommend that you enable archive logs for OceanBase Database V4.x. If archive logs are enabled, OMS can still consume archive logs to implement incremental synchronization even if Clogs are recycled. For more information about how to enable archive logs, see Archive logs.
If the source character set is UTF-8, we recommend that you use a compatible character set, such as UTF-8 or UTF-16, for the target database to avoid issues such as garbled characters due to incompatible character sets.
If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization or reverse incremental synchronization.
For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.
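The effect of clock skew on the reported latency can be illustrated with a small calculation. This is a hypothetical sketch, assuming latency is computed as the local clock time minus the source commit time:

```python
from datetime import datetime, timedelta

commit_time = datetime(2024, 1, 1, 12, 0, 0)   # when the change committed on the source
true_now = commit_time + timedelta(seconds=2)  # real wall-clock time: 2 s later

skew = timedelta(seconds=-5)                   # local clock runs 5 s behind standard time
measured_latency = (true_now + skew) - commit_time
print(measured_latency.total_seconds())        # -3.0: an impossible negative latency
```

With a clock that runs ahead of standard time, the same calculation would inflate the latency by the skew instead.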
If the Oracle compatible mode of OceanBase Database and the Oracle database use different Oracle character sets, select an appropriate conversion strategy based on the specific scenario of the data migration task.
When migrating a table without a primary key, OMS adds a hidden column to the target table. If the version of the target database is earlier than Oracle Database 12c, OMS adds a non-hidden column to the target table.
If you are using OceanBase Database of a version earlier than V2.2.30, OMS does not support the migration of tables without primary keys.
A table with a primary key has a `pk` or `not null uk` column but no `function-based uk` column. If the source database is earlier than V2.2.77, or the source table has a function-based unique key on a virtual column, OMS may fail to accurately determine whether the table has a function-based unique key because of the global unique index on the table. In this case, full migration and full verification are slow, and there is a risk of data inconsistency during incremental synchronization.
If the source table contains a column named `OMS_PK_INCRMT`, the incremental synchronization task is interrupted and cannot be recovered when you perform a DML operation on the table during incremental synchronization.
During data migration from the Oracle compatible mode of OceanBase Database to an Oracle database, if the table to be migrated has a global unique index and you update the values of the table's partition key, data may be lost during incremental synchronization when the version of OceanBase Database is earlier than V3.2.x.
In data migration tasks where the synchronization of DDL statements is not enabled, if you change the unique index of the target database, you must restart the incremental synchronization component. Otherwise, there may be data consistency issues.
If forward switchover is not enabled for the data migration task, you must delete the unique indexes and pseudo columns from the target database. Otherwise, data cannot be written; in addition, when data is imported to the target database, the pseudo columns are recreated and conflict with the pseudo columns in the source database.
If forward switchover is enabled for the data migration task, OMS will automatically delete the hidden columns and unique indexes based on the type of the data migration task. For more information, see Hidden columns mechanism of data migration service.
In multi-table aggregation scenarios:
We recommend that you map the relationships between the source and target databases or tables based on matching rules.
We recommend that you manually create schemas in the target database. If you use OMS to create schemas, skip failed objects in the schema migration step.
Check the objects in the recycle bin of the Oracle compatible mode of OceanBase Database. If the number of objects in the recycle bin exceeds 100, internal table queries may time out. In this case, clear objects in the recycle bin.
Check whether the recycle bin is enabled:

```sql
SELECT Value FROM V$parameter WHERE Name = 'recyclebin';
```

Check the number of objects in the recycle bin:

```sql
SELECT count(*) FROM RECYCLEBIN;
```
If you skip the "Source Database - Primary Database - ROW_MOVEMENT Check" precheck item, the data in tables with ROW_MOVEMENT enabled will be inconsistent during synchronization.
In data migration tasks where the source database is an OceanBase Database and DDL synchronization is enabled, if the database or table names are changed in the source database, we recommend that you restart the task to avoid data loss during incremental synchronization.
If you select only Incremental Synchronization when you create a data migration task, OMS requires that archive logs in the source database be retained for more than 48 hours.
If you select Full Migration + Incremental Synchronization when you create a data migration task, OMS requires that archive logs in the source database be retained for at least seven days. Otherwise, the data migration task may fail or the data in the source and target databases may be inconsistent because OMS cannot obtain incremental logs.
If the source or target database contains objects whose names differ only in capitalization, the migration result may not meet your expectations because object names in the source and target are treated as case-insensitive.
Data type mappings
| OceanBase Database Oracle compatible mode | Oracle Database |
|---|---|
| CHAR(n CHAR) | CHAR(n CHAR) |
| CHAR(n BYTE) | CHAR(n BYTE) |
| NCHAR(n) | NCHAR(n) |
| NCHAR(n BYTE) | NCHAR(n) |
| VARCHAR2(n) | VARCHAR2(n) |
| NVARCHAR2(n) | NVARCHAR2(n) |
| NVARCHAR2(n BYTE) | NVARCHAR2(n) |
| NUMBER(n) | NUMBER(n) |
| NUMBER(p, s) | NUMBER(p,s) |
| RAW | RAW |
| CLOB | CLOB |
| BLOB | BLOB |
| FLOAT(n) | FLOAT(n) |
| BINARY_FLOAT | BINARY_FLOAT |
| BINARY_DOUBLE | BINARY_DOUBLE |
| DATE | DATE |
| TIMESTAMP | TIMESTAMP |
| TIMESTAMP WITH TIME ZONE | TIMESTAMP WITH TIME ZONE |
| TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP WITH LOCAL TIME ZONE |
| INTERVAL YEAR(p) TO MONTH | INTERVAL YEAR(p) TO MONTH |
| INTERVAL DAY(p) TO SECOND | INTERVAL DAY(p) TO SECOND |
| ROWID | ROWID Note: Support for ROWID is available only in OceanBase Database Oracle compatible mode 2.2.70 and later. |
| UROWID | UROWID Note: Support for UROWID is available only in OceanBase Database Oracle compatible mode 2.2.70 and later. |
Procedure
Create a data migration task.

Log in to the OMS console.
In the left-side navigation pane, click Data Migration.
On the Data Migration page, click Create Task in the upper-right corner.
On the Create Task page, specify the task name.
We recommend a name that combines Chinese characters, letters, and digits. The name cannot contain spaces and must be 64 characters or fewer in length.
Notice
The task name is a unique identifier in OMS. Enter a custom task name.
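The naming rules above can be expressed as a small validator. This is an illustrative sketch only and is not OMS's actual validation logic:

```python
import re

def is_valid_task_name(name: str) -> bool:
    """Check the documented constraints: non-empty, at most 64 characters,
    and no whitespace. (Chinese characters, letters, and digits are the
    recommended character set.)"""
    if not name or len(name) > 64:
        return False
    if re.search(r"\s", name):  # reject spaces, tabs, and line breaks
        return False
    return True

print(is_valid_task_name("oracle_migration_01"))  # True
print(is_valid_task_name("bad name"))             # False
```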
On the Select Source and Target page, configure the parameters.

| Parameter | Description |
|---|---|
| Source | If you have created an OceanBase data source in Oracle compatible mode, such as a physical data source, public cloud data source, or standalone data source, you can select the data source from the drop-down list. If not, click New Data Source in the drop-down list to create one. For more information, see Create a physical OceanBase data source, Create an OceanBase public cloud data source, or Create an OceanBase standalone data source. |
| Target | If you have created an Oracle data source, you can select the data source from the drop-down list. If not, click New Data Source in the drop-down list to create one. For more information, see Create an Oracle data source. You can select a data source of the primary+standby database type or primary database type, but not the standby database type. In this example, a data source of the primary+standby database type is selected. |
| Tag (Optional) | Click the field and select a tag from the drop-down list. You can also click Manage Tags to create, modify, or delete tags. For more information, see Manage data migration tasks by using tags. |

Click Next. On the Select Migration Type page, select One-way Synchronization from the Synchronization Topology drop-down list.
OMS supports One-way Synchronization and Bidirectional Synchronization. This example describes how to configure a one-way synchronization task. For more information, see Configure a bidirectional synchronization task.
In the Migration Options section, select the migration type for the current data migration task.

Options for Migration Type are Schema Migration, Full Migration, Incremental Synchronization, and Reverse Increment.
| Migration type | Description |
|---|---|
| Schema Migration | After the schema migration task starts, OMS migrates the definitions of data objects (tables, indexes, constraints, comments, and views) from the source database to the target database and automatically filters out temporary tables. |
| Full Migration | After the full migration task starts, OMS migrates the existing data in the tables of the source database to the corresponding tables in the target database. If you select Full Migration, we recommend that you collect statistics for the source Oracle compatible mode of OceanBase Database before the migration. For more information, see Manually collect statistics. |
| Incremental Synchronization | After the incremental synchronization task starts, OMS synchronizes the data that is added, modified, or deleted in the source database to the corresponding tables in the target database. Incremental Synchronization includes DML synchronization and DDL synchronization, which you can configure as needed. For more information, see Configure DDL/DML. Incremental Synchronization has the following limitations: <br>- If the character sets and encodings of the source and target databases are inconsistent, OMS does not support schema changes of tables. <br>- If you select DDL synchronization, OMS may interrupt the data migration when it encounters a DDL operation that is not supported in the source database. <br>- If the DDL operation adds a column, we recommend that you set the new column to NULL. Even in this case, OMS may interrupt the data migration. |
| Reverse Increment | After the reverse incremental task starts, it synchronizes, in real time, the changes made in the target database after the business switchover back to the source database. Usually, the incremental synchronization configurations can be reused for reverse incremental synchronization. You can also configure them as needed. Reverse Increment cannot be selected in the following situations: <br>- Multi-table aggregation is involved. <br>- Multiple source schemas map to the same target schema. |
(Optional) Click Next.
If you select Schema Migration or Incremental Synchronization, but the corresponding parameters are not configured for the OceanBase data source in Oracle compatible mode, the Add Data Source Information dialog box appears, prompting you to configure the parameters. For more information, see Create a physical OceanBase data source, Create an OceanBase public cloud data source, or Create an OceanBase standalone data source.
After the parameters are configured, click Test connectivity. If the connection can be established, click Save.
Click Next. On the Select Migration Objects page, select the objects to be migrated.
You can select the Specify Objects or Match by Rule tab to select migration objects. This example describes how to select migration objects on the Specify Objects tab. For information about how to configure matching rules, see Configure matching rules.
Notice
If the name of a database or table contains the characters "$$", the data migration task cannot be created.
If you have selected DDL Synchronization in the Migration Type step, we recommend that you select objects by using matching rules. This way, all new objects that meet the migration object rules will be synchronized. If you select objects by specifying them, new objects or renamed objects will not be synchronized.
OMS automatically filters out unsupported tables. For more information about the SQL statements for querying table objects, see SQL statements for querying table objects.

On the Select Migration Objects page, click Specify Objects.
Select migration objects in the Source Object(s) list. You can select tables and views from one or more databases as migration objects.
Add them to the Target Object(s) list by clicking >.
OMS allows you to import objects by using a text file, rename objects, configure row filters, select partitions and columns, and remove one or all migration objects.
Note
If you select Matching Rules to select migration objects, the rename functionality is unavailable and you can only set filter conditions. For more information, see Configure matching rules.
| Operation | Steps |
|---|---|
| Import objects | 1. In the Target Object(s) list, click Import Objects in the upper-right corner. <br>2. In the dialog box that appears, click OK. <br>Notice: The import overwrites previous operations. Proceed with caution. <br>3. In the Import Migration Objects dialog box, select the objects to be migrated. You can import a CSV file to rename databases and tables and configure row filters. For more information, see Download and import migration object configurations. <br>4. Click Validate. <br>5. After the imported objects pass the validation, click OK. |
| Rename objects | OMS allows you to rename migration objects. For more information, see Rename a database or table. |
| Configure settings | OMS allows you to configure row filters, select partitions, and specify the columns to be migrated. <br>1. Hover the pointer over the target object in the right-side list of the selection area. <br>2. Click Settings. <br>3. In the Settings dialog box, you can perform the following operations: <br>- In the Row Filters section, configure row filters by entering WHERE clauses of standard SQL statements. For more information, see Filter data by using SQL conditions. <br>- In the Partition section, select the partitions whose data you want to obtain in full migration, and then click OK. <br>- In the Select Columns section, select the columns to be migrated. For more information, see Column filtering. |
| Remove one or all objects | OMS allows you to remove one or all selected migration objects. <br>- To remove a single migration object, hover the pointer over the object in the Target Object(s) list and click Remove. <br>- To remove all migration objects, click Remove All in the upper-right corner of the Target Object(s) list. In the dialog box that appears, click OK. |
Click Next. On the Migration Options page, configure the parameters.
Schema Migration
The following parameters are displayed only if you select One-way Synchronization > Schema Migration in the Select Migration Type step.

| Parameter | Description |
|---|---|
| Automatically Enter Next Stage upon Completion | If you select schema migration together with another migration type, you can specify whether to automatically proceed to the next stage after schema migration is completed. The default value is Yes. You can also view and modify this value on the Schema Migration tab of the data migration task details page. |
| Normal Index Migration Method | The migration method for non-unique indexes associated with the migrated table objects. Valid values: Do Not Migrate, Migrate with Schema, and Post-Full-Migration (the last two options are displayed only if full migration is selected). |

Full Migration
The following parameters are displayed only if you select One-way Synchronization > Full Migration in the Select Migration Type step.

| Parameter | Description |
|---|---|
| Full Migration Rate Limit | You can choose whether to limit the full migration rate. If you do, specify the RPS and BPS. RPS is the maximum number of rows migrated to the target database per second during full migration; BPS is the maximum number of bytes migrated per second. <br>Note: The RPS and BPS values set here only throttle the migration rate. The actual full migration performance is also limited by factors such as the source and target databases, instance specifications, and configurations. |
| Full Migration Resource Configuration | You can select Small, Medium, or Large for the default read/write concurrency and memory, or customize the resource configuration for full migration. The resource configuration of the Full-Import component limits the resource consumption of the data migration task in the full migration phase. <br>Note: The minimum value is 1, and only integers are supported. |
| Handle Non-empty Tables in Target Database | Valid values: Ignore and Stop Migration. <br>- If you select Ignore, when the data to be inserted conflicts with existing data in a target table, OMS retains the existing data and records the conflicting data. <br>Notice: If you select Ignore, data is pulled in IN mode for full verification. In this case, the scenario where the target table contains more data than the source table cannot be verified, and verification efficiency decreases. <br>- If you select Stop Migration and a target table contains data, an error is returned during full migration, indicating that the migration is not allowed. You must clear the data in the target table before you can continue with the migration. <br>Notice: After the error is returned, if you click Resume in the dialog box, OMS ignores the error and continues to migrate data. Proceed with caution. |
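The RPS and BPS limits described above act as simple per-second throttles on the writer. The following is a minimal, illustrative sketch of that idea (not OMS's actual implementation):

```python
import time

class RateLimiter:
    """Throttle batches to at most max_rows_per_sec rows and
    max_bytes_per_sec bytes within each one-second window."""

    def __init__(self, max_rows_per_sec: int, max_bytes_per_sec: int):
        self.max_rows = max_rows_per_sec
        self.max_bytes = max_bytes_per_sec
        self.window_start = time.monotonic()
        self.rows = 0
        self.bytes = 0

    def admit(self, row_count: int, byte_count: int) -> None:
        """Block until the batch fits in the current one-second window."""
        while True:
            now = time.monotonic()
            if now - self.window_start >= 1.0:
                # Start a fresh window and reset the counters.
                self.window_start, self.rows, self.bytes = now, 0, 0
            if (self.rows + row_count <= self.max_rows
                    and self.bytes + byte_count <= self.max_bytes):
                self.rows += row_count
                self.bytes += byte_count
                return
            time.sleep(0.01)  # wait for the window to roll over

limiter = RateLimiter(max_rows_per_sec=1000, max_bytes_per_sec=1 << 20)
limiter.admit(200, 4096)  # well under both limits, admitted immediately
```

Note that a throttle like this only caps the rate; as the documentation states, actual throughput is also bounded by the databases, instance specifications, and configurations.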
Incremental Synchronization
The following parameters are displayed only if you select One-way Synchronization > Incremental Synchronization in the Select Migration Type step.

| Parameter | Description |
|---|---|
| Incremental Synchronization Rate Limit | You can choose whether to limit the incremental synchronization rate. If you do, specify the RPS and BPS. RPS is the maximum number of rows synchronized to the target database per second during incremental synchronization; BPS is the maximum number of bytes synchronized per second. <br>Note: The RPS and BPS values set here only throttle the synchronization rate. The actual incremental synchronization performance is also limited by factors such as the source and target databases, instance specifications, and configurations. |
| Incremental Log Pull Resource Configuration | You can select Small, Medium, or Large to use the corresponding default memory value, or customize the resource configuration for incremental log pull. The resource configuration of the Store component limits the resource consumption of the task during log pull in the incremental synchronization stage. <br>Note: For custom configurations, the minimum value is 1, and only integers are supported. |
| Incremental Data Write Resource Configuration | You can select Small, Medium, or Large to use the corresponding default values of write concurrency and memory, or customize the resource configuration for incremental data writes. The resource configuration of the Incr-Sync component limits the resource consumption of the task during data writes in the incremental synchronization stage. <br>Notice: For custom configurations, the minimum value is 1, and only integers are supported. |
| Incremental Record Retention Duration | The duration for which parsed incremental files are cached in OMS. A longer retention duration results in more disk space occupied by the Store component. |
| Incremental Synchronization Start Timestamp | - If you have selected Full Migration as a migration type, this parameter is not displayed. <br>- If you have selected Incremental Synchronization but not Full Migration, specify a point in time after which data is to be synchronized. The default value is the current system time. For more information, see Set an incremental synchronization timestamp. |
Reverse Increment
The following parameters are displayed only if you select Reverse Increment in the Select Migration Type step. The parameters are the same as those for incremental synchronization. You can select Reuse Incremental Synchronization Configuration in the upper-right corner.

If you have configured Obtain Incremental Data through Kafka for the Oracle data source, note the following:
The Incremental Log Pull Resource Configuration and Incremental Record Retention Duration parameters cannot be configured.
The Read Concurrency value of the Incremental Data Write Resource Configuration parameter specifies the number of concurrent reads from Kafka during incremental synchronization. The default value is 4, the minimum value is 1, and the maximum value is 512. We recommend that you set the value to at most the number of partitions of the corresponding Kafka topic.
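The sizing rule above can be sketched as a small helper. This is an illustrative assumption about how one might pick a value, not an OMS API:

```python
def kafka_read_concurrency(requested: int, topic_partitions: int) -> int:
    """Clamp the requested read concurrency to OMS's documented bounds
    (1..512, default 4) and cap it at the topic's partition count, since
    extra consumers beyond the partition count would sit idle."""
    bounded = max(1, min(requested, 512))
    return min(bounded, topic_partitions)

print(kafka_read_concurrency(8, topic_partitions=4))      # 4
print(kafka_read_concurrency(600, topic_partitions=1024)) # 512
```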
Advanced Options
| Parameter | Description |
|---|---|
| Add Hidden Columns for Tables Without Non-null Unique Keys | If data is to be migrated between OceanBase databases, specify whether to add hidden columns for tables without non-null unique keys. For more information, see Hidden column mechanisms. <br>If you set this parameter to No and a table to be migrated has neither a primary key nor a non-null unique key, data duplication may occur when the task is restarted or encounters other exceptions. We recommend that you configure a non-null unique key for every table. |
| Target Table Storage Type | This parameter is displayed only if the target is an Oracle compatible tenant of OceanBase Database V4.3.0 or later and you select Schema Migration in the Select Migration Type step. The storage type of a table object in the target database can be Default, Row Storage, Column Storage, or Hybrid Row-Column Storage. This parameter specifies the storage type of target table objects created during schema migration or incremental synchronization. For more information, see default_table_store_format. <br>Note: The value Default means that the storage type is determined automatically based on the parameter configurations of the target database. New table objects created by DDL statements during reverse incremental synchronization are written to the corresponding schemas based on the specified storage type. |
If the parameters on this page cannot meet your requirements, you can click Parameter Configuration at the bottom of the page for more configurations. You can also reference the existing task or component templates if any.

Click Precheck to perform a precheck on the data migration task.
In the Precheck step, OMS checks whether the database user has the required read/write permissions and whether the database network connection is established. The migration task will not start unless all checks pass. If the precheck fails:
You can troubleshoot the issues and re-run the precheck until it succeeds.
You can also click Skip in the Actions column of the failed precheck item. A dialog box will appear, showing the impact of skipping this operation. If you want to proceed, click OK in the dialog box.
Click Start Task. You can also click Save to go to the details page of the data migration task, where you can start the task later.
You can click Configure Validation Task in the upper-right corner of the data migration details page to compare the data between the source and target databases. For more information, see Create a data validation task.
During the data migration task, you can modify the migration objects. For more information, see View and modify migration objects. After the data migration task starts, it will perform the migration steps based on the selected migration type. For more information, see the View Migration Details section in View details of a data migration task.