This topic describes how to use OceanBase Migration Service (OMS) to migrate data from an Oracle database to the Oracle compatible mode of OceanBase Database (including physical, public cloud, and standalone data sources).
Background information
You can create a data migration task in the OMS console to seamlessly migrate the existing business data and incremental data from an Oracle database to an Oracle compatible mode database of OceanBase Database through schema migration, full migration, and incremental synchronization.
The Oracle database supports the following modes: primary database only, standby database only, and primary/standby databases. The following table describes the data migration operations supported by each mode.
| Mode | Supported operation |
|---|---|
| Primary database only | Schema migration, full migration, incremental synchronization, and reverse increment |
| Standby database only | Schema migration, full migration, and incremental synchronization |
| Primary/Standby databases | Primary database: reverse increment. Standby database: schema migration, full migration, and incremental synchronization |
Prerequisites
You have created a corresponding schema in the Oracle compatible mode of the target OceanBase Database.
Archive logging (ARCHIVELOG mode) is enabled for the source Oracle instance, and the log files have been switched at least once before incremental replication to OMS starts.
LogMiner is installed and works correctly in the source Oracle instance. OMS uses the LogMiner tool to retrieve the required records from Oracle archived log files.
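The following queries are one way to verify these prerequisites (a sketch; `v$database` and `dba_objects` are standard Oracle views, and the exact checks may vary by version):

```sql
-- Confirm that the database runs in ARCHIVELOG mode.
SELECT log_mode FROM v$database;

-- Confirm that the LogMiner packages are installed and valid.
SELECT object_name, status
FROM dba_objects
WHERE object_name IN ('DBMS_LOGMNR', 'DBMS_LOGMNR_D')
AND object_type = 'PACKAGE';

-- Switch the current redo log so that an archived log exists
-- before incremental replication starts.
ALTER SYSTEM SWITCH LOGFILE;
```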
You have created dedicated database users in the source Oracle database and the Oracle compatible mode of the target OceanBase Database for the data migration task and granted corresponding privileges to these users. For more information, see Create a database user.
Supplemental logging is enabled at the database level or table level in the source Oracle instance. For more information, see Supplemental logging in Oracle databases.
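A quick way to confirm the database-level status is to query `v$database` (a sketch; these are standard Oracle columns, and table-level log groups can be inspected through `DBA_LOG_GROUPS`):

```sql
-- YES (or IMPLICIT for the first column) indicates that the corresponding
-- supplemental logging level is enabled at the database level.
SELECT supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
FROM v$database;
```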
Clock synchronization (for example, through an NTP service) is configured between the Oracle server and the OMS server. If the source is an Oracle RAC database, the clocks of all Oracle instances must also be synchronized.
Limitations
Limitations on the source database
Do not perform DDL operations that change the database or table schema during schema migration or full migration. Otherwise, the data migration task may be interrupted.
OMS supports Oracle Database 10g, 11g, 12c, 18c, and 19c. For 12c and later, both container databases (CDBs) and pluggable databases (PDBs) are supported.
OMS supports migrating only database, table, and column objects whose names are in ASCII format and do not contain special characters (spaces, line breaks, or .|"'`()=;/&\).
OMS does not support triggers in the target database. If triggers exist in the target database, the data migration may fail.
OMS does not support the migration of index-organized tables (IOTs) in Oracle databases. If the migration objects include IOTs, the data migration task may be interrupted.
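To find index-organized tables in advance and exclude them from the migration objects, you can run a query such as the following (a sketch; the owner filter is a hypothetical schema name to adjust):

```sql
-- IOT_TYPE is non-null for index-organized tables.
SELECT owner, table_name, iot_type
FROM dba_tables
WHERE iot_type IS NOT NULL
AND owner = 'MY_SCHEMA';  -- hypothetical schema name
```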
Data type limitations
Incremental data migration is not supported for tables where all columns are of the LOB type (BLOB, CLOB, or NCLOB).
For tables without a primary key and with LOB columns, reverse incremental migration may result in data quality issues.
Data source identifiers and user accounts must be globally unique in OMS.
Oracle incremental log parsing supports a maximum of 5 TB per day.
Oracle databases of versions earlier than 11g do not support creating database objects whose names are longer than 30 bytes. Take note of this limit when you perform reverse incremental migration in the Oracle compatible mode of OceanBase Database.
OMS does not support the migration of database objects whose names are longer than 30 bytes in Oracle databases of version 12c and later.
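A query such as the following can help you find table names that exceed the 30-byte limit before migration (a sketch; similar queries apply to other object types):

```sql
-- LENGTHB returns the length in bytes, which matters for multibyte characters.
SELECT owner, table_name
FROM dba_tables
WHERE LENGTHB(table_name) > 30;
```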
OMS does not support some `UPDATE` statements in Oracle databases of version 12c and later. The following is an example of an unsupported `UPDATE` statement:

```sql
UPDATE TABLE_NAME SET KEY=KEY+1;
```

In this example, `TABLE_NAME` is the name of the table, and `KEY` is a NUMERIC column defined as the primary key.
Considerations
When the source Oracle database runs in standby-only or primary/standby mode and the primary and standby databases run different numbers of instances, the incremental logs of some instances may not be pulled. To specify the instances whose incremental logs are pulled from the standby database, modify the parameters of the Store component as follows:

1. Stop the Store component immediately after it is started.
2. On the Update Configuration page of the Store component, add the `deliver2store.logminer.instance_threads` parameter to specify the instances whose incremental logs are pulled from the standby database. Separate multiple instance threads with the `|` character, for example, `1|2|3`.
3. After you modify the parameter, restart the Store component.
4. Five minutes after the restart, run `grep 'log entries from' connector/connector.log` to check the instances whose incremental logs are pulled. The thread field in the output indicates these instances.
If you need to synchronize incremental data from an Oracle database, keep the size of each archived file in the Oracle database below 2 GB. Large archived files pose the following risks:

- The larger the archived file, the more time is required to pull it, and the pull time increases linearly with the file size.
- When the source Oracle database runs in standby-only or primary/standby mode, incremental data is pulled from the standby database, where only completed archived files can be pulled. The larger the archived file, the later it becomes available and the more time is required to process it, which increases the synchronization latency.
- The larger the archived file, the more memory the Store component requires when the same concurrency level is set for pulling archived files.

We recommend that you retain archived files in the source Oracle database for more than two days. Otherwise, when the peak archiving rate is high for a period of time or the Oracle Store component encounters exceptions while processing archived files, the archived files that contain the required data may no longer exist, and the data cannot be recovered.
If the source Oracle database executes a DML statement that exchanges primary key values, OMS cannot parse the log, and data is lost during migration to the target database. The following is a sample DML statement that exchanges primary key values:

```sql
UPDATE test SET c1=(CASE WHEN c1=1 THEN 2 WHEN c1=2 THEN 1 END) WHERE c1 IN (1,2);
```

The character set of the Oracle instance can be AL32UTF8, AL16UTF16, ZHS16GBK, or GB18030. If the source character set is UTF-8, we recommend that you use a compatible character set, such as UTF-8 or UTF-16, in the target database to avoid issues such as garbled text caused by incompatible character sets.
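Before you choose the target character set, you can check the character set of the source Oracle instance with a query on the standard `NLS_DATABASE_PARAMETERS` view:

```sql
-- NLS_CHARACTERSET is the database character set;
-- NLS_NCHAR_CHARACTERSET applies to NCHAR/NVARCHAR2/NCLOB columns.
SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```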
When OMS pulls incremental data from a standby Oracle database, if the specified migration types include incremental synchronization and reverse incremental synchronization and the pull of incremental data fails, you can execute `ALTER SYSTEM SWITCH LOGFILE` in the primary database to help OMS resume normal operation.

When you migrate data from an Oracle database to the Oracle compatible mode of OceanBase Database, do not perform operations that change ROWIDs on the tables in the Oracle compatible mode of OceanBase Database, such as importing, exporting, altering tables, flashing back tables, and splitting or merging partitions.
If the source Oracle database performs operations that affect ROWIDs, such as updating partition keys or merging partitions, data loss may occur because the hidden columns in the Oracle compatible mode of OceanBase Database depend on ROWIDs.
If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during forward incremental synchronization and reverse incremental synchronization.
For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.
Due to historical reasons in China, the `TIMESTAMP(6) WITH TIME ZONE` values in the source Oracle database and the target Oracle compatible mode of OceanBase Database may differ by one hour around the start and end dates of Daylight Saving Time from 1986 to 1991, and from April 10 to April 17, 1988.

During reverse incremental synchronization of data from the Oracle compatible mode of OceanBase Database to the source Oracle database, if the version of the Oracle compatible mode of OceanBase Database is earlier than V3.2.x and the source table is a multi-partition table with a global unique index, updating the partition key of the table may cause data loss.
If the version of the Oracle compatible mode of OceanBase Database is earlier than V2.2.70, there may be incompatibilities when supplemental foreign key and check constraints are added during switchover.
If DDL synchronization is not enabled and you modify a unique index in the Oracle compatible mode of OceanBase Database, you must restart the incremental synchronization component. Otherwise, data inconsistency may occur.
If forward switchover is not enabled for the data migration task, you must delete the unique indexes and hidden columns in the target database. Otherwise, data cannot be written to the target database, and when data is imported from downstream, the hidden columns are generated again, conflicting with those in the source database.
If forward switchover is enabled for the data migration task, OMS will delete hidden columns and unique indexes automatically based on the type of the data migration task. For more information, see Hidden column mechanism of data migration service.
During forward incremental synchronization of data from the Oracle database to the Oracle compatible mode of OceanBase Database, OMS will not delete hidden columns and unique indexes that are added in the Oracle compatible mode of OceanBase Database for tables without primary keys. You must delete these hidden columns and unique indexes before reverse incremental synchronization.
You can check the `logs/msg/manual_table.log` file to confirm the tables without primary keys that are affected by forward incremental synchronization.

When the character encodings of the source and target databases are different, schema migration expands the field length definitions. For example, the field length is expanded to 1.5 times its original length, and the length unit is converted from bytes to characters.
This conversion allows data of different character sets in the source database to be migrated to the target database. However, during reverse incremental synchronization, the data may be too long to be written back to the source database.
If the source database contains date and time data types that contain time zone information, such as TIMESTAMP WITH TIME ZONE, make sure that the target database supports the time zone and the time zone exists in the target database. Otherwise, data inconsistency will occur during migration.
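For example, before migration you can list the named time zones known to the source database and confirm that the zones used by your `TIMESTAMP WITH TIME ZONE` data also exist in the target (a sketch; `v$timezone_names` is the standard Oracle view of supported region names):

```sql
-- List the region names known to the source database; verify that each one
-- used by your data is also recognized by the target database.
SELECT DISTINCT tzname
FROM v$timezone_names
ORDER BY tzname;
```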
In multi-table aggregation scenarios:
We recommend that you use matching rules to map the relationships between the source and target databases.
We recommend that you manually create schemas in the target database. If you use OMS to create schemas, skip failed objects in the schema migration step.
Check the objects in the recycle bin of the Oracle database. If the recycle bin contains more than 100 objects, queries on internal tables may time out. In this case, clear the objects in the recycle bin.
Check whether the recycle bin is enabled:

```sql
SELECT value FROM v$parameter WHERE name = 'recyclebin';
```

Check the number of objects in the recycle bin:

```sql
SELECT COUNT(*) FROM USER_RECYCLEBIN;
```
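If the count is high, you can clear the recycle bin as follows (a sketch; `PURGE DBA_RECYCLEBIN` requires administrator privileges and clears the recycle bins of all users):

```sql
-- Clear the recycle bin of the current user.
PURGE RECYCLEBIN;

-- Or, as an administrator, clear the recycle bins of all users.
PURGE DBA_RECYCLEBIN;
```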
If you select only Incremental Synchronization when you create a data migration task, OMS requires that archived logs in the source database be retained for more than 48 hours.
If you select Full Migration + Incremental Synchronization when you create a data migration task, OMS requires that archived logs in the source database be retained for at least seven days. Otherwise, the data migration task may fail or the data in the source and target databases may be inconsistent because OMS cannot obtain incremental logs.
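To confirm how far back the archived logs on disk actually reach, you can run a query such as the following (a sketch based on the standard `v$archived_log` view):

```sql
-- The oldest undeleted archived log bounds how far back OMS can read incremental data.
SELECT MIN(first_time) AS oldest_archived_log,
       MAX(next_time)  AS newest_archived_log
FROM v$archived_log
WHERE deleted = 'NO';
```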
If the source and target tables differ only in the case of their names, the migration result may not be as expected because table object names are case-insensitive in the source and target databases.
For incremental synchronization tasks whose source is an Oracle database (excluding those that obtain incremental data through Kafka), if a single transaction spans multiple archive logs, LogMiner may fail to return complete data information for the transaction, leading to data loss. We recommend that you configure full data verification and data correction to ensure data consistency.
Data type mappings
Notice
If the target is Oracle compatible mode of OceanBase Database earlier than V4.2.0, the size of CLOB or BLOB data must be less than 48 MB.
If the target is Oracle compatible mode of OceanBase Database V4.2.0 or later, the size of CLOB or BLOB data can be up to 512 MB.
ROWID, BFILE, XMLType, UROWID, UNDEFINED, and UDT data cannot be migrated.
Table data of the LONG or LONG RAW type cannot be incrementally synchronized.
| Oracle database | Oracle compatible mode of OceanBase Database |
|---|---|
| CHAR(n CHAR) | CHAR(n CHAR) |
| CHAR(n BYTE) | CHAR(n BYTE) |
| NCHAR(n) | NCHAR(n) |
| VARCHAR2(n) | VARCHAR2(n) |
| NVARCHAR2(n) | NVARCHAR2(n) |
| NUMBER(n) | NUMBER(n) |
| NUMBER(p,s) | NUMBER(p,s) |
| RAW | RAW |
| CLOB | CLOB |
| NCLOB | NCLOB |
| BLOB | BLOB |
| REAL | FLOAT |
| FLOAT(n) | FLOAT(n) |
| BINARY_FLOAT | BINARY_FLOAT |
| BINARY_DOUBLE | BINARY_DOUBLE |
| DATE | DATE |
| TIMESTAMP | TIMESTAMP |
| TIMESTAMP WITH TIME ZONE | TIMESTAMP WITH TIME ZONE |
| TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP WITH LOCAL TIME ZONE |
| INTERVAL YEAR(p) TO MONTH | INTERVAL YEAR(p) TO MONTH |
| INTERVAL DAY(p) TO SECOND | INTERVAL DAY(p) TO SECOND |
| LONG | CLOB Caution This type is not supported in incremental synchronization. |
| LONG RAW | BLOB Caution This type is not supported in incremental synchronization. |
Oracle table partitioning conversion
During data migration from an Oracle database, OMS adapts and converts the business SQL statements. There are differences in the conversion methods for some SQL statements between OceanBase Database V2.2.30 and V2.2.50.
Note
The partition conversion rules described in this topic apply to all types of partitions.
| Original table definition | OceanBase Database V2.2.30 | OceanBase Database V2.2.50 and later |
|---|---|---|
| `CREATE TABLE T_RANGE_0 (A INT, B INT, PRIMARY KEY (B)) PARTITION BY RANGE(A)(....);` | `CREATE TABLE "T_RANGE_0" ("A" NUMBER, "B" NUMBER NOT NULL, PRIMARY KEY ("B", "A")) PARTITION BY RANGE ("A")(....); CREATE UNIQUE INDEX ON "T_RANGE_0"(B);` | `CREATE TABLE "T_RANGE_0" ("A" NUMBER, "B" NUMBER NOT NULL, CONSTRAINT "T_RANGE_10_UK" UNIQUE ("B")) PARTITION BY RANGE ("A")(....);` |
| `CREATE TABLE T_RANGE_10 ("A" INT, "B" INT, "C" DATE, "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL, CONSTRAINT "T_RANGE_10_PK" PRIMARY KEY (A)) PARTITION BY RANGE(D)(....);` | `CREATE TABLE T_RANGE_10 ("A" INT NOT NULL, "B" INT, "C" DATE, "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL, CONSTRAINT "T_RANGE_10_PK" PRIMARY KEY (A, C)) PARTITION BY RANGE(D)(....);` | `CREATE TABLE T_RANGE_10 ("A" INT NOT NULL, "B" INT, "C" DATE, "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL, CONSTRAINT "T_RANGE_10_PK" UNIQUE (A)) PARTITION BY RANGE(D)(....);` |
| `CREATE TABLE T_RANGE_1 (A INT, B INT, UNIQUE (B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | `-- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_1"` | Supported. |
| `CREATE TABLE T_RANGE_2 (A INT, B INT NOT NULL, UNIQUE (B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | `CREATE TABLE "T_RANGE_2" ("A" NUMBER, "B" NUMBER NOT NULL, PRIMARY KEY ("B", "A")) PARTITION BY RANGE ("A")(....);` | Supported. |
| `CREATE TABLE T_RANGE_3 (A INT, B INT, UNIQUE (A)) PARTITION BY RANGE(A)(....);` | `-- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_2"` | Supported. |
| `CREATE TABLE T_RANGE_4 (A INT NOT NULL, B INT, UNIQUE (A)) PARTITION BY RANGE(A)(....);` | `CREATE TABLE "T_RANGE_4" ("A" NUMBER NOT NULL, "B" NUMBER, PRIMARY KEY ("A")) PARTITION BY RANGE ("A")(....);` | `CREATE TABLE "T_RANGE_4" ("A" NUMBER NOT NULL, "B" NUMBER, PRIMARY KEY ("A")) PARTITION BY RANGE ("A")(....);` |
| `CREATE TABLE T_RANGE_5 (A INT, B INT, UNIQUE (A, B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | `-- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_5"` | Supported. |
| `CREATE TABLE T_RANGE_6 (A INT NOT NULL, B INT, UNIQUE (A, B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | `-- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_5"` | Supported. |
| `CREATE TABLE T_RANGE_7 (A INT NOT NULL, B INT NOT NULL, UNIQUE (A, B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | `CREATE TABLE "T_RANGE_7" ("A" NUMBER NOT NULL, "B" NUMBER NOT NULL, PRIMARY KEY ("A", "B")) PARTITION BY RANGE ("A")(....);` | `CREATE TABLE "T_RANGE_7" ("A" NUMBER NOT NULL, "B" NUMBER NOT NULL, PRIMARY KEY ("A", "B")) PARTITION BY RANGE ("A")(....);` |
| `CREATE TABLE T_RANGE_8 ("A" INT, "B" INT, "C" INT NOT NULL, UNIQUE (A), UNIQUE (B), UNIQUE (C)) PARTITION BY RANGE(B)(partition P_MAX values less than (10));` | `CREATE TABLE "T_RANGE_8" ("A" NUMBER, "B" NUMBER, "C" NUMBER NOT NULL, PRIMARY KEY ("C", "B"), UNIQUE ("A"), UNIQUE ("B"), UNIQUE ("C")) PARTITION BY RANGE ("B")(....);` | Supported. |
| `CREATE TABLE T_RANGE_9 ("A" INT, "B" INT, "C" INT NOT NULL, UNIQUE (A), UNIQUE (B), UNIQUE (C)) PARTITION BY RANGE(C)(partition P_MAX values less than (10));` | `CREATE TABLE "T_RANGE_9" ("A" NUMBER, "B" NUMBER, "C" NUMBER NOT NULL, PRIMARY KEY ("C"), UNIQUE ("A"), UNIQUE ("B")) PARTITION BY RANGE ("C")(....);` | `CREATE TABLE "T_RANGE_9" ("A" NUMBER, "B" NUMBER, "C" NUMBER NOT NULL, PRIMARY KEY ("C"), UNIQUE ("A"), UNIQUE ("B")) PARTITION BY RANGE ("C")(....);` |
Check and modify the system configuration of an Oracle instance
You need to perform the following steps:
1. Enable ARCHIVELOG mode in the source Oracle database.
2. Enable supplemental logging in the source Oracle database.
3. (Optional) Set the system parameters of the Oracle database.
Enable ARCHIVELOG mode in the source Oracle database

Execute the following statement to check whether ARCHIVELOG mode is enabled:

```sql
SELECT log_mode FROM v$database;
```

The `log_mode` field must be `ARCHIVELOG`. Otherwise, change the mode as follows.

Run the following commands to enable ARCHIVELOG mode:

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```

Run the following command to view the path and quota of the archived logs. We recommend that you set the `db_recovery_file_dest_size` parameter to a large value. In addition, after archiving is enabled, regularly clean up archived logs by using RMAN or other methods.

```sql
SHOW PARAMETER db_recovery_file_dest;
```

Change the quota of the archived logs as needed:

```sql
ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE = BOTH;
```
Enable supplemental logging in the source Oracle database
For more information about supplemental logging in an Oracle database, see Oracle supplemental logging.
Set system parameters of the Oracle database (optional)
We recommend that you set the system parameter `_log_parallelism_max` of the Oracle database to 1. The default value of this parameter is 2.

Query the value of the `_log_parallelism_max` parameter by using either of the following methods:

Method 1:

```sql
SELECT NAM.KSPPINM, VAL.KSPPSTVL, NAM.KSPPDESC
FROM SYS.X$KSPPI NAM, SYS.X$KSPPSV VAL
WHERE NAM.INDX = VAL.INDX
AND NAM.KSPPINM LIKE '_%'
AND UPPER(NAM.KSPPINM) LIKE '%LOG_PARALLEL%';
```

Method 2:

```sql
SELECT VALUE FROM v$parameter WHERE name = '_log_parallelism_max';
```

Modify the `_log_parallelism_max` parameter. The modification method varies depending on whether the Oracle database is an RAC database. The following two examples show how to modify the value in each scenario.

Modify the value for an Oracle RAC database:

```sql
ALTER SYSTEM SET "_log_parallelism_max" = 1 SID = '*' SCOPE = spfile;
```

Modify the value for a non-RAC Oracle database:

```sql
ALTER SYSTEM SET "_log_parallelism_max" = 1 SCOPE = spfile;
```

If you modify the `_log_parallelism_max` parameter in Oracle Database 10g and receive the error message `write to SPFILE requested but no SPFILE specified at startup`, perform the following operations:

```sql
CREATE SPFILE FROM PFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
SHOW PARAMETER SPFILE;
```

After you modify the `_log_parallelism_max` parameter, restart the instance, switch two archived logs, and wait more than 5 minutes before you start a task.
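For example, after the instance restarts, you can force the two log switches as follows before waiting the recommended 5 minutes:

```sql
-- Each statement archives the current redo log and switches to a new one.
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
```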
Procedure
Create a data migration task.

Log in to the OMS console.
In the left-side navigation pane, click Data Migration.
On the Data Migration page, click Create Task in the upper-right corner.
On the Create Task page, specify the name of the migration task.
We recommend that you set it to a combination of digits and letters. It must not contain any spaces and cannot exceed 64 characters in length.
Notice
The task name must be a unique identifier in the OMS system.
In the Select Source and Target step, configure the parameters.

| Parameter | Description |
|---|---|
| Source | If you have created an Oracle data source, select it from the drop-down list. If not, click New Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information about the parameters, see Create an Oracle data source. |
| Target | If you have created a data source for the Oracle compatible mode of OceanBase Database, which can be a physical data source, a public cloud OceanBase data source, or a standalone data source, select it from the drop-down list. If not, click New Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information about the parameters, see Create a physical OceanBase data source, Create a public cloud OceanBase data source, or Create a standalone OceanBase data source. |
| Tag (Optional) | Click the text box and select a tag from the drop-down list. You can also click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data migration tasks. |

Click Next. In the Select Migration Type step, specify One-way Synchronization for Synchronization Topology.
OMS supports One-way Synchronization and Bidirectional Synchronization. This topic describes how to configure one-way synchronization. For more information about bidirectional synchronization, see Configure a bidirectional synchronization task.
In the Migration Options section, specify the migration type for the migration task.

Options for Migration Type are Schema Migration, Full Migration, Incremental Synchronization, and Reverse Increment.
| Migration type | Description |
|---|---|
| Schema migration | The definitions of data objects, such as tables, indexes, constraints, comments, and views, are migrated from the source database to the target database. Temporary tables are automatically filtered out. |
| Full migration | After a full migration task is started, OMS migrates the existing data of tables in the source database to the corresponding tables in the target database. If you select Full Migration, we recommend that you use the `GATHER_SCHEMA_STATS` or `GATHER_TABLE_STATS` statement to collect the statistics of the Oracle database before data migration (see the example after this table). |
| Incremental synchronization | After an incremental synchronization task starts, changed data in the source database is synchronized to the corresponding tables in the target database. Supported data changes are data addition, modification, and deletion.<br>Options for Incremental Synchronization are DML synchronization and DDL synchronization. Select the options as needed. For more information, see Configure DDL/DML synchronization. Incremental Synchronization has the following limitations:<br>- If you select DDL synchronization and perform a DDL operation that OMS cannot synchronize in the source database, data migration may be interrupted.<br>- If a DDL operation creates a new column, we recommend that you set the column to NULL. If a new column contains default values, data migration may be interrupted. You can add a new column first and then specify the default values.<br>- The source Oracle database does not support incremental synchronization of tables that use the `empty_clob()` function. |
| Reverse increment | After a reverse increment task starts, OMS migrates the data changed in the target database after the business switchover back to the source database in real time.<br>Generally, the incremental synchronization configuration is reused for reverse increment. You can also customize the configuration for reverse increment as needed. If a table to be migrated has no primary key or unique index and a large amount of its data is changed, reverse increment takes a long time. In this case, you can add unique indexes in the source database.<br>You cannot select Reverse Increment in the following cases:<br>- Multi-table aggregation is involved.<br>- Multiple source schemas map to the same target schema. |
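For example, the statistics mentioned above can be collected with the `DBMS_STATS` package as follows (a sketch; the schema name, table name, and degree of parallelism are placeholders to adjust):

```sql
-- Collect statistics for a whole schema.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MY_SCHEMA', degree => 4);  -- hypothetical schema name
END;
/

-- Or collect statistics for a single table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MY_SCHEMA', tabname => 'MY_TABLE');  -- hypothetical names
END;
/
```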
(Optional) Click Next.
If you have selected Reverse Increment without configuring the related parameters for the target Oracle-compatible tenant of OceanBase Database, the Add Data Source Information dialog box appears, prompting you to configure related parameters. For more information about the parameters, see Create a physical OceanBase data source, Create a public cloud OceanBase data source, or Create a standalone OceanBase data source.
After you configure the parameters, click Test connectivity. After the test succeeds, click Save.
Click Next. In the Select Migration Objects step, specify the migration objects for the migration task.
You can select Specify Objects or Match by Rule to specify the migration objects. The following procedure describes how to specify migration objects by using the Specify Objects option. For information about the procedure for specifying migration objects by using the Match by Rule option, see Configure matching rules.
Notice
If a database or table name contains double dollar signs ("$$"), you cannot create the migration task.
If you have selected DDL Synchronization in the Select Migration Type step, we recommend that you select Match by Rule to specify migration objects. This way, all new objects that meet the specified rules will be synchronized. If you select Specify Objects to specify migration objects, new or renamed objects will not be synchronized.
OMS automatically filters out unsupported tables. For information about the SQL statements for querying table objects, see SQL statements for querying table objects.

In the Select Migration Objects section, select Specify Objects.
In the Source Object(s) list, select the objects to be migrated. You can select tables and views of one or more databases as the migration objects.
If you select Schema Migration in the Select Migration Type step, you can select sequences, types, tables, views, procedures, functions, packages, and synonyms in one or more databases as migration objects.
Notice
If you select only Schema Migration, you must select at least one object. If you select other migration types in addition to Schema Migration, you must select at least one table object.
The following objects will be migrated only when you select Schema Migration. The migration timing for different types of objects is as follows:
Tables, views, stored procedures, functions, synonyms, custom types, and package objects will be automatically migrated during schema migration.
The migration timing for regular indexes and trigger objects can be set in the migration options.
Sequences, constraints, and foreign key objects are migrated during forward switchover.
If you do not select Schema Migration in the Select Migration Type step, you can select only tables in one or more databases as migration objects.
Click > to add the selected objects to the Target Object(s) list.
OMS also allows you to import objects from text, rename objects, configure row filters, select partitions and columns, and remove one or all objects to be migrated.
Note
When you select Match by Rule to specify migration objects, object renaming is implemented based on the syntax of the specified matching rules. In the operation area, you can only set filter conditions. For more information, see Configure matching rules.
PL objects such as views, functions, and stored procedures do not support renaming, setting row filter conditions, or selecting partitions and columns.
| Operation | Steps |
|---|---|
| Import objects | 1. In the Target Object(s) list, click Import Objects in the upper-right corner.<br>2. In the dialog box that appears, click OK.<br>Notice: This operation overwrites previous selections. Proceed with caution.<br>3. In the Import Objects dialog box, import the objects to be migrated. You can import CSV files to rename databases and tables and to set row filter conditions. For more information, see Download and import the settings of migration objects.<br>4. Click Validate.<br>5. After the validation succeeds, click OK. |
| Rename objects | OMS allows you to rename migration objects. For more information, see Rename a migration or synchronization object. |
| Configure settings | OMS allows you to configure row filters, select partitions, and specify the columns to be migrated.<br>1. Hover the pointer over the target object in the right-side list of the selection area.<br>2. Click Settings when it appears.<br>3. In the Settings dialog box, you can perform the following operations:<br>- In the Row Filters section, configure row filters by entering the WHERE clause of a standard SQL statement (see the example after this table). For more information, see Filter data by using SQL conditions.<br>- In the Partition section, select the partitions whose data you want to obtain during full migration. After you select partitions, click OK.<br>- In the Select Columns section, select the columns to be migrated. For more information, see Column filtering. |
| Remove one or all objects | OMS allows you to remove one or all objects to be migrated to the target database during data mapping.<br>- To remove one migration object: in the Target Object(s) list, move the pointer over the target object and click Remove.<br>- To remove all migration objects: in the Target Object(s) list, click Remove All in the upper-right corner. In the dialog box that appears, click OK. |
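For example, a row filter is the body of a WHERE clause evaluated against the source rows (a sketch; the table and column names below are hypothetical). Conceptually, entering the filter in the Row Filters text box works as if OMS ran:

```sql
-- Migrate only rows created in 2024 that are not soft-deleted.
SELECT * FROM my_table
WHERE gmt_create >= TO_DATE('2024-01-01', 'YYYY-MM-DD')
  AND is_deleted = 0;
```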
When the source database is an Oracle database, if row filtering is enabled for columns other than the primary key and unique key columns, enable supplemental logging for the corresponding columns or all columns.
Execute the following statement to enable supplemental logging for the corresponding columns:

```sql
ALTER TABLE table_name ADD SUPPLEMENTAL LOG GROUP log_group_name (column1, column2, column3) ALWAYS;
```

Execute the following statements to enable supplemental logging for all columns:

```sql
-- Enable database-level supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Enable table-level supplemental logging:
ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

Click Next. In the Migration Options step, configure the parameters.
Schema migration
The following parameters are displayed only if you select One-way Synchronization > Schema Migration in the Select Migration Type step.

| Parameter | Description |
|---|---|
| Automatically Enter Next Stage upon Completion | If you select schema migration and any other migration type, you can specify whether to automatically proceed to the next stage after schema migration is completed. The default value is Yes. You can also view and modify this setting on the Schema Migration tab of the data migration task details page. |
| Normal Index Migration Method | The migration method for non-unique indexes associated with the migrated table objects. Valid values: Do Not Migrate, Migrate with Schema, and Post-Full-Migration (available only if full migration is selected). |
| Trigger Migration Method | The migration method for the triggers associated with the migrated table objects. Valid values: Do Not Migrate, Migrate with Schema, and Forward Switchover Migration.<br>Notice:<br>- If the selected migration type includes only Schema Migration, the default value is Do Not Migrate, and Forward Switchover Migration cannot be selected.<br>- If the selected migration types include more than Schema Migration, Migrate with Schema cannot be selected. |
Full migration
The following parameters are displayed only if you have selected One-way Synchronization > Full Migration in the Select Migration Type step.

| Parameter | Description |
|---|---|
| Full Migration Rate Limit | You can choose whether to limit the full migration rate. If you choose to limit it, specify the RPS and BPS. RPS specifies the maximum number of rows migrated to the target database per second during full migration, and BPS specifies the maximum amount of data in bytes migrated to the target database per second during full migration.<br>Note: The RPS and BPS values specified here are only for throttling. The actual full migration performance is subject to factors such as the settings of the source and target databases and the instance specifications. |
| Full Migration Resource Configuration | You can select Small, Medium, or Large to use the corresponding default values of Read Concurrency, Write Concurrency, and Memory, or customize the resource configuration for full migration. Resource configuration for the Full-Import component limits the resource consumption of a task in the full migration stage.<br>Notice: In a custom configuration, the minimum value is 1, and only integers are supported. |
| Handle Non-empty Tables in Target Database | Valid values: Ignore and Stop Migration.<br>- If you select Ignore, when the data to be inserted conflicts with the existing data of a target table, OMS retains the existing data and records the conflicting data.<br>Notice: If you select Ignore, data is pulled in IN mode during full verification. In this case, the scenario where the target table contains more data than the source table cannot be verified, and the verification efficiency is decreased.<br>- If you select Stop Migration and a target table contains data, an error is returned during full migration, indicating that the migration is not allowed. You must clear the data in the target table before you can continue with the migration.<br>Notice: After an error is returned, if you click Resume in the dialog box, OMS ignores the error and continues to migrate data. Proceed with caution. |
Incremental synchronization
The following parameters are displayed only if you have selected One-way Synchronization > Incremental Synchronization in the Select Migration Type step.

| Parameter | Description |
|---|---|
| Incremental Synchronization Rate Limit | You can choose whether to limit the incremental synchronization rate. If you choose to limit it, specify the RPS and BPS. RPS specifies the maximum number of rows synchronized to the target database per second during incremental synchronization, and BPS specifies the maximum amount of data in bytes synchronized to the target database per second during incremental synchronization.<br>Note: The RPS and BPS values specified here are only for throttling. The actual incremental synchronization performance is subject to factors such as the settings of the source and target databases and the instance specifications. |
| Incremental Log Pull Resource Configuration | You can select Small, Medium, or Large to use the corresponding default value of Memory, or customize the resource configuration for incremental log pull. Resource configuration for the Store component limits the resource consumption of a task during log pull in the incremental synchronization stage.<br>Notice:<br>- In a custom configuration, the minimum value is 1, and only integers are supported.<br>- If Obtain Incremental Data through Kafka is configured for the Oracle data source, this parameter is not displayed. |
| Incremental Data Write Resource Configuration | You can select Small, Medium, or Large to use the corresponding default values of Write Concurrency and Memory, or customize the resource configuration for incremental data writes. Resource configuration for the Incr-Sync component limits the resource consumption of a task during data writes in the incremental synchronization stage.<br>Notice:<br>- In a custom configuration, the minimum value is 1, and only integers are supported.<br>- If Obtain Incremental Data through Kafka is configured for the Oracle data source, this parameter specifies the number of concurrent reads from Kafka during incremental synchronization. The default value is 4, the minimum value is 1, and the maximum value is 512. We recommend that you set the value to the number of partitions of the corresponding Kafka topic. |
| Incremental Record Retention Duration | The duration for which parsed incremental files are cached in OMS. A longer retention duration results in more disk space occupied by the Store component.<br>Note: If Obtain Incremental Data through Kafka is configured for the Oracle data source, this parameter is not displayed. |
| Incremental Synchronization Start Timestamp | - If you have selected Full Migration as a migration type, this parameter is not displayed.<br>- If you have selected Incremental Synchronization but not Full Migration, specify a point in time after which data is to be synchronized. The default value is the current system time. For more information, see Set an incremental synchronization timestamp. |
Reverse increment
The following parameters are displayed only if you have selected Reverse Increment in the Select Migration Type step. The parameters for reverse increment are consistent with those for incremental synchronization. You can select Reuse Incremental Synchronization Configuration in the upper-right corner.

Advanced options
| Parameter | Description |
|---|---|
| Encoding and Length Options | This parameter is displayed only if you have selected Schema Migration in the Select Migration Type step and the source and target databases use different character sets.<br>Note: If the character set of the source database differs from that of the target database, for example, GBK at the source and UTF-8 at the target, fields may be truncated, which results in data inconsistency.<br>If you select Automatically Extend Fields at Target, Namely from N Bytes to 1.5N Bytes, converted data that exceeds the maximum length limit is truncated to the limit. |
| Add Hidden Columns for Tables Without Non-null Unique Keys | Specifies whether to add hidden columns for tables without non-null unique keys. For more information, see Hidden column mechanisms.<br>If you set the value to No and a table to be migrated has no primary key or non-null unique key, data duplication may occur when the task is restarted or encounters other exceptions. We recommend that you configure a non-null unique key for all tables. |
| Target Table Storage Type | This parameter is displayed only if the target is an Oracle-compatible tenant of OceanBase Database V4.3.0 or later and you have selected Schema Migration or DDL synchronization for Incremental Synchronization in the Select Migration Type step.<br>This parameter specifies the storage type for target table objects during schema migration or incremental synchronization. Valid values: Default, Row Storage, Column Storage, and Hybrid Row-Column Storage. For more information, see default_table_store_format.<br>Note: The value Default means that the storage type is determined by the parameter configuration of the target database. Table objects in schema migration and new table objects created by incremental DDL statements are written to the corresponding schemas based on the specified storage type. |
If the parameter settings on the page cannot meet your requirements, you can click Parameter Configuration in the lower part of the page to configure more specific settings. You can also reference an existing task or component template.

Click Precheck to start a precheck on the data migration task.
During the precheck, OMS checks the read and write privileges of the database users and the network connectivity of the databases. A data migration task can be started only after it passes all check items. If an error is returned during the precheck, you can perform the following operations:
Identify and troubleshoot the issue and then perform the precheck again.
Click Skip in the Actions column of a failed precheck item. In the dialog box that prompts the consequences of the operation, click OK.
Click Start Task. If you do not need to start the task now, click Save to go to the details page of the task. You can start the task later as needed.
You can click Configure Validation Task in the upper-right corner of the data migration details page to compare the data between the source database and the target database. For more information, see Create a data validation task.
OMS allows you to modify the migration objects when the data migration task is running. For more information, see View and modify migration objects. After the data migration task is started, it is executed based on the selected migration types. For more information, see the View migration details section in View details of a data migration task.