This topic describes how to use OceanBase Migration Service (OMS) to migrate data from an Oracle database to an Oracle tenant of OceanBase Database.
Background
You can create a data migration project in the OMS console to seamlessly migrate the existing business data and incremental data from an Oracle database to an Oracle tenant of OceanBase Database by using the schema migration, full migration, and incremental data synchronization features.
The Oracle database supports the following modes: single primary database, single standby database, and primary/standby databases. The following table describes the data migration operations supported for each mode.
| Type | Supported operations |
|---|---|
| Single primary database | Schema migration, full migration, incremental synchronization, full verification, and reverse incremental migration |
| Single standby database | Schema migration, full migration, incremental synchronization, and full verification |
| Primary/standby databases | Primary database: reverse incremental migration. Standby database: schema migration, full migration, incremental synchronization, and full verification |
Prerequisites
You have created a corresponding schema in the destination Oracle tenant of OceanBase Database. OMS allows you to migrate tables and columns. Therefore, you must create a corresponding schema in the destination database before migration.
You have enabled archivelog for the source Oracle instance and switched the logfile before OMS starts incremental data replication.
You have installed LogMiner in the source Oracle instance, and LogMiner runs properly.
LogMiner enables you to obtain data from the archived logs of the Oracle instance.
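As a quick check that LogMiner is available (a minimal sketch; your DBA may prefer a different method), you can verify that the LogMiner packages exist and are valid:
select object_name, status from dba_objects where object_type = 'PACKAGE' and object_name in ('DBMS_LOGMNR', 'DBMS_LOGMNR_D');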
You have created a dedicated database user in the source Oracle database and the destination Oracle tenant of OceanBase Database for data migration and granted the corresponding privileges to the users. For more information, see Create a database user.
You have made sure that the Oracle instance has enabled the database-level or table-level supplemental_log feature.
If you enable primary key and unique key supplemental_log at the database level, tables that do not need to be synchronized also generate a large number of unnecessary logs, which increases the log-fetching pressure on LogMiner Reader and on the Oracle database. Therefore, the OMS console lets you enable primary key and unique key supplemental_log at the table level only for Oracle databases. However, if you use Set ETL Options to filter on columns other than the primary key and unique key columns when you create a migration task, enable supplemental_log for those columns or for all columns.
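For reference, table-level supplemental_log for the primary key and unique key can be enabled with a statement of the following form, as also shown later in this topic; the schema and table names are placeholders:
alter table <schema>.<table_name> add supplemental log data(primary key, unique) columns;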
Clock synchronization (such as the NTP service) is required between an Oracle server and the OMS server to avoid data risks. For an Oracle RAC, clock synchronization is also required between Oracle instances.
Limits
OMS supports the following Oracle database versions: 10g, 11g, 12c, 18c, and 19c. Version 12c and later provide container databases (CDBs) and pluggable databases (PDBs).
In long-term synchronization between databases, OMS does not support triggers in the destination database.
Data type limits
Incremental data migration is not supported for a table whose data in all columns is of the following three large object (LOB) types: BLOB, CLOB, and NCLOB.
If a table has no primary key and contains data of the LOB type, data quality may degrade during reverse incremental migration of the table.
If the LOB field in the source Oracle database is too large, it cannot be stored in OceanBase Database, causing data synchronization errors.
OMS allows you to migrate data from a source Oracle instance that uses the AL32UTF8, AL16UTF16, ZHS16GBK, or GB18030 character set. If the character set used by the source database is UTF-8, we recommend that you use UTF-8 or a superset of it for the destination database.
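To confirm the character sets of the source Oracle database when you plan the destination character set, you can query the NLS settings, for example:
select parameter, value from nls_database_parameters where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');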
If you select a migration mode that supports incremental synchronization and reverse incremental migration, and an exception occurs when OMS pulls incremental data from a standby Oracle database, you can run the ALTER SYSTEM SWITCH LOGFILE command in the primary database to handle the exception.
When you migrate a table without a primary key from an Oracle database to an Oracle tenant of OceanBase Database, do not perform any operations on the source Oracle database that may change the ROWID, such as data import and export, ALTER TABLE, FLASHBACK TABLE, and partition splitting or compaction.
If a new table without a primary key is added in the source Oracle database during incremental synchronization, OMS does not automatically delete the hidden columns and the unique index that it adds to the table in the destination Oracle tenant of OceanBase Database. You need to manually delete them before you start a reverse migration task. To identify the tables without a primary key that were added during incremental synchronization, view the manual_table.log file in the logs/msg/ directory.
Daylight Saving Time (DST) was once adopted in China, so a one-hour time difference between the source and the destination is expected for data of the TIMESTAMP(6) WITH TIME ZONE type generated during the DST periods from 1986 to 1991 and from April 10 to 17, 1988.
In a project of reverse incremental migration from an Oracle database to an Oracle tenant of OceanBase Database, if the Oracle tenant is of a version earlier than V3.2.x and contains a multi-partition table with global unique indexes, updating the value of a partitioning key of the table may cause data loss during migration.
When the Oracle tenant of the destination OceanBase database is earlier than V2.2.70, foreign keys, checks, and other objects added during the switchover may not be supported.
Limits on character encoding and reverse synchronization:
If the source and destination databases use different character sets, a field length extension policy will be provided during schema migration. For example, the field length is extended by 1.5 times, and the length unit is changed from BYTE to CHAR.
This ensures that data encoded in different character sets can be migrated from the source database to the destination database. However, after cutover, data may fail to be written back to the source database during reverse incremental migration because the data exceeds the original column length. An example of the extension policy follows.
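For illustration only (the table and column are hypothetical, and the exact factor applied depends on the source and destination character sets), a source column and its possible definition after schema migration might look as follows:
-- Source Oracle database
CREATE TABLE T_DEMO (C1 VARCHAR2(100 BYTE));
-- Destination Oracle tenant of OceanBase Database, with the length extended by 1.5 times and the unit changed from BYTE to CHAR
CREATE TABLE T_DEMO (C1 VARCHAR2(150 CHAR));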
If forward switchover is not started in a data migration project, delete the unique indexes and pseudocolumns from the source database. Otherwise, data cannot be written, and pseudocolumns are generated again when data is imported to the downstream system, conflicting with the existing pseudocolumns in the source database.
If you change the unique index of the destination, you must restart the incremental synchronization. Otherwise, the data may be inconsistent.
Data type mappings
Notice
Data of the CLOB and BLOB types must be less than 48 MB in size.
Data of the LONG, ROWID, BFILE, LONG RAW, XMLType, and UDT types cannot be migrated.
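To locate rows in the source database whose LOB data already exceeds this limit, a check along the following lines may help before migration; the schema, table, and column names are placeholders, and note that DBMS_LOB.GETLENGTH returns characters rather than bytes for CLOB and NCLOB columns:
select count(*) from <schema>.<table> where dbms_lob.getlength(<lob_column>) > 48 * 1024 * 1024;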
| Oracle Database | Oracle tenant of OceanBase Database |
|---|---|
| CHAR | CHAR |
| NCHAR | NCHAR |
| VARCHAR2 | VARCHAR2 |
| NVARCHAR2 | NVARCHAR2 |
| NUMBER | NUMBER |
| NUMBER (p, s) | NUMBER(p,s) |
| LONG | Full migration and verification of the data are supported. Incremental synchronization is not supported. |
| RAW | RAW |
| CLOB | CLOB |
| NCLOB | |
| BLOB | BLOB |
| FLOAT(n) | |
| BINARY_FLOAT | BINARY_FLOAT |
| BINARY_DOUBLE | BINARY_DOUBLE |
| DATE | DATE |
| TIMESTAMP | TIMESTAMP |
| TIMESTAMP WITH TIME ZONE | TIMESTAMP WITH TIME ZONE |
| TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP WITH LOCAL TIME ZONE |
| INTERVAL YEAR(p) TO MONTH | INTERVAL YEAR(p) TO MONTH |
| INTERVAL DAY(p) TO SECOND | INTERVAL DAY(p) TO SECOND |
| ROWID | Not supported |
| BFILE | Not supported |
| LONG RAW | Full migration and verification of the data are supported. Incremental synchronization is not supported. |
| XMLType | Not supported |
| UDT | Not supported |
Conversion of Oracle table partitions
When OMS is used to migrate data from an Oracle database, the system automatically converts your business SQL statements. However, the conversion performed in OceanBase Database V2.2.30 is different from that in OceanBase Database V2.2.50.
Note
The partition conversion rules described in this topic apply to all partitioning types.
| Source table definition | Table after conversion in OceanBase Database V2.2.30 | Table after conversion in OceanBase Database V2.2.50 and later |
|---|---|---|
| `CREATE TABLE T_RANGE_0 (A INT, B INT, PRIMARY KEY (B)) PARTITION BY RANGE(A)(....);` | `CREATE TABLE "T_RANGE_0" ("A" NUMBER, "B" NUMBER NOT NULL, PRIMARY KEY ("B", "A")) PARTITION BY RANGE ("A")(....); CREATE UNIQUE INDEX ON "T_RANGE_0"(B);` | `CREATE TABLE "T_RANGE_0" ("A" NUMBER, "B" NUMBER NOT NULL, CONSTRAINT "T_RANGE_10_UK" UNIQUE ("B")) PARTITION BY RANGE ("A")(....);` |
| `CREATE TABLE T_RANGE_10 ("A" INT, "B" INT, "C" DATE, "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL, CONSTRAINT "T_RANGE_10_PK" PRIMARY KEY (A)) PARTITION BY RANGE(D)(....);` | `CREATE TABLE T_RANGE_10 ("A" INT NOT NULL, "B" INT, "C" DATE, "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL, CONSTRAINT "T_RANGE_10_PK" PRIMARY KEY (A, C)) PARTITION BY RANGE(D)(....);` | `CREATE TABLE T_RANGE_10 ("A" INT NOT NULL, "B" INT, "C" DATE, "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL, CONSTRAINT "T_RANGE_10_PK" UNIQUE (A)) PARTITION BY RANGE(D)(....);` |
| `CREATE TABLE T_RANGE_1 (A INT, B INT, UNIQUE (B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | -- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_1" | The source table definition is supported. |
| `CREATE TABLE T_RANGE_2 (A INT, B INT NOT NULL, UNIQUE (B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | `CREATE TABLE "T_RANGE_2" ("A" NUMBER, "B" NUMBER NOT NULL, PRIMARY KEY ("B", "A")) PARTITION BY RANGE ("A")(....);` | The source table definition is supported. |
| `CREATE TABLE T_RANGE_3 (A INT, B INT, UNIQUE (A)) PARTITION BY RANGE(A)(....);` | -- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_2" | The source table definition is supported. |
| `CREATE TABLE T_RANGE_4 (A INT NOT NULL, B INT, UNIQUE (A)) PARTITION BY RANGE(A)(....);` | `CREATE TABLE "T_RANGE_4" ("A" NUMBER NOT NULL, "B" NUMBER, PRIMARY KEY ("A")) PARTITION BY RANGE ("A")(....);` | `CREATE TABLE "T_RANGE_4" ("A" NUMBER NOT NULL, "B" NUMBER, PRIMARY KEY ("A")) PARTITION BY RANGE ("A")(....);` |
| `CREATE TABLE T_RANGE_5 (A INT, B INT, UNIQUE (A, B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | -- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_5" | The source table definition is supported. |
| `CREATE TABLE T_RANGE_6 (A INT NOT NULL, B INT, UNIQUE (A, B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | -- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_5" | The source table definition is supported. |
| `CREATE TABLE T_RANGE_7 (A INT NOT NULL, B INT NOT NULL, UNIQUE (A, B)) PARTITION BY RANGE(A)(partition P_MAX values less than (10));` | `CREATE TABLE "T_RANGE_7" ("A" NUMBER NOT NULL, "B" NUMBER NOT NULL, PRIMARY KEY ("A", "B")) PARTITION BY RANGE ("A")(....);` | `CREATE TABLE "T_RANGE_7" ("A" NUMBER NOT NULL, "B" NUMBER NOT NULL, PRIMARY KEY ("A", "B")) PARTITION BY RANGE ("A")(....);` |
| `CREATE TABLE T_RANGE_8 ("A" INT, "B" INT, "C" INT NOT NULL, UNIQUE (A), UNIQUE (B), UNIQUE (C)) PARTITION BY RANGE(B)(partition P_MAX values less than (10));` | `CREATE TABLE "T_RANGE_8" ("A" NUMBER, "B" NUMBER, "C" NUMBER NOT NULL, PRIMARY KEY ("C", "B"), UNIQUE ("A"), UNIQUE ("B"), UNIQUE ("C")) PARTITION BY RANGE ("B")(....);` | The source table definition is supported. |
| `CREATE TABLE T_RANGE_9 ("A" INT, "B" INT, "C" INT NOT NULL, UNIQUE (A), UNIQUE (B), UNIQUE (C)) PARTITION BY RANGE(C)(partition P_MAX values less than (10));` | `CREATE TABLE "T_RANGE_9" ("A" NUMBER, "B" NUMBER, "C" NUMBER NOT NULL, PRIMARY KEY ("C"), UNIQUE ("A"), UNIQUE ("B")) PARTITION BY RANGE ("C")(....);` | `CREATE TABLE "T_RANGE_9" ("A" NUMBER, "B" NUMBER, "C" NUMBER NOT NULL, PRIMARY KEY ("C"), UNIQUE ("A"), UNIQUE ("B")) PARTITION BY RANGE ("C")(....);` |
Check and modify the system configurations of the Oracle instance
Perform the following operations:
Enable archivelog for the source Oracle database.
Enable supplemental_log in the source Oracle database.
Set the system parameters of the Oracle database.
Restart the instance and perform the archivelog switchover three times.
Enable archivelog for the source Oracle database
Execute the following statement to check whether archivelog is enabled:
select log_mode from v$database;
The value of the log_mode field must be archivelog. Otherwise, perform the following steps to change it:
Run the following commands to enable archivelog.
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
Run the following command to view the path and quota of archived logs.
View the path and quota of the recovery file. We recommend that you set the db_recovery_file_dest_size parameter to a relatively large value. After you enable archivelog, you need to regularly clear the archived logs by using RMAN or other methods.
show parameter db_recovery_file_dest;
Change the quota of archived logs as needed.
alter system set db_recovery_file_dest_size=50G scope=both;
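If you use RMAN for this cleanup, a command along the following lines can be run in an RMAN session; the seven-day window is only an example and should follow your backup and retention policy:
DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';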
Enable supplemental_log in the source Oracle database
LogMiner Reader allows you to enable only table-level supplemental_log for an Oracle database. If you create tables in the Oracle instance before the migration, you must enable supplemental_log for the primary key and unique key before you perform DML operations on them. Otherwise, OMS reports an error indicating that the logs are incomplete.
Notice
You need to enable supplemental_log in the primary Oracle database.
If the indexes are inconsistent between the source and destination databases, if the ETL does not meet the expectation, or if the migration performance of partitioned tables deteriorates, you need to add the following supplemental_logs:
Add the database-level or table-level supplemental_log_data_pk and supplemental_log_data_ui.
Add columns to the supplemental_logs:
- Add all columns involved in the primary keys or unique keys of the source and destination databases to resolve the problem of index inconsistency between the source and destination databases.
- If an ETL exists, add the ETL columns to resolve the problem that the ETL does not meet the expectation.
- If the destination table is a partitioned table, add the partitioning columns to resolve the problem that the write performance deteriorates because partition pruning cannot be performed.
You can execute the following statement to check the addition result.
select log_group_type from all_log_groups where owner = 'Database' and table_name = 'Table';
If the check result includes ALL COLUMN LOGGING, the check is passed. Otherwise, check whether the ALL_LOG_GROUP_COLUMNS table contains all of the preceding columns.
Sample statement for adding columns to supplemental_logs:
alter table <table_name> add supplemental log group <table_name_group> (c1, c2) always;
The following table describes the possible risks and solutions when you perform DDL operations in a running data migration project.
| Operation | Risks | Solution |
|---|---|---|
| CREATE TABLE (table to be synchronized) | If the table in the destination database is a partitioned table, the table indexes in the source and destination databases are inconsistent, or ETL is required, the data migration performance may be affected and ETL may not meet the expectation. | Database-level primary key and unique key supplemental_logs must be enabled. Manually add the involved columns to the supplemental_logs. |
| Add, delete, or modify the primary key, unique key, or partition column, or modify the ETL column | This violates the rule of adding supplemental_logs upon start and may result in data inconsistency or reduced migration performance. | Add supplemental_logs based on the preceding rules. |
LogMiner Reader uses one of the following two methods to check whether supplemental_log is enabled. If not, LogMiner Reader exits.
Enable supplemental_log_data_pk and supplemental_log_data_ui at the database level.
Run the following command to check whether the supplemental_log is enabled. If the returned values are both YES, the supplemental_log is enabled.
select supplemental_log_data_pk, supplemental_log_data_ui from v$database;
Otherwise, perform the following steps:
1. Execute the following statement to enable the supplemental_log.
alter database add supplemental log data(primary key, unique) columns;
2. Perform archivelog switchover three times. For an Oracle RAC, perform the switchover on the instances alternately.
alter system switch logfile;
The reason for performing the archivelog switchover three times:
When the Oracle store locates the start time to pull log files, it rolls back 0 to 2 archived logs based on the specified timestamp. Therefore, after you enable the supplemental_log, you need to perform the archivelog switchover three times to prevent the store from pulling the logs that are generated before the specified timestamp. Otherwise, the store exits unexpectedly.
The reason for alternately performing the archivelog switchover among multiple instances in an RAC system:
In an Oracle RAC system, if you perform the archivelog switchover multiple times on one instance and then perform the switchover on the next instance, the latter instance may pull logs that are generated before the supplemental_log is enabled.
Enable supplemental_log_data_pk and supplemental_log_data_ui at the table level.
1. Execute the following statement to confirm whether supplemental_log_data_min is enabled at the database level.
select supplemental_log_data_min from v$database;
If the returned value is YES or IMPLICIT, the supplemental_log is enabled.
2. Execute the following statement to check whether the table-level supplemental_log is enabled for the tables to be synchronized.
select log_group_type from all_log_groups where owner = 'xxx' and table_name = 'yyy';
Each type of supplemental_log returns one row. The results must contain ALL COLUMN LOGGING, or both PRIMARY KEY LOGGING and UNIQUE KEY LOGGING.
If the table-level supplemental_log is not enabled, execute the following statement.
alter table table_name add supplemental log data(primary key, unique) columns;
3. Perform archivelog switchover three times. For an Oracle RAC, perform the switchover on the instances alternately.
alter system switch logfile;
(Optional) Set the system parameters of the Oracle database
We recommend that you set the _log_parallelism_max parameter of the Oracle database to 1. The default value is 2.
You can use one of the following two methods to query the value of the _log_parallelism_max parameter.
Method 1
SELECT NAM.KSPPINM, VAL.KSPPSTVL, NAM.KSPPDESC FROM SYS.X$KSPPI NAM, SYS.X$KSPPSV VAL WHERE NAM.INDX = VAL.INDX AND NAM.KSPPINM LIKE '_%' AND UPPER(NAM.KSPPINM) LIKE '%LOG_PARALLEL%';
Method 2
select value from v$parameter where name = '_log_parallelism_max';
Execute one of the following statements to modify the value of the _log_parallelism_max parameter.
Oracle RAC
alter system set "_log_parallelism_max"=1 sid='*' scope=spfile;
Non-Oracle RAC
alter system set "_log_parallelism_max"=1 scope=spfile;
When you modify the value of the _log_parallelism_max parameter in Oracle Database 10g, if the error message write to SPFILE requested but no SPFILE specified at startup is returned, perform the following operations:
create spfile from pfile;
shutdown immediate;
startup;
show parameter spfile;
Restart the instance and perform log switchover
After completing the preceding operations, restart the instance and perform log switchover three times.
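For example, a minimal sketch of this step; perform the restart only during an approved maintenance window:
shutdown immediate;
startup;
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;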
Create a data migration project
Create a migration project.
Log on to the OMS console.
In the left-side navigation pane, click Data Migration.
On the Data Migration page, click Create Migration Project in the upper-right corner.
On the Select Source and Destination page, configure the parameters.
| Parameter | Description |
|---|---|
| Migration Project Name | We recommend that you set it to a combination of Chinese characters, digits, and letters. It must not contain any spaces and cannot exceed 64 characters in length. |
| Label | Click the field and select the target tag from the drop-down list. You can click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data migration projects. |
| Source | If you have created an Oracle data source, select it from the drop-down list. If you have not created a data source, click Create Data Source in the drop-down list and create a data source in the dialog box that appears on the right. For more information, see Create an Oracle data source. |
| Destination | If you have created Oracle tenants in OceanBase Database as data sources, select one from the drop-down list. If you have not created a data source, click Create Data Source in the drop-down list and create a data source in the dialog box that appears on the right. For more information, see Create OceanBase Database physical tables as a data source. |
Click Next. On the Select Migration Type page, specify the following parameters.
The options available for Migration Type include Schema Migration, Full Migration, Incremental Synchronization, Full Verification, and Reverse Incremental Migration.
The following limits apply to each migration type.
Full Migration: If you select Full Migration, we recommend that you use the GATHER_SCHEMA_STATS or GATHER_TABLE_STATS procedure to collect the statistics of the Oracle database before data migration (see the example after this list).
Incremental Synchronization: The options for Incremental Synchronization are DML Synchronization and DDL Synchronization. The DML operations available for synchronization are Insert, Delete, and Update. You can select the operations as needed. For more information, see Supported DDL operations for incremental migration from an Oracle database to an Oracle tenant of OceanBase Database. Incremental Synchronization has the following limits:
- For Oracle 12c and later versions, if you select DDL Synchronization, the table name and column name cannot exceed 30 bytes in length when you add or change a column. If you want the database to support table names and column names of more than 30 bytes in length, set the ENABLE_GOLDENGATE_REPLICATION parameter as the SYS user, and set deliver2store.logminer.need_check_object_length to false.
- Set ENABLE_GOLDENGATE_REPLICATION as follows. For a Real Application Cluster (RAC) environment, set this parameter for each node. If the Oracle database is in Active Data Guard (ADG) mode, set this parameter in the ADG source database.
  alter system set ENABLE_GOLDENGATE_REPLICATION=true SCOPE=BOTH;
- Query ENABLE_GOLDENGATE_REPLICATION as follows.
  SELECT K.KSPPINM, V.KSPPSTVL FROM SYS.X$KSPPI K, SYS.X$KSPPSV V WHERE K.INDX=V.INDX AND UPPER(K.KSPPINM) = 'ENABLE_GOLDENGATE_REPLICATION';
- If you do not select DDL Synchronization, make sure that the source database involves no schema modifications and that the incremental DML data has been synchronized to the destination before DDL modifications. Then, perform the related DDL operations in the source and destination databases respectively.
- If you do not select DDL Synchronization, perform DDL operations on tables in the migration link in the destination database first. Otherwise, data migration may fail.
- If you have selected DDL Synchronization and you perform a DDL operation that OMS does not support for incremental migration in the source database, data migration may fail.
- The source Oracle database does not support incremental synchronization of tables that use the empty_clob() function.
Full Verification:
- If you select Full Verification, we recommend that you collect the statistics of the Oracle database and the Oracle tenant of OceanBase Database before full verification.
- If you have selected Incremental Synchronization but did not select all DML statements in DML Synchronization, OMS does not support full data verification in this scenario.
Reverse Incremental Migration: If a table to be migrated has no primary key or unique index and a large amount of data in the table is changed, reverse incremental migration takes a long time. In this case, you can add unique indexes in the source database. You cannot select Reverse Incremental Migration in the following cases:
- Multi-table aggregation and synchronization are enabled.
- Multiple schemas are configured in a rule to match one type of objects.
- For Oracle 12c and later versions, if you select DDL Synchronization, the table name and column name cannot exceed 30 bytes in length when you add or change a column.
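For reference, a minimal sketch of collecting statistics in the source Oracle database before full migration, run in SQL*Plus or an equivalent client; the schema and table names are hypothetical:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_USER');
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_USER', tabname => 'T_ORDERS');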
(Optional) Click Next. If you select Reverse Incremental Migration but the ConfigUrl, username, or password is not configured for the data source of the destination Oracle tenant of OceanBase Database, the More about Data Sources dialog box appears, prompting you to configure related parameters. For more information, see Create OceanBase Database physical tables as a data source.
After you configure the parameters, click Test Connectivity. After the test succeeds, click Save.
Click Next. On the Select Migration Objects page, select the migration objects and migration scope.
You can select one of the following two modes to migrate objects: Specify Objects or Match Rules. If you select DDL Synchronization, only the Match Rules option is available.
Select Specify Objects. Then select the objects to be migrated on the left and click > to add them to the list on the right. You can select tables and views of one or more databases as the migration objects.
Notice
The name of a table to be migrated and the names of columns in the table must not contain Chinese characters.
If the database or table name contains a double dollar sign ($$), you cannot create the migration project.
When you migrate data from an Oracle database to an Oracle tenant of OceanBase Database, OMS allows you to import objects through text, rename object names, set row filters, view column information, and remove one or all objects to be migrated.
The supported operations and their steps are as follows.
Import Objects
- In the list on the right of the Specify Migration Scope section, click Import Objects in the upper-right corner.
- In the dialog box that appears, click OK.
  Notice: This operation will overwrite previous selections. Proceed with caution.
- In the Import Objects dialog box, import the objects to be migrated.
  You can import CSV files to rename databases or tables and to set row filtering conditions. For more information, see Download and import the settings of migration objects.
- Click Validate.
- After the validation succeeds, click OK.
Rename
- In the list on the right of the Specify Migration Scope section, hover the pointer over the target object.
- Click Rename.
- Enter a new name and click OK.
Settings
OMS allows you to set WHERE conditions to filter data by row and view column information.
- In the list on the right of the Specify Migration Scope section, hover the pointer over the target object.
- Click Settings.
- In the Settings dialog box, specify a standard SQL WHERE clause to filter data by row (see the example after this list). The setting takes effect for both full migration and incremental synchronization.
  Notice:
  - Add an escape character (`) for column names. Example: `col`.
  - Only the data that meets the WHERE condition is synchronized to the destination data source, thereby filtering data by row.
  - If row-based filtering with the WHERE clause is enabled, right-trim is forcibly performed on data of the CHAR or VARCHAR type, which may cause an inaccurate comparison of the VARCHAR data. Proceed with caution.
- Click OK.
You can also view the column information of the migration object in the View Columns section.
Remove/Remove All
OMS allows you to remove one or more objects from the destination database during data mapping.
- Remove a single migration object: In the list on the right of the Specify Migration Scope section, hover the pointer over the target object, and click Remove. The migration object is removed.
- Remove all migration objects: In the list on the right of the Specify Migration Scope section, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all migration objects.
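For example, a row filter for a hypothetical orders table might look like the following; the column names are placeholders and use the escape character described above:
`ORDER_DATE` >= DATE '2023-01-01' AND `STATUS` = 'PAID'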
When the source database is an Oracle database, if row filtering is enabled for columns other than the primary key and unique key columns, enable supplemental_log for the corresponding columns or all columns.
Statement for enabling supplemental_log for the corresponding columns:
ALTER TABLE table_name ADD SUPPLEMENTAL LOG GROUP log_group_name (column1, column2, column3) ALWAYS;
Statement for enabling supplemental_log for all columns:
-- Enable database-level supplemental_log:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Enable table-level supplemental_log:
ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Select Match Rules. For more information, see Configure matching rules for migration objects.
Click Next. On the Migration Options page, configure the parameters.
| Parameter | Description |
|---|---|
| Concurrency for Full Migration | The value can be Smooth, Normal, or Fast. The amount of resources consumed by a full data migration task varies based on the selected migration performance. You can also modify the configurations of the checker component to customize the concurrency. Notice: To enable this feature, select Full Migration on the Select Migration Type page. |
| Full Verification Concurrency | The value can be Smooth, Normal, or Fast. Different amounts of resources of the source and destination databases are consumed at different concurrencies. You can also modify the configurations of the checker component to customize the concurrency. |
| Incremental Record Retention Time | The duration that incremental parsed files are cached in OMS. A longer retention period indicates more disk space occupied by the store component of OMS. |
| Whether to Allow Destination Table to Be Not Empty During Full Migration | If destination tables are allowed to be not empty during full migration, full verification is performed in IN mode. Notice: To enable this feature, select Full Migration on the Select Migration Type page. |
| Whether to Allow Post-indexing | Specifies whether to allow post-indexing after full migration is completed. Post-indexing can shorten the time of full migration. Notice: To enable this feature, select both Schema Migration and Full Migration on the Select Migration Type page. Only non-unique key indexes can be created after the migration is completed. |
Click Precheck to start a precheck on the data migration project.
During the precheck, OMS checks the read and write privileges of the database users and the network connections of the databases. The data migration project can be started only after it passes all check items. If an error is returned:
You can troubleshoot the error and run the precheck again.
You can also click Skip in the Actions column of the precheck item that returns the error. Then, a dialog box appears, indicating the impact that may be caused if you choose to skip this check item. If you want to continue, click OK in the dialog box.
Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data migration project. You can start the project later as needed.
OMS allows you to modify migration objects while a data migration project is running. For more information, see View and modify migration objects. After a data migration project is started, the migration objects are migrated based on the selected migration types. For more information, see the "View migration details" section in View details of a data migration project.