This topic describes how to use OceanBase Migration Service (OMS) to migrate data from an Oracle database to the MySQL compatible mode of OceanBase Database (including physical, public cloud, and standalone data sources).
Background information
You can create a data migration task in the OMS console to seamlessly migrate the existing business data and incremental data from an Oracle database to a MySQL compatible mode database of OceanBase Database through schema migration, full migration, and incremental synchronization.
The Oracle database supports the following modes: primary database only, standby database only, and primary/standby databases. The following table describes the data migration operations supported by each mode.
| Mode | Supported operation |
|---|---|
| Primary database only | Schema migration, full migration, incremental synchronization, and reverse increment |
| Standby database only | Schema migration, full migration, and incremental synchronization |
| Primary/Standby databases | Primary database: reverse increment. Standby database: schema migration, full migration, and incremental synchronization |
Prerequisites
You have created a corresponding schema in the MySQL compatible mode of the target OceanBase Database.
ARCHIVELOG mode is enabled in the source Oracle database, and a log switch has occurred so that archived log files exist before incremental replication to OMS starts. You can verify this and the other prerequisites with the statements after this list.
The LogMiner tool is installed and properly configured in the source Oracle database. You can use LogMiner to view the content of Oracle archived log files.
You have created dedicated database users in the source Oracle database and the MySQL compatible mode of the target OceanBase Database for the data migration task, and granted relevant privileges to these users. For more information, see Create a database user.
The Oracle instance has enabled supplemental logging at the database level or table level. For more information, see Supplemental logging in Oracle databases.
Clock synchronization must be configured between the Oracle server and the OMS server. Otherwise, data loss may occur. If you are using Oracle RAC, clock synchronization must also be configured between the multiple Oracle instances.
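The following statements are a minimal sketch for verifying these prerequisites on the source Oracle database; they assume the connected user can query `v$database` and `dba_objects`:

```sql
-- Confirm that the database runs in ARCHIVELOG mode (expected value: ARCHIVELOG).
SELECT log_mode FROM v$database;

-- Confirm that the LogMiner package is available and valid.
SELECT object_name, status
FROM dba_objects
WHERE object_name = 'DBMS_LOGMNR' AND object_type = 'PACKAGE';

-- Force a log switch so that an archived log file exists before incremental replication starts.
ALTER SYSTEM SWITCH LOGFILE;
```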
Limitations
Limitations on the source database
Do not perform DDL operations that modify database or table schemas during schema migration or full migration. Otherwise, the data migration task may be interrupted.
OMS supports Oracle Database 10g, 11g, 12c, 18c, and 19c. For Oracle Database 12c and later, both container databases (CDBs) and pluggable databases (PDBs) are supported.
When you migrate data from an Oracle database to the MySQL compatible mode of OceanBase Database, DDL synchronization is not supported.
OMS does not support triggers in the target database. If triggers exist in the target database, the data migration may fail.
Data source identifiers and user accounts must be globally unique in OMS.
OMS supports migrating only databases, tables, and columns whose names are ASCII-encoded and do not contain special characters (spaces, line breaks, or .|"'`()=;/&\).
OMS does not support reverse incremental migration from an Oracle database earlier than version 12c to the MySQL compatible mode of OceanBase Database. If you attempt such a migration, it may fail.
The parsing of incremental logs from an Oracle database of a version earlier than 11g is not supported. If you want to perform incremental migration from an Oracle database of a version earlier than 11g to a MySQL compatible mode of OceanBase Database, perform full migration instead.
OMS does not support migrating database objects, such as schemas, tables, and columns, whose names are longer than 30 bytes in an Oracle database of version 12c or later. Likewise, in reverse incremental migration, you cannot create database objects with names longer than 30 bytes in the MySQL compatible mode of OceanBase Database. If you want to migrate such database objects, see Migrate an Oracle database object with a name longer than 30 bytes. You can locate such objects with the queries after this item.
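The following queries are a sketch for locating offending objects; they assume access to the DBA views, and `SCHEMA_NAME` is a placeholder:

```sql
-- Tables whose names exceed 30 bytes.
SELECT owner, table_name
FROM dba_tables
WHERE owner = 'SCHEMA_NAME' AND LENGTHB(table_name) > 30;

-- Columns whose names exceed 30 bytes.
SELECT owner, table_name, column_name
FROM dba_tab_columns
WHERE owner = 'SCHEMA_NAME' AND LENGTHB(column_name) > 30;
```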
OMS does not support executing the following UPDATE statement in the source Oracle database:

```sql
UPDATE TABLE_NAME SET KEY=KEY+1;
```

In the statement, `TABLE_NAME` indicates the name of the table, and `KEY` indicates the name of a NUMERIC column defined as the primary key.
Considerations
When the source Oracle database is in the primary-standby mode, if the primary and standby databases have different numbers of running instances, some instances may not have their incremental logs pulled. In this case, you must manually set parameters of the Store component to specify the instances whose incremental logs need to be pulled from the standby database. The operation is as follows:
Stop the Store component immediately after it is started.
On the Update Configuration page of the Store component, add the `deliver2store.logminer.instance_threads` parameter to specify the instances whose incremental logs need to be pulled. Separate multiple threads with `|`, for example, `1|2|3`. For more information about how to update the component, see Why does OMS lose archive logs when it synchronizes incremental data from an Oracle standby database?
After you set the parameters, restart the Store component.
Five minutes after the restart, run `grep 'log entries from' connector/connector.log` to check which instances have had their logs pulled. The thread field in the output indicates the instances whose logs are being pulled.
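To decide which redo threads to specify in the `deliver2store.logminer.instance_threads` parameter, you can list the running instances and their redo thread numbers. This is a sketch that assumes the migration user can query `gv$instance`:

```sql
-- One row per running instance in a RAC database; THREAD# is the redo thread number.
SELECT inst_id, instance_name, thread#, status FROM gv$instance;
```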
When you need to synchronize data from an Oracle database incrementally, it is recommended that the size of a single archive file of the source Oracle database be less than 2 GB. If a single archive file is excessively large, the following risks exist:
The larger the archive file, the more time is required to pull it.
When the Oracle database is in primary/standby mode, incremental data is pulled from the standby database, and only complete archive files can be pulled: an archive file becomes available for pulling only after it has been fully generated. The larger an archive file, the longer the delay before it can be processed and the more time required to process it.
The larger the archive file, the more memory is required for the Store component when the same degree of parallelism is set for pulling data.
Retain the archive files of the source Oracle database for more than 2 days. Otherwise, in the case of a sudden surge in the amount of archive data or a processing exception in the Oracle store, the archive files required for restoration may no longer exist, and data restoration cannot be performed.
If character data in the source Oracle database contains trailing spaces, the VARCHAR or CHAR data imported into the MySQL compatible mode of the target OceanBase Database is right-trimmed, which causes data inconsistency.
If the source Oracle database contains a DML statement that swaps primary key values, OMS fails to parse the log, resulting in data loss during migration to the target database. An example of such a DML statement is as follows:

```sql
UPDATE test SET c1=(CASE WHEN c1=1 THEN 2 WHEN c1=2 THEN 1 END) WHERE c1 IN (1,2);
```

The character set of the Oracle instance can be AL32UTF8, AL16UTF16, ZHS16GBK, or GB18030. If the source character set is UTF-8, we recommend that you use a compatible character set, such as UTF-8 or UTF-16, for the target to avoid issues such as garbled text caused by incompatible character sets.
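You can check the source character sets with the following statements:

```sql
-- Database character set (applies to CHAR/VARCHAR2/CLOB columns).
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

-- National character set (applies to NCHAR/NVARCHAR2/NCLOB columns).
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_NCHAR_CHARACTERSET';
```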
When OMS pulls incremental data from a standby Oracle database, if the selected migration types include incremental synchronization and reverse incremental synchronization and the pull of incremental data fails, you can execute `ALTER SYSTEM SWITCH LOGFILE;` in the primary database to help OMS work properly.

If the clocks between nodes, or between the client and the server, are out of synchronization, the latency reported during incremental synchronization or reverse incremental synchronization may be inaccurate.
For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.
When you migrate data from an Oracle database to the MySQL compatible mode of OceanBase Database, operations that change ROWIDs, such as import, export, table alteration, table flashback, and partition splitting or merging, are not supported on the tables to be migrated.
During reverse incremental synchronization from an Oracle database to the MySQL compatible mode of OceanBase Database, if the version of OceanBase Database is earlier than V3.2.x and the source table is a multi-partition table with a global unique index, updating the partition key of the table may cause data loss during migration.
If the primary key or unique key used as the verification field contains a column of the INTERVAL data type, you must change the verification mode to `inmode` by using the `filter.verify.inmod.tables` parameter. Otherwise, the verification result is inaccurate. For example, you can add or update the value of the `filter.verify.inmod.tables` parameter to `.*;.*;.*`. For more information, see Update the configurations of a Full-Verification component.

If a function such as ON UPDATE CURRENT_TIMESTAMP is used on a time-type column in the target OceanBase database, data inconsistency may occur between the source and target databases.
Verify whether the migration precision of FLOAT or DOUBLE columns meets your expectations for the source and target databases. If the precision of the target column type is less than that of the source column type, the data may be truncated, causing data inconsistency between the source and target databases.
If you modify the unique index of the target database, you must restart the incremental synchronization component. Otherwise, data inconsistency may occur.
If forward switchover is not enabled for the data migration task, delete the unique indexes and pseudo columns from the target database. If you do not delete them, data cannot be written, and new pseudo columns generated when data is imported downstream will conflict with the pseudo columns in the source database.
If the data migration task is enabled for forward switchover, OMS will automatically delete hidden columns and unique indexes based on the type of the data migration task. For more information, see Description of the hidden column mechanism of data migration service.
By default, the `lower_case_table_names` parameter of the target database is set to 1, and target database objects are created in lowercase.

In multi-table aggregation scenarios:
We recommend that you map the source and target databases by using matching rules.
We recommend that you manually create schemas in the target database. If you use OMS to create schemas, skip failed objects in the schema migration step.
If the source and target table schemas are not exactly the same, data inconsistency may occur. Known scenarios are as follows:
When users manually create table schemas, if the data types of any columns are not supported by OMS, implicit data type conversion may occur, which causes inconsistent column types between the source and target columns.
If the length of a column in the target database is shorter than that in the source database, the data of this column may be automatically truncated, causing data inconsistency between the source and target databases.
If you select only Incremental Synchronization when you create a data migration task, OMS requires that the archive logs in the source database be retained for more than 48 hours.
If you select Full Migration + Incremental Synchronization when you create a data migration task, OMS requires that the archive logs in the source database be retained for at least 7 days. Otherwise, the data migration task may fail or the data in the source and target databases may be inconsistent because OMS cannot obtain incremental logs.
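To check how long archived logs are currently retained in the source database, the following query is a sketch based on `v$archived_log`:

```sql
-- Daily counts of archived logs that still exist; verify that the history
-- covers the required retention window (48 hours or 7 days).
SELECT TRUNC(first_time) AS log_day, COUNT(*) AS archived_log_count
FROM v$archived_log
WHERE deleted = 'NO'
GROUP BY TRUNC(first_time)
ORDER BY log_day;
```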
If the source or target table objects differ only in capitalization of their names, the migration result may not be as expected because the source or target object names are case-insensitive.
For incremental synchronization tasks whose source is an Oracle database (excluding those that obtain incremental data through Kafka), if a single transaction spans multiple archive logs, LogMiner may fail to return complete data information for the transaction, leading to data loss. We recommend that you configure full data verification and data correction to ensure data consistency.
Check and modify the system configuration of an Oracle instance
You need to:
Enable the ARCHIVELOG mode in the source Oracle database.
Enable supplemental logging in the source Oracle database.
(Optional) Set the system parameters of the Oracle database.
Enable the ARCHIVELOG mode in the source Oracle database
Run the following statement to check the log mode:

```sql
SELECT log_mode FROM v$database;
```

The value of the `log_mode` field must be `ARCHIVELOG`. Otherwise, modify the mode by using the following method.
Run the following statements to enable the ARCHIVELOG mode:

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```

Run the following statement to view the path and quota of the archive logs:

```sql
SHOW PARAMETER db_recovery_file_dest;
```

Check the path and quota of the recovery files. We recommend that you set the `db_recovery_file_dest_size` parameter to a large value. In addition, after archiving is enabled, you need to regularly clean up the archive logs by using RMAN or other methods.

Modify the quota of the archive logs as needed:

```sql
ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE = BOTH;
```
Enable supplemental logging in the source Oracle database
For more information about supplemental logging in an Oracle database, see Oracle supplemental logging.
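To check whether supplemental logging is already enabled at the database level, you can query `v$database`, for example:

```sql
-- YES in these columns indicates minimal, primary key, and all-column
-- supplemental logging, respectively.
SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_all
FROM v$database;
```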
Set system parameters of the Oracle database (optional)
We recommend that you set the system parameter _log_parallelism_max of the Oracle database to 1. The default value of this parameter is 2.
Query the value of the `_log_parallelism_max` parameter by using one of the following methods:

Method 1

```sql
SELECT NAM.KSPPINM, VAL.KSPPSTVL, NAM.KSPPDESC
FROM SYS.X$KSPPI NAM, SYS.X$KSPPSV VAL
WHERE NAM.INDX = VAL.INDX
AND NAM.KSPPINM LIKE '_%'
AND UPPER(NAM.KSPPINM) LIKE '%LOG_PARALLEL%';
```

Method 2

```sql
SELECT VALUE FROM v$parameter WHERE name = '_log_parallelism_max';
```
Modify the `_log_parallelism_max` parameter. The modification method depends on whether the Oracle database is a RAC deployment.

Modify the parameter in an Oracle RAC environment:

```sql
ALTER SYSTEM SET "_log_parallelism_max" = 1 SID = '*' SCOPE = spfile;
```

Modify the parameter in a non-RAC Oracle database:

```sql
ALTER SYSTEM SET "_log_parallelism_max" = 1 SCOPE = spfile;
```
If you modify the `_log_parallelism_max` parameter in Oracle Database 10g and receive the error message `write to SPFILE requested but no SPFILE specified at startup`, perform the following operations:

```sql
CREATE SPFILE FROM PFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
SHOW PARAMETER SPFILE;
```

After you modify the `_log_parallelism_max` parameter, restart the instance, switch two archive logs, and wait for more than 5 minutes before you start a task.
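A minimal example of switching the archive logs twice after the restart:

```sql
-- Run twice to switch (and archive) the current redo log twice.
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
```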
Data type mappings
| Oracle database | OceanBase Database in MySQL compatible mode |
|---|---|
| CHAR(n) | CHAR(n) |
| CHAR(n CHAR) | VARCHAR(n) |
| CHAR(n BYTE) | CHAR(n) |
| NCHAR(n) | VARCHAR(n) |
| VARCHAR2 | VARCHAR |
| NVARCHAR2 | VARCHAR |
| NUMBER(p, s) | DECIMAL(p, s) / NUMERIC(p, s). If (p, s) is not specified for NUMBER, the column is converted to DECIMAL(65, 30) by default |
| LONG | LONGTEXT |
| RAW | VARBINARY |
| CLOB | LONGTEXT |
| NCLOB | LONGTEXT |
| BLOB | LONGBLOB |
| FLOAT | DOUBLE |
| BINARY_FLOAT | DOUBLE |
| BINARY_DOUBLE | DOUBLE/DOUBLE PRECISION |
| DATE | DATETIME |
| TIMESTAMP(n) | DATETIME(n) |
| TIMESTAMP WITH TIME ZONE | VARCHAR(50) |
| TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP |
| INTERVAL YEAR(p) TO MONTH | VARCHAR(50) |
| INTERVAL DAY(p) TO SECOND | VARCHAR(50) |
| BFILE | BLOB |
| LONG RAW | LONGBLOB |
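As a worked example of these mappings, consider a hypothetical source table (all names are placeholders). Per the table above, NUMBER(10,0) maps to DECIMAL(10,0), an unconstrained NUMBER maps to DECIMAL(65,30), VARCHAR2 maps to VARCHAR, and a TIMESTAMP with precision 9 is reduced to precision 6:

```sql
-- Source table in Oracle:
--   CREATE TABLE orders (
--     id   NUMBER(10,0) PRIMARY KEY,
--     note VARCHAR2(200),
--     amt  NUMBER,
--     ts   TIMESTAMP(9)
--   );

-- Expected schema in the MySQL compatible mode of OceanBase Database:
CREATE TABLE orders (
  id   DECIMAL(10,0) PRIMARY KEY,
  note VARCHAR(200),
  amt  DECIMAL(65,30),
  ts   DATETIME(6)
);
```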
Procedure
Create a data migration task.
Log in to the OMS console.
In the left-side navigation pane, click Data Migration.
On the Data Migration page, click Create Task in the upper-right corner.
On the Create Task page, specify the task name.
We recommend that you use a combination of Chinese characters, numbers, and letters. The name cannot contain spaces and must be 64 characters or less in length.
Notice
The task name is a unique identifier in the OMS system. We recommend that you use a custom task name.
On the Select Source and Target page, configure the parameters.
| Parameter | Description |
|---|---|
| Source | If you have created an Oracle data source, select it from the drop-down list. If not, click New Data Source in the drop-down list to create one. For more information, see Create an Oracle data source. |
| Target | If you have created a MySQL-compatible OceanBase data source (including a physical data source, public cloud data source, or standalone data source), select it from the drop-down list. If not, click New Data Source in the drop-down list to create one. For more information, see Create a physical OceanBase data source, Create a public cloud OceanBase data source, or Create a standalone OceanBase data source. |
| Tag (Optional) | Click the field and select a tag from the drop-down list. You can also click Manage Tags in the drop-down list to create, modify, or delete a tag. For more information, see Manage data migration tasks by using tags. |

Click Next. On the Select Migration Type page, select the migration type for the current data migration task.
Options for Migration Type are Schema Migration, Full Migration, Incremental Synchronization, and Reverse Increment.
| Migration Type | Description |
|---|---|
| Schema Migration | After the schema migration task starts, OMS migrates data objects in the source database to the target database and automatically filters out temporary tables. Schema migration is subject to the following limitations: <br>- The NUMERIC type in the MySQL compatible mode of OceanBase Database cannot be used as a partition key. If a partitioned table in the MySQL compatible mode of OceanBase Database does not have a primary key, a partition key column of the NUMBER or INT type in the source Oracle database is migrated to the NUMERIC type, which causes an error during migration. <br>- A TIMESTAMP column with a precision of 9 in the source Oracle database is converted to the DATETIME type with a precision of 6 in the MySQL compatible mode of OceanBase Database. Note the loss of precision. <br>- The BINARY_FLOAT type in the source Oracle database is converted to the DOUBLE type in the MySQL compatible mode of OceanBase Database. Note that a loss of precision may occur during reverse incremental synchronization. |
| Full Migration | After the full migration task starts, OMS migrates the existing data of tables in the source database to corresponding tables in the target database. If you select Full Migration, we recommend that you collect statistics on the source Oracle database by using the GATHER_SCHEMA_STATS or GATHER_TABLE_STATS procedure before data migration (see the example after this table). |
| Incremental Synchronization | After the incremental synchronization task starts, OMS synchronizes data that is added, modified, or deleted in the source database to corresponding tables in the target database. Incremental Synchronization supports the INSERT, DELETE, and UPDATE DML synchronization options, which you can configure as needed. For more information, see Configure DDL/DML operations. <br>Notice: If all columns of a table to be migrated in the source Oracle database are of the LOB type (BLOB, CLOB, or NCLOB), or if Obtain Incremental Data through Kafka is configured for the Oracle data source, Incremental Synchronization is not supported. |
| Reverse Incremental Synchronization | After reverse incremental synchronization starts, changes generated in the target database after the business switchover are synchronized to the source database in real time. Usually, the settings for reverse incremental synchronization reuse those for incremental synchronization, and you can modify them as needed. The following scenarios do not support Reverse Increment: <br>- Multi-table aggregation is involved. <br>- Multiple source schemas map to the same target schema. |
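As recommended for Full Migration above, you can collect statistics before migration. The following is a sketch for SQL*Plus; `SCHEMA_NAME` and `TABLE_NAME` are placeholders:

```sql
-- Gather statistics for the whole schema to be migrated.
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCHEMA_NAME');

-- Or gather statistics for a single table.
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCHEMA_NAME', 'TABLE_NAME');
```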
Click Next. On the Select Migration Objects page, select the objects to be migrated for the current data migration task.
You can select Specify Objects or Match by Rule to specify the migration objects. The following procedure describes how to specify migration objects by using the Specify Objects option. For information about the procedure for specifying migration objects by using the Match by Rule option, see Configure matching rules.
Notice
If the name of a database or a table contains the "$$" character, the data migration task cannot be created.
OMS automatically filters out unsupported tables. For more information about the SQL statements for querying table objects, see SQL statements for querying table objects.
On the Select Migration Objects page, select Specify Objects.
Select one or more tables or views in the Source Object(s) list.
Click > to add the selected objects to the Target Object(s) list.
OMS allows you to import objects in the form of text, rename migration objects, configure row filters, select columns, and remove one or all migration objects.
Note
When you select Match by Rule to specify migration objects, object renaming is implemented based on the syntax of the specified matching rules. In the operation area, you can only set filter conditions. For more information, see Configure matching rules.
| Action | Steps |
|---|---|
| Import objects | 1. In the Target Object(s) list, click Import Objects in the upper-right corner. <br>2. In the dialog box that appears, click OK. <br>Notice: The import operation overwrites the previous selection. Proceed with caution. <br>3. In the Import Migration Objects dialog box, select the objects to be migrated. You can rename databases and tables and configure row filters by importing a CSV file. For more information, see Download and import migration object configurations. <br>4. Click Validate. <br>5. If the imported objects pass the validation, click OK. |
| Rename objects | OMS allows you to rename migration objects. For more information, see Rename a database or table. |
| Configure filters | OMS allows you to configure row filters and specify the columns to be migrated. <br>1. Hover the pointer over the target object in the right-side list of the selection area. <br>2. Click Settings when it appears. <br>3. In the Settings dialog box, you can perform the following operations: <br>- In the Row Filters section, configure row filters by entering the WHERE clause of a standard SQL statement. For more information, see Use SQL conditions to filter data. <br>- In the Select Columns section, select the columns to be migrated. For more information, see Column filtering. |
| Remove one or all objects | OMS allows you to remove one or all selected objects from the target list. <br>- Remove a migration object: hover the pointer over the target object in the Target Object(s) list, and click Remove when it appears. <br>- Remove all migration objects: in the upper-right corner of the Target Object(s) list, click Remove All. In the dialog box that appears, click OK. |
When you filter rows for a table to be migrated from the source Oracle database, if the table does not have a primary key or unique key, enable supplemental logging for the filtered columns or set the supplemental logging level to ALL.
To enable supplemental logging for a specific column, execute the following statement:
```sql
ALTER TABLE table_name ADD SUPPLEMENTAL LOG GROUP log_group_name (column1, column2, column3) ALWAYS;
```

To enable supplemental logging at the ALL level, execute one of the following statements:

```sql
-- Method 1: Enable supplemental logging at the database level.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Method 2: Enable supplemental logging at the table level.
ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

Click Next. On the Migration Options page, configure the parameters.
Schema Migration
The following parameters are displayed only if you select Schema Migration in the Select Migration Type step.
| Parameter | Description |
|---|---|
| Automatically Enter Next Stage upon Completion | If you select schema migration and any other migration type, you can specify whether to automatically proceed to the next stage after schema migration is completed. The default value is Yes. You can also view and modify this value on the Schema Migration tab of the data migration task details page. |
| Normal Index Migration Method | The migration mode for non-unique key indexes associated with table objects. Valid values: Do Not Migrate, Migrate with Schema, and Post-Full-Migration. The last value is displayed only if full migration is enabled. |

Full Migration
The following parameters are displayed only if you select Full Migration in the Select Migration Type step.
| Parameter | Description |
|---|---|
| Full Migration Rate Limit | You can choose whether to limit the full migration rate as needed. If you choose to limit it, you must specify the RPS and BPS. The RPS specifies the maximum number of rows migrated to the target database per second during full migration, and the BPS specifies the maximum amount of data in bytes migrated to the target database per second during full migration. <br>Note: The RPS and BPS specified here are only for throttling. The actual performance of full migration is affected by factors such as the source and target databases, instance specifications, and configurations. |
| Full Migration Resource Configuration | You can select the default values of read concurrency, write concurrency, and memory for full migration, or customize the resource configuration. The resource configuration limits the resource consumption of the full migration phase. <br>Note: The minimum value is 1, and only integers are supported. |
| Handle Non-empty Tables in Target Database | Valid values: Ignore and Stop Migration. <br>- If you set the value to Ignore, OMS writes data to the target table only if the data does not conflict with existing data in the table. OMS logs the conflicting data and keeps the existing data unchanged. <br>- If you set the value to Stop Migration, OMS reports an error when it finds data in the target table and stops the migration. You need to resolve the issue with the data in the target table before continuing the migration. <br>Note: If you click Restore after an error is reported, OMS ignores this setting and continues to migrate table data. Proceed with caution. |
Incremental Synchronization
The following parameters are displayed only if you select Incremental Synchronization in the Select Migration Type step.
| Parameter | Description |
|---|---|
| Incremental Synchronization Rate Limit | You can choose whether to limit the incremental synchronization rate as needed. If you choose to limit it, you must specify the RPS and BPS. The RPS specifies the maximum number of rows synchronized to the target database per second during incremental synchronization, and the BPS specifies the maximum amount of data in bytes synchronized to the target database per second during incremental synchronization. <br>Note: The RPS and BPS specified here are only for throttling. The actual performance of incremental synchronization is affected by factors such as the source and target databases, instance specifications, and configurations. |
| Incremental Log Pull Resource Configuration | You can select Small, Medium, or Large to use the corresponding default value of Memory, or customize the resource configuration for incremental log pull. By setting the resource configuration for the Store component, you can limit the resource consumption of log pull in the incremental synchronization stage. <br>Note: <br>- In the case of custom configurations, the minimum value is 1, and only integers are supported. <br>- If Obtain Incremental Data through Kafka is configured for the Oracle data source, this parameter is not displayed. |
| Incremental Data Write Resource Configuration | You can select Small, Medium, or Large to use the corresponding default values of Write Concurrency and Memory, or customize the resource configuration for incremental data writes. By setting the resource configuration for the Incr-Sync component, you can limit the resource consumption of data writes in the incremental synchronization stage. <br>Notice: <br>- In the case of custom configurations, the minimum value is 1, and only integers are supported. <br>- If Obtain Incremental Data through Kafka is configured for the Oracle data source, the value of this parameter is the number of concurrent reads from Kafka during incremental synchronization. The default value is 4, the minimum value is 1, and the maximum value is 512. We recommend that you set the maximum value to the number of partitions of the corresponding Kafka topic. |
| Incremental Record Retention Duration | The duration for which incremental parsed files are cached in OMS. A longer retention duration results in more disk space occupied by the Store component. <br>Note: If Obtain Incremental Data through Kafka is configured for the Oracle data source, this parameter is not displayed. |
| Incremental Synchronization Start Timestamp | - If you have selected Full Migration as the migration type, this parameter is not displayed. <br>- If you have selected Incremental Synchronization but not Full Migration, specify a point in time after which the data is to be synchronized. The default value is the current system time. For more information, see Set an incremental synchronization timestamp. |
Reverse Increment
The following parameters are displayed only if you select Reverse Increment in the Select Migration Type step. The parameters are the same as those for incremental synchronization. You can also disable Reuse Incremental Synchronization Configuration in the upper-right corner.
Advanced Options
| Parameter | Description |
|---|---|
| Add Hidden Columns for Tables Without Non-null Unique Keys | If data is to be migrated between OceanBase databases, you must specify whether to add hidden columns for tables without non-null unique keys. For more information, see Hidden column mechanisms. <br>If you set the value to No and a table to be migrated does not have a primary key or a non-null unique key, data duplication may occur when the task restarts or encounters other exceptions. We recommend that you configure a non-null unique key for all tables. |
| Target Table Storage Type | This parameter is displayed only if the target is a MySQL-compatible tenant of OceanBase Database V4.3.0 or later and you select Schema Migration in the Select Migration Type step. The storage type of a table object in the target database can be Default, Row Storage, Column Storage, or Hybrid Row-Column Storage. This parameter specifies the storage type used for target table objects during schema migration and incremental synchronization. For more information, see default_table_store_format. <br>Note: The value Default means that the storage type is determined by the parameter configurations of the target database. Table objects in schema migration and new table objects created by incremental DDL statements are written to the corresponding schemas based on the specified storage type. |
If the parameters on this page cannot meet your requirements, you can click Parameter Configuration at the bottom of the page for more advanced configurations. You can also reference the task templates or component templates that you have configured.
Click Precheck to perform a precheck on the data migration task.
In the Precheck step, OMS checks whether the database user has the required read/write permissions and whether the network connection to the database is established. The data migration task will not start unless all checks pass.
You can perform troubleshooting and re-run the precheck until it succeeds.
You can also click Skip in the Actions column of a check item. In the dialog box that appears, read the description of the impact of skipping the check item, and then click OK.
Click Start Task. If you do not want to start the task immediately, you can click Save to go to the details page of the data migration task, where you can start the task later.
You can click Configure Validation Task in the upper-right corner of the data migration details page to compare the data between the source and target databases. For more information, see Create a data validation task.
After the data migration task is started, it is executed based on the selected migration types. For more information, see the View migration details section in View details of a data migration task.

OMS allows you to modify the migration objects while the data migration task is running. For more information, see View and modify migration objects.