This topic describes how to use OceanBase Migration Service (OMS) to migrate data from an Oracle database to a MySQL tenant of OceanBase Database.
Background
You can create a data migration project in the OMS console to migrate the existing business data and incremental data from an Oracle database to a MySQL tenant of OceanBase Database through schema migration, full migration, and incremental data synchronization.
The Oracle database supports the following modes: single primary database, single standby database, and primary/standby databases. The following table describes the data migration operations supported by each mode.
| Type | Supported operations |
|---|---|
| Single primary database | Schema migration, full migration, incremental synchronization, full verification, and reverse incremental migration |
| Single standby database | Schema migration, full migration, incremental synchronization, and full verification |
| Primary/standby databases | Primary database: reverse incremental migration. Standby database: schema migration, full migration, incremental synchronization, and full verification |
Prerequisites
You have created a corresponding schema in the destination MySQL tenant of OceanBase Database. OMS migrates only tables and columns, so you must create the corresponding schema in the destination database before migration.
You have enabled archivelog for the source Oracle instance and switched the logfile before OMS starts incremental data replication.
You have installed LogMiner in the source Oracle instance, and LogMiner runs properly.
LogMiner enables you to obtain data from the archived logs of the Oracle instance.
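OMS drives LogMiner internally, but as a rough illustration of the mechanism, the following minimal sketch (the file path and schema name are placeholders) registers an archived log, starts LogMiner with the online catalog as its dictionary, and reads the redo records:

```sql
-- Register an archived log file (path is a placeholder) and start LogMiner
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/arch/1_123_456789.arc',
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Inspect the captured changes ('APP_SCHEMA' is a placeholder)
SELECT scn, operation, sql_redo
FROM   v$logmnr_contents
WHERE  seg_owner = 'APP_SCHEMA';

-- Release the LogMiner session
EXEC DBMS_LOGMNR.END_LOGMNR;
```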
You have created dedicated database users in the source Oracle database and the destination MySQL tenant of OceanBase Database for data migration and granted the corresponding privileges to the users. For more information, see Create a database user.
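For illustration only (the authoritative privilege list is in Create a database user), creating the source-side migration user might look like the following; the user name, password, and exact grants are placeholders:

```sql
-- Dedicated migration user in the source Oracle database (names are placeholders)
CREATE USER oms_migrator IDENTIFIED BY "ChangeMe_123";
GRANT CREATE SESSION TO oms_migrator;
-- Read access to business tables and to the dictionary/v$ views used by LogMiner
GRANT SELECT ANY TABLE TO oms_migrator;
GRANT SELECT ANY DICTIONARY TO oms_migrator;
```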
You have made sure that the Oracle instance has enabled the database-level or table-level supplemental_log feature.
If you enable supplemental_log for the primary key and unique key at the database level, tables that do not need to be synchronized also generate a large number of unnecessary logs, which increases the pressure on both LogMiner Reader and the Oracle database when logs are fetched. Therefore, in the OMS console you can enable only table-level supplemental_log for the primary key and unique key of Oracle databases. However, if you configure Set ETL Options to filter out columns other than the primary key and unique key columns when you create a migration task, enable supplemental_log for the corresponding columns or for all columns.
Clock synchronization (such as the NTP service) is required between an Oracle server and the OMS server to avoid data risks. For an Oracle RAC, clock synchronization is also required between Oracle instances.
Limits
OMS supports the following Oracle Database versions: 11g, 12c, 18c, and 19c. Versions 12c and later support container databases (CDBs) and pluggable databases (PDBs).
DDL operations are not supported when you migrate incremental data from an Oracle database to a MySQL tenant of OceanBase Database.
OMS allows you to migrate data from a source Oracle instance that uses the AL32UTF8, AL16UTF16, ZHS16GBK, or GB18030 character set. If the source database uses the UTF-8 character set, we recommend that you use UTF-8 or a superset of it for the destination database.
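To check which character sets the source database actually uses, you can query `nls_database_parameters`:

```sql
-- Character sets of the source Oracle database
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```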
If you select a migration mode that supports incremental synchronization and reverse incremental migration, and an exception occurs when OMS pulls incremental data from a standby Oracle database, you can execute the `ALTER SYSTEM SWITCH LOGFILE;` statement in the primary database to handle the exception.
In long-term synchronization between databases, OMS does not support triggers in the destination database.
When you migrate a table without a unique key from an Oracle database to a MySQL tenant of OceanBase Database, do not perform any operation on the source Oracle database that may change the ROWID, such as data import and export, `ALTER TABLE`, `FLASHBACK TABLE`, and partition splitting or compaction.
Data of the NUMERIC type cannot serve as a partitioning key in a MySQL tenant of OceanBase Database. During schema migration of a partitioned table without a primary key, a partition column of the NUMBER or INT type in the Oracle database is converted to the NUMERIC type, which results in an error, as the hypothetical example below shows.
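For example, a hypothetical Oracle table like the following triggers this error, because its NUMBER partition column maps to NUMERIC in the MySQL tenant:

```sql
-- Range-partitioned Oracle table without a primary key (illustrative)
CREATE TABLE sales_history (
  region_id NUMBER,
  amount    NUMBER(10, 2)
)
PARTITION BY RANGE (region_id) (
  PARTITION p1 VALUES LESS THAN (100),
  PARTITION p2 VALUES LESS THAN (MAXVALUE)
);
-- After type mapping, region_id becomes NUMERIC, which the MySQL tenant
-- rejects as a partitioning key, so schema migration reports an error.
```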
During schema migration, the data of the TIMESTAMP type (with a precision of 9) in the Oracle database is converted to data of the DATETIME type (with a precision of 6) in the MySQL tenant of OceanBase Database. Precision loss occurs.
During schema migration, the data of the BINARY_FLOAT type in the Oracle database is converted to data of the DOUBLE type in the MySQL tenant of OceanBase Database. Precision loss may occur during reverse incremental migration.
In a project of reverse incremental migration from an Oracle database to a MySQL tenant of OceanBase Database, when the MySQL tenant is of a version earlier than V3.2.x and has a multi-partition table with global unique indexes, if you update the value of a partitioning key of the table, data may be lost during migration.
When you migrate data from an Oracle database to a non-Oracle tenant of OceanBase Database, if the primary key or unique key (used as a verification field) contains data of the INTERVAL type, you must set the `filter.verify.inmod.tables` parameter to the `in` mode for the table. Otherwise, the verification result is inaccurate.
Time-type fields in a MySQL tenant of OceanBase Database support a feature similar to `ON UPDATE CURRENT_TIMESTAMP`, which can cause data inconsistency between the source and destination databases.
If a field in the destination database is shorter than the corresponding field in the source database, the data of this field is truncated in the destination database, causing data inconsistency between the source and destination databases.
If you change a unique index at the destination, you must restart incremental synchronization. Otherwise, the data may be inconsistent.
If forward switchover is not started in a data migration project, delete the unique indexes and pseudocolumns from the destination database. If you do not delete them, data cannot be written, and pseudocolumns are generated again when data is imported to the downstream system, causing conflicts with the pseudocolumns in the source database.
Check and modify the system configurations of the Oracle instance
Enable archivelog for the source Oracle database.

```sql
select log_mode from v$database;
```

The value of the `log_mode` field must be `ARCHIVELOG`. Otherwise, perform the following steps to change it:

1. Run the following commands to enable archivelog.

   ```sql
   shutdown immediate;
   startup mount;
   alter database archivelog;
   alter database open;
   ```

2. Run the following command to view the path and quota of archived logs. We recommend that you set the `db_recovery_file_dest_size` parameter to a relatively large value. After you enable archivelog, you need to regularly clear the archived logs by using RMAN or other methods (see the sketch after these steps).

   ```sql
   show parameter db_recovery_file_dest;
   ```

3. Change the quota of archived logs as needed.

   ```sql
   alter system set db_recovery_file_dest_size=50G scope=both;
   ```
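As noted above, archived logs must be cleared regularly once archivelog is enabled. One common approach is RMAN; the following command, run in the RMAN client (not SQL*Plus), is an illustrative sketch with a seven-day retention window:

```
DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
```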
Enable supplemental_log in the source Oracle database.
LogMiner Reader supports Oracle databases that have only table-level supplemental_log enabled. If you create new tables in the Oracle instance before the migration, you must enable supplemental_log for the primary key and unique key before performing DML operations on the tables. Otherwise, OMS returns an exception of incomplete logs.
Notice
You need to enable supplemental_log in the primary Oracle database.
LogMiner Reader uses one of the following two methods to check whether supplemental_log is enabled. If not, LogMiner Reader exits.
Method 1: Enable `supplemental_log_data_pk` and `supplemental_log_data_ui` at the database level.

Run the following command to check whether supplemental_log is enabled. If both returned values are `YES`, supplemental_log is enabled.

```sql
select supplemental_log_data_pk, supplemental_log_data_ui from v$database;
```

Otherwise, perform the following steps:

1. Execute the following statement to enable supplemental_log.

   ```sql
   alter database add supplemental log data(primary key, unique) columns;
   ```

2. Perform archivelog switchover three times. For an Oracle RAC, perform the switchover on the instances alternately.

   ```sql
   alter system switch logfile;
   ```

The reason for performing the archivelog switchover three times: when the OMS store locates the start time to pull log files, it rolls back 0 to 2 archived logs based on the specified timestamp. Therefore, after you enable supplemental_log, you need to perform the archivelog switchover three times to prevent the store from pulling logs that were generated before the specified timestamp. Otherwise, the store exits unexpectedly.

The reason for performing the archivelog switchover alternately among instances in an Oracle RAC system: if you perform the archivelog switchover multiple times on one instance, the next instance may pull logs that were generated before supplemental_log was enabled.

Method 2: Enable `supplemental_log_data_pk` and `supplemental_log_data_ui` at the table level.

1. Execute the following statement to check whether `supplemental_log_data_min` is enabled at the database level. If the returned value is `YES` or `IMPLICIT`, it is enabled.

   ```sql
   select supplemental_log_data_min from v$database;
   ```

2. Execute the following statement to check whether table-level supplemental_log is enabled for the tables to be synchronized. Each type of supplemental_log returns one row. The results must contain `ALL COLUMN LOGGING`, or both `PRIMARY KEY LOGGING` and `UNIQUE KEY LOGGING`.

   ```sql
   select log_group_type from all_log_groups where owner = 'xxx' and table_name = 'yyy';
   ```

3. If table-level supplemental_log is not enabled, execute the following statement.

   ```sql
   alter table table_name add supplemental log data(primary key, unique) columns;
   ```

4. Perform archivelog switchover three times. For an Oracle RAC, perform the switchover on the instances alternately.

   ```sql
   alter system switch logfile;
   ```
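Optionally, you can sweep a whole schema for tables that still lack the required supplemental logging. The following query is a sketch, with `APP_SCHEMA` as a placeholder; it treats ALL COLUMN LOGGING or PRIMARY KEY LOGGING as sufficient evidence and can be tightened to also require UNIQUE KEY LOGGING:

```sql
-- Tables in the schema with no qualifying supplemental log group
SELECT t.table_name
FROM   all_tables t
WHERE  t.owner = 'APP_SCHEMA'
AND NOT EXISTS (
      SELECT 1
      FROM   all_log_groups g
      WHERE  g.owner = t.owner
      AND    g.table_name = t.table_name
      AND    g.log_group_type IN ('ALL COLUMN LOGGING', 'PRIMARY KEY LOGGING')
    );
```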
(Optional) We recommend that you set the `_log_parallelism_max` parameter of the Oracle database to 1. The default value is 2.

You can use one of the following two methods to query the value of the `_log_parallelism_max` parameter.

Method 1

```sql
SELECT NAM.KSPPINM, VAL.KSPPSTVL, NAM.KSPPDESC
FROM SYS.X$KSPPI NAM, SYS.X$KSPPSV VAL
WHERE NAM.INDX = VAL.INDX
  AND NAM.KSPPINM LIKE '_%'
  AND UPPER(NAM.KSPPINM) LIKE '%LOG_PARALLEL%';
```

Method 2

```sql
select value from v$parameter where name = '_log_parallelism_max';
```

Execute one of the following statements to modify the value of the `_log_parallelism_max` parameter.

Oracle RAC

```sql
alter system set "_log_parallelism_max"=1 sid='*' scope=spfile;
```

Non-Oracle RAC

```sql
alter system set "_log_parallelism_max"=1 scope=spfile;
```

When you modify the value of the `_log_parallelism_max` parameter in Oracle Database 10g, if the error message `write to SPFILE requested but no SPFILE specified at startup` is returned, run the following commands:

```sql
create spfile from pfile;
shutdown immediate;
startup;
show parameter spfile;
```

Then restart the instance and perform the archivelog switchover three times.
Data type mappings
| Oracle Database | MySQL tenant of OceanBase Database |
|---|---|
| CHAR(n) | CHAR(n) |
| CHAR(n CHAR) | VARCHAR(n) |
| CHAR(n BYTE) | CHAR(n) |
| NCHAR(n) | VARCHAR(n) |
| VARCHAR2 | VARCHAR |
| NVARCHAR2 | VARCHAR |
| NUMBER(p, s) | DECIMAL(p, s) / NUMERIC(p, s). If (p, s) is not specified for NUMBER, the default (65, 30) applies. |
| LONG | LONGTEXT |
| RAW | VARBINARY |
| CLOB | LONGTEXT |
| NCLOB | LONGTEXT |
| BLOB | LONGBLOB |
| FLOAT | DOUBLE |
| BINARY_FLOAT | DOUBLE |
| BINARY_DOUBLE | DOUBLE/DOUBLE PRECISION |
| DATE | DATETIME |
| TIMESTAMP(n) | DATETIME(n) |
| TIMESTAMP WITH TIME ZONE | VARCHAR(50) |
| TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP |
| INTERVAL YEAR(p) TO MONTH | VARCHAR(50) |
| INTERVAL DAY(p) TO SECOND | VARCHAR(50) |
| BFILE | BLOB |
| LONG RAW | LONGBLOB |
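To make the mappings concrete, here is a hypothetical source table and the shape its destination table takes after schema migration, following the table above (a sketch, not actual OMS output):

```sql
-- Oracle source table (illustrative)
CREATE TABLE orders (
  id      NUMBER(10, 0) PRIMARY KEY,
  note    VARCHAR2(200),
  amount  BINARY_DOUBLE,
  created TIMESTAMP(6)
);

-- Equivalent table in the MySQL tenant of OceanBase Database
CREATE TABLE orders (
  id      DECIMAL(10, 0) PRIMARY KEY,
  note    VARCHAR(200),
  amount  DOUBLE,
  created DATETIME(6)
);
```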
Create a data migration project
Create a migration project.
Log on to the OMS console.
In the left-side navigation pane, click Data Migration.
On the Data Migration page, click Create Migration Project in the upper-right corner.
On the Select Source and Destination page, configure the parameters.
| Parameter | Description |
|---|---|
| Migration Project Name | We recommend that you set it to a combination of Chinese characters, digits, and letters. It must not contain any spaces and cannot exceed 64 characters in length. |
| Label | Click the field and select the target tag from the drop-down list. You can click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data migration projects. |
| Source | If you have created an Oracle data source, select it from the drop-down list. If you have not, click Create Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create an Oracle data source. |
| Destination | If you have created a data source for the MySQL tenant of OceanBase Database, select it from the drop-down list. If you have not, click Create Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create OceanBase Database physical tables as a data source. |

Click Next. On the Select Migration Type page, specify the following parameters.
Options available for Migration Type include Schema Migration, Full Migration, Incremental Synchronization, Full Verification, and Reverse Incremental Migration.

| Migration type | Limits |
|---|---|
| Full Migration | If you select Full Migration, we recommend that you use the `GATHER_SCHEMA_STATS` or `GATHER_TABLE_STATS` procedure to collect the statistics of the Oracle database before data migration (see the sketch after this table). |
| Incremental Synchronization | Incremental Synchronization supports the `Insert`, `Delete`, and `Update` DML operations. You can select operations based on your business needs. If all columns in a table of the Oracle database to be migrated are of LOB types (BLOB, CLOB, or NCLOB), Incremental Synchronization is not supported. |
| Full Verification | If you select Full Verification, we recommend that you collect the statistics of the Oracle database and the MySQL tenant of OceanBase Database before full verification. If you have selected Incremental Synchronization but did not select all DML statements in DML Synchronization, OMS does not support full data verification in this scenario. |
| Reverse Incremental Migration | You cannot select Reverse Incremental Migration in the following cases: multi-table aggregation and synchronization are enabled, or multiple schemas are configured in a rule to match one type of objects. |
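For the Full Migration recommendation above, statistics can be gathered with the `DBMS_STATS` package. A minimal sketch, with `APP_SCHEMA` as a placeholder schema name:

```sql
-- Gather optimizer statistics for the whole source schema before full migration
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_SCHEMA');
END;
/
```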
Click Next. On the Select Migration Objects page, select the migration objects and migration scope.
You can select one of the following two modes to migrate objects: Specify Objects or Match Rules.
Select Specify Objects. Then select the objects to be migrated on the left and click > to add them to the list on the right. You can select tables and views of one or more databases as the migration objects.
Notice:
The name of a table to be migrated and the names of columns in the table must not contain Chinese characters.
If the database or table name contains a double dollar sign ($$), you cannot create the migration project.
When you migrate data from an Oracle database to a MySQL tenant of OceanBase Database, OMS allows you to import objects through text, rename object names, set row filters, view column information, and remove one or all objects to be migrated.
| Operation | Steps |
|---|---|
| Import Objects | 1. In the list on the right of the Specify Migration Scope section, click Import Objects in the upper-right corner. 2. In the dialog box that appears, click OK. **Notice**: This operation overwrites previous selections. Proceed with caution. 3. In the Import Objects dialog box, import the objects to be migrated. You can import CSV files to rename databases or tables and set row filtering conditions. For more information, see Download and import the settings of migration objects. 4. Click Validate. 5. After the validation succeeds, click OK. |
| Rename | 1. In the list on the right of the Specify Migration Scope section, hover the pointer over the target object. 2. Click Rename. 3. Enter a new name and click OK. |
| Settings | OMS allows you to set `WHERE` conditions to filter data by row and to view column information. 1. In the list on the right of the Specify Migration Scope section, hover the pointer over the target object. 2. Click Settings. 3. In the Settings dialog box, specify a standard SQL `WHERE` clause to filter data by row (see the example after this table). The setting takes effect for both full migration and incremental synchronization. **Notice**: Enclose column names in backticks, for example `` `col` ``. Only data that meets the `WHERE` condition is synchronized to the destination data source. If row-based filtering with a `WHERE` clause is enabled, right-trim is forcibly performed on data of the CHAR or VARCHAR type, which may cause inaccurate comparison of VARCHAR data. Proceed with caution. 4. Click OK. You can also view column information of the migration object in the View Columns section. |
| Remove/Remove All | OMS allows you to remove one or all objects from the destination database during data mapping. To remove a single migration object, hover the pointer over the target object in the list on the right of the Specify Migration Scope section and click Remove. To remove all migration objects, click Remove All in the upper-right corner of the list and, in the dialog box that appears, click OK. |
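As an example of the Settings operation above, a row filter entered as a standard SQL `WHERE` clause might look like the following. The column names and values are illustrative; note the backtick-escaped column names:

```sql
`gmt_create` >= '2023-01-01 00:00:00' AND `status` = 'ACTIVE'
```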
When the source database is an Oracle database, if row filtering is enabled for columns other than the primary key and unique key columns, enable supplemental_log for the corresponding columns or all columns.
Statement for enabling supplemental_log for the corresponding columns:
```sql
ALTER TABLE table_name ADD SUPPLEMENTAL LOG GROUP log_group_name (column1, column2, column3) ALWAYS;
```

Statements for enabling supplemental_log for all columns:

```sql
-- Enable database-level supplemental_log:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Enable table-level supplemental_log:
ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

Select Match Rules. For more information, see Configure matching rules for migration objects.
Click Next. On the Migration Options page, configure the parameters.
| Parameter | Description |
|---|---|
| Concurrency for Full Migration | The value can be Smooth, Normal, or Fast. The amount of resources consumed by a full data migration task varies with the migration performance. You can also modify the configurations of the checker component to customize the concurrency. **Notice**: To enable this feature, select Full Migration on the Select Migration Type page. |
| Full Verification Concurrency | The value can be Smooth, Normal, or Fast. Different quantities of resources of the source and destination databases are consumed at different concurrencies. You can also modify the configurations of the checker component to customize the concurrency. |
| Incremental Record Retention Time | The duration for which incremental parsed files are cached in OMS. A longer retention period means more disk space occupied by the store component of OMS. |
| Whether to Allow Destination Table to Be Not Empty During Full Migration | If destination tables are allowed to be not empty during full migration, full verification is performed in `IN` mode. **Notice**: To enable this feature, select Full Migration on the Select Migration Type page. |
| Whether to Allow Post-indexing | Specifies whether to create indexes after full migration is completed. Post-indexing can shorten the time of full migration. **Notice**: To enable this feature, select both Schema Migration and Full Migration on the Select Migration Type page. Only non-unique key indexes can be created after the migration is completed. |
Click Precheck to start a precheck on the data migration project.
During the precheck, OMS checks the read and write privileges of the database users and the network connections of the databases. A data migration project can be started only after it passes all check items.
If an error is returned during the precheck, you can troubleshoot and fix the error, and then run the precheck again.
You can also click Skip in the Actions column of the precheck item that returns the error. A dialog box then appears, describing the impact of skipping this check item. If you want to continue, click OK in the dialog box.
Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data migration project. You can start the project later as needed.
OMS allows you to modify migration objects while a data migration project is running. For more information, see View and modify migration objects. After a data migration project is started, it is executed based on the selected migration types. For more information, see the "View migration details" section in View details of a data migration project.