This topic describes how to use OceanBase Migration Service (OMS) to migrate data from a DB2 LUW database to an Oracle-compatible OceanBase database, including physical data sources and public cloud data sources.
Prerequisites
You have created the corresponding schema in the target OceanBase database in Oracle compatible mode.
You must create the schema in advance. OMS migrates the tables and views to be migrated into the schema that you created.
You have created a database user for data migration tasks in the source DB2 LUW database and the target OceanBase database in Oracle compatible mode, and granted the corresponding permissions to the user. For more information, see Create a database user.
The Archive Log feature is enabled for the DB2 LUW database.
If the Archive Log feature is not enabled, perform the following steps:
1. Connect to the database.

   ```shell
   db2 connect to ${db_name}
   ```

2. Change the directory for the archive log.

   ```shell
   db2 update db cfg for ${db_name} using LOGARCHMETH1 logpath(${your_logpath})
   ```

3. Back up the database.

   ```shell
   db2 backup database ${db_name} to dbbackuppath(${your_logpath})
   ```

4. Stop the database.

   ```shell
   db2stop
   ```

5. Start the database.

   ```shell
   db2start
   ```

6. Connect to the database again.

   ```shell
   db2 connect to ${db_name}
   ```

7. Manually archive the logs.

   ```shell
   db2 archive log for db ${db_name}
   ```

8. View the archive log configuration.

   ```shell
   db2 get db cfg | grep LOG
   ```
The Data Changes feature is enabled for the tables in the DB2 LUW database.
If the Data Changes feature is not enabled, execute the following statement to enable it:

```sql
alter table ${table_name} data capture changes
```

The `log_ddl_stmt` feature is enabled for the DB2 LUW database.

If the `log_ddl_stmt` feature is not enabled, execute the following statement to enable it:

```shell
db2 update db cfg using LOG_DDL_STMTS YES
```
Limitations
Limitations on operations of the source database
Do not perform DDL operations for changing the schema or table structure during schema migration or full migration. Otherwise, the data migration task may be interrupted.
OMS supports DB2 LUW databases of versions V9.7, V10.1, V10.5, V11.1, and V11.5 running on Linux or AIX operating systems.
When a DB2 LUW database is used as the source database, you cannot synchronize DDL operations.
On ARM architecture, incremental synchronization from a DB2 LUW database to an OceanBase database in Oracle compatible mode is not supported.
DB2 LUW databases support only objects whose names consist of letters, digits, and underscores and start with a letter or an underscore. Object names cannot be keywords of the DB2 LUW database.
During migration from a DB2 LUW database to an OceanBase database in Oracle compatible mode, full migration and incremental synchronization support only tables with unique constraints in the DB2 LUW database.
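To check this limitation in advance, a catalog query such as the following sketch can list tables in a schema that have neither a primary key nor a unique constraint. `MYSCHEMA` is a placeholder; the query uses the standard `SYSCAT` catalog views of DB2 LUW.

```sql
-- Sketch: find regular tables ('T') without a primary key ('P') or unique ('U') constraint.
SELECT t.tabschema, t.tabname
FROM syscat.tables t
WHERE t.tabschema = 'MYSCHEMA'
  AND t.type = 'T'
  AND NOT EXISTS (
    SELECT 1
    FROM syscat.tabconst c
    WHERE c.tabschema = t.tabschema
      AND c.tabname = t.tabname
      AND c.type IN ('P', 'U')
  );
```

Tables returned by this query are skipped by full migration and incremental synchronization.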
OMS does not support triggers on the target database. If triggers exist, data migration may fail.
Unique constraints that allow null values are not supported, because they may cause data inconsistency. In OceanBase Database, multiple null values can exist in the columns of a unique constraint, because null != null. In a DB2 LUW database, a unique constraint requires that the constrained columns be NOT NULL, while a unique index allows null values but treats them as equal (null = null).
For example, with the unique index unique (c1, c2), OceanBase Database allows the row (null, null) to be inserted multiple times, whereas in a DB2 LUW database a unique constraint does not allow null values at all, and a unique index allows (null, null) to be inserted only once.
Therefore, null values in unique indexes are incompatible between the two databases. Do not use unique constraints that allow null values; otherwise, errors may occur during schema migration. In addition, incremental synchronization adds the NOT NULL constraint to the constrained columns, so writing null data causes an error.
Additionally, during DDL synchronization, if a unique index is created in OceanBase Database, ensure that all constrained columns are NOT NULL. Otherwise, an error occurs in the DB2 LUW database.
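The difference can be sketched with a hypothetical two-column table (`t1`, `c1`, and `c2` are placeholder names):

```sql
-- Hypothetical table used only to illustrate the NULL-handling difference.
CREATE TABLE t1 (c1 INT, c2 INT);
CREATE UNIQUE INDEX uk_t1 ON t1 (c1, c2);

-- OceanBase Database in Oracle compatible mode: NULL != NULL,
-- so both inserts succeed and two (NULL, NULL) rows coexist.
INSERT INTO t1 VALUES (NULL, NULL);
INSERT INTO t1 VALUES (NULL, NULL);

-- DB2 LUW: the unique index treats NULLs as equal (NULL = NULL),
-- so the second insert fails with a duplicate-key error.
```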
The user who parses the DB2 LUW database logs must have the `sysadm` privilege on the corresponding schema. Otherwise, the user cannot obtain logs.
Data source identifiers and user accounts are globally unique in the OMS system.
Considerations
If the source character set is UTF-8, we recommend that you use a character set that is compatible with the source character set (such as UTF-8 or UTF-16) on the destination. Otherwise, garbled characters may appear on the destination.
When you update LOB type data in a DB2 LUW database, a large number of log-level row migrations occur. If an unknown combination of row migrations causes the Store to exit abnormally, retain the logs and provide them to technical support.
Do not use the UPDATE statement to change the primary key. Otherwise, data inconsistency may occur during row migration.
Currently, log pulling is mainly tested against the uncompressed log format in DB2 LUW databases. The stability of pulling logs in the compressed format has not been verified. We recommend that you use the compressed log format with caution.
Retain the logs of the DB2 LUW database and the OceanBase database for at least 3 days to prevent data loss caused by exceptions during log pulling.
If the clocks on the nodes are not synchronized or the clocks on the client and server are not synchronized, the delay time (incremental synchronization or reverse incremental) may be inaccurate.
For example, if the clock is earlier than the standard time, the delay time may be negative. If the clock is later than the standard time, the delay time may be positive.
If a table field in the Oracle compatible mode of the destination OceanBase database has the NOT NULL constraint, empty strings generated by the source DB2 LUW database cannot be written to the destination.
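This behavior stems from Oracle compatible mode treating an empty string as NULL. A minimal illustration (hypothetical table name):

```sql
-- In OceanBase Database in Oracle compatible mode, '' is equivalent to NULL,
-- so a source row containing an empty string violates a NOT NULL constraint.
CREATE TABLE t2 (c1 VARCHAR2(10) NOT NULL);
INSERT INTO t2 VALUES ('');  -- fails: inserting '' is inserting NULL
```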
In reverse incremental synchronization from a DB2 LUW database to an OceanBase database in the Oracle compatible mode, if the OceanBase database is of a version earlier than 3.2.x and contains a global unique index, updating the value of the partitioning key of the table may cause data loss during data migration.
If the synchronized DDL statement is `RENAME` and the source table or destination table is not in the synchronization list, the `RENAME` statement is ignored. After you execute such a DDL statement, restart full verification. If the new table created by the `RENAME` statement is not synchronized to the destination, an error is reported during full verification.
If you change the unique index on the destination without enabling DDL synchronization, you must restart incremental synchronization. Otherwise, data inconsistency may occur.
In the scenario where data is migrated from a source database to a destination database:
We recommend that you map the relationships between the source and destination by using matching rules.
We recommend that you create the table structure on the destination yourself. If you use OMS to create the table structure, skip the objects that fail in the schema migration step.
If you configure only Incremental Synchronization when you create a data migration task, OMS requires that the archived logs of the source database be retained for more than 48 hours.
If you configure Full Migration + Incremental Synchronization when you create a data migration task, OMS requires that the archived logs of the source database be retained for at least 7 days. Otherwise, the data migration task may fail because it cannot obtain incremental logs. In addition, the data on the source and destination may be inconsistent.
If a table object exists on the source or destination with only the case different, the data migration result may be inconsistent with the expected result because the source or destination is case-insensitive.
Data type mapping
Migration conversion rules
| DB2 LUW database | OceanBase Database in Oracle compatible mode |
|---|---|
| TIME | DATE Warning: If the default value is incompatible, please modify it manually. |
| TIMESTAMP(n) | TIMESTAMP(n>0) |
| DATE | DATE |
| CHAR(n) FOR BIT DATA | RAW(n<=255) |
| VARCHAR(n) FOR BIT DATA | RAW(n<=2000) or BLOB |
| NCHAR(m) | NCHAR(m) |
| NVARCHAR(m) | NVARCHAR2(m) |
| CLOB | CLOB |
| NCLOB | CLOB |
| GRAPHIC(n) | NCHAR(n) |
| VARGRAPHIC(n) | NVARCHAR2(n) |
| LONG VARGRAPHIC | CLOB |
| LONG VARCHAR | VARCHAR2(m BYTE) |
| DBCLOB | CLOB |
| BINARY(m < 256) | RAW |
| VARBINARY(m < 32672) | BLOB |
| BLOB | BLOB |
| BOOLEAN | NUMBER(1) |
| SMALLINT | NUMBER(6, 0) |
| INTEGER | NUMBER(11,0) |
| BIGINT | NUMBER(19, 0) |
| DECIMAL(p,s) | NUMBER(p,s) |
| NUMERIC(p,s) | NUMBER(p,s) |
| DECFLOAT(16|34) | FLOAT(53|113) |
| REAL | BINARY_FLOAT |
| DOUBLE | BINARY_DOUBLE |
| XML | -- |
Notice
In OceanBase Database in Oracle compatible mode, the CHAR and VARCHAR2 types can usually store multi-byte encoded data. Therefore, during reverse conversion, using single-byte encoded units directly may result in insufficient length issues.
In DB2 LUW databases, data storage must consider not only the type length but also the OCTETS, CODEUNITS16, and CODEUNITS32 encoding units.
Only DB2 LUW databases of version 10.5 and later support the OCTETS and CODEUNITS32 encoding units.
If the target is OceanBase Database in Oracle compatible mode of a version earlier than V4.2.0, the CLOB and BLOB data must be less than 48 MB.
If the target is OceanBase Database in Oracle compatible mode of V4.2.0 or later, the CLOB and BLOB data can be up to 512 MB.
Migration of data of the LONG, ROWID, BFILE, LONG RAW, XMLType, and UDT types is not supported.
Tables with FLOAT, DOUBLE, and REAL types as primary keys may have inconsistent full data.
Limitations
The maximum precision of TIMESTAMP in DB2 LUW databases is 12, while that in OceanBase Database in Oracle compatible mode is 9. Therefore, fractional seconds beyond nine digits are truncated during migration. Data types subject to such truncation cannot be used as primary keys or unique keys.
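For example (hypothetical table `ts_demo`), a DB2 LUW column declared with maximum precision loses its last three fractional digits on the destination:

```sql
-- Source (DB2 LUW): fractional seconds up to 12 digits.
CREATE TABLE ts_demo (id INT NOT NULL PRIMARY KEY, ts TIMESTAMP(12));

-- Destination (OceanBase Database in Oracle compatible mode): precision is capped at 9,
-- so a value such as 10:00:00.123456789123 is stored as 10:00:00.123456789.
CREATE TABLE ts_demo (id NUMBER(11,0) PRIMARY KEY, ts TIMESTAMP(9));
```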
Length limitations
The maximum length of the CHAR and BINARY types in DB2 LUW databases is 255 bytes. If the data written to OceanBase Database in Oracle compatible mode exceeds 255 bytes, reverse synchronization fails and the data migration task is interrupted.
The maximum length of the VARCHAR and VARBINARY types in DB2 LUW databases is 32 KB. If the data written to OceanBase Database in Oracle compatible mode exceeds 32 KB, the data migration task fails.
In the DECIMAL(dp, ds) type in DB2 LUW databases, dp cannot exceed 31, and ds must be less than or equal to dp. Therefore, the corresponding type in OceanBase Database in Oracle compatible mode is the NUMBER type.
The maximum storage size of a number in OceanBase Database in Oracle compatible mode is limited. The default length of the NUMBER, INT, SMALLINT, and NUMBER(*, s) types in OceanBase Database in Oracle compatible mode is 38. Therefore, you must explicitly define the NUMBER(p,s) type and set its length to a value that is compatible with both the business requirements and the source and destination databases.
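For example, instead of relying on the default precision on the destination, the precision and scale can be declared explicitly. The table and column names below are hypothetical:

```sql
-- Destination DDL sketch: explicit NUMBER(p,s) that matches the DB2 LUW source types.
CREATE TABLE orders (
  order_id NUMBER(19, 0) PRIMARY KEY,  -- maps from DB2 LUW BIGINT
  amount   NUMBER(31, 2)               -- maps from DB2 LUW DECIMAL(31,2)
);
```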
Data type limitations
If you convert a data type in DB2 LUW databases to the LOB type in OceanBase Database in Oracle compatible mode, the data stored in the LOB type cannot exceed 48 MB.
The TIME type in DB2 LUW databases cannot be used as a partitioning key for migration.
XML data types are not supported.
We recommend that you do not define columns with the CODEUNITS16 or CODEUNITS32 encoding units, or use multibyte storage types such as NCHAR or GRAPHIC.
You cannot modify the default value of the BLOB data type.
Procedure
Create a data migration task.

Log in to the OMS console.
In the left-side navigation pane, click Data Migration.
On the Data Migration page, click Create Task at the top right.
On the Select Source and Target page, configure parameters.
| Parameter | Description |
|---|---|
| Migration Task Name | You can use a combination of Chinese characters, digits, and letters. The name cannot contain spaces and can be up to 64 characters in length. |
| Source | If you have created a DB2 LUW data source, select it from the drop-down list. If you have not created one, click **New Data Source** in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create a DB2 LUW data source. **Note**: The columns that represent unique keys in a DB2 LUW database must be non-null. |
| Target | If you have created an Oracle-compatible OceanBase data source (physical or public cloud), select it from the drop-down list. If you have not, click **New Data Source** in the drop-down list and create one in the dialog box that appears. For more information, see Create an OceanBase physical data source or Create an OceanBase public cloud data source. |
| Label (optional) | Click the text box and select a target label from the drop-down list. You can also click **Manage Tags** to create, edit, or delete tags. For more information, see Manage data migration tasks by using labels. |
- After you click Next, click Noted in the message that appears.
Note: This task applies to tables and views with a primary key or a non-null unique index, and automatically filters out others.

- On the Select Migration Type page, configure the parameters.

Migration Type includes **Schema Migration**, **Full Migration**, **Incremental Synchronization**, **Full Verification**, and **Reverse Increment**.

| Migration type | Description |
|------|----------|
| Schema migration | After the schema migration task starts, OMS migrates schema objects from the source database to the target database, including tables, indexes, constraints, comments, and views, and automatically filters out temporary tables. |
| Full migration | After the full migration task starts, OMS migrates the existing data of the source tables to the corresponding tables in the target database. If you select **Full Migration**, we recommend that you use the `RUNSTATS` statement to collect statistics of the DB2 LUW database before data migration. |
| Incremental synchronization | After the incremental synchronization task starts, OMS synchronizes changed data (data that is added, modified, or deleted) of the source database to the corresponding tables in the destination database. **Incremental Synchronization** includes DML synchronization and DDL synchronization, which you can customize as needed. For more information, see Customize DDL/DML settings. **Incremental Synchronization** has the following limitations: <ul><li>Incremental synchronization is not supported if OMS is deployed on the ARM architecture.</li><li>If you select DDL synchronization, data migration may be interrupted when DDL operations not supported by OMS are performed on the source database.</li><li>If a DDL operation adds a column, we recommend that you set the column to be nullable to prevent data migration from being interrupted.</li></ul> |
| Full verification | We recommend that you collect the statistics of the DB2 LUW database and the OceanBase database in Oracle compatible mode before you start a full verification task. For more information, see Manually collect statistics. If you have selected **Incremental Synchronization** but not all DML operations, OMS does not support full verification in this scenario. |
| Reverse increment | After the reverse incremental task starts, incremental data generated in the destination database is synchronized to the source database in real time. Typically, reverse increment reuses the incremental synchronization configuration, but you can also customize the configuration as needed. **Reverse Increment** cannot be selected in the following cases: <ul><li>Multiple tables are merged into one table.</li><li>The schema has a one-to-many mapping.</li></ul> |
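The `RUNSTATS` recommendation for full migration can be run per table as in the following sketch. `MYSCHEMA.MYTABLE` is a placeholder:

```sql
-- Collect table, distribution, and index statistics in DB2 LUW before full migration.
RUNSTATS ON TABLE MYSCHEMA.MYTABLE WITH DISTRIBUTION AND INDEXES ALL;
```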
- (Optional) Click Next.
If you select Reverse Increment, but the corresponding parameters for the target OceanBase database in Oracle compatible mode are not configured, the Add Data Source Information dialog box pops up, prompting you to configure the parameters. For more information, see Create an OceanBase physical data source or Create an OceanBase public cloud data source.
Click Test Connection. After the test connection succeeds, click OK.
- Click Next, and then, on the Select Migration Objects page, select migration objects and migration scope.
You can select a migration object through the Specify Objects and Match by Rule tabs. This topic describes how to select a migration object through the Specify Objects tab. For more information about how to configure the matching rules, see Configure matching rules.
Caution

- The names of the tables to be migrated and their columns cannot contain Chinese characters.
- Data migration tasks cannot be created if a database or table name contains the `$$` symbol.
- When you select **DDL synchronization** in the **Select Migration Type** step, we recommend that you select migration objects based on matching rules so that all new objects matching the rules are synchronized. If you specify objects individually, new or renamed objects are not synchronized.
- OMS automatically filters out unsupported tables. For more information about how to query table objects, see [SQL for Querying Table Objects](../1200.reference-guide/500.select-sql.md).
In the Select Migration Objects section, select Specify Objects.
In the Specify Migration Scope section, select the objects you want to migrate from the Source Object(s) list. You can select tables and views from one or more databases as migration objects.
Click **>** to add the objects to the Target Object(s) list.
You can use the OMS text import feature to import objects, rename objects, set row filters, view column information, and remove single or all migration objects.
Note

When you use **matching rules** to select migration objects, renaming is covered by the matching rule syntax, and you can only set filter conditions. For details, see [Configure Matching Rules](../1200.reference-guide/90.function-introduction/600.configure-matching-rules-for-migration-objects.md).

| Action | Procedure |
|---|---|
| Import objects | <ol><li>In the right-side list of the **Specify Migration Scope** section, click **Import Objects** in the upper-right corner.</li><li>In the dialog box that appears, click **OK**. <b>Note</b>: The imported objects overwrite your previous selections. Proceed with caution.</li><li>In the **Import Migration Objects** dialog box, import the objects to migrate. You can rename databases and tables and set row filter conditions by importing a CSV file. For more information, see Download and import migration object settings.</li><li>Click **Validate**.</li><li>After the validation passes, click **OK**.</li></ol> |
| Rename | OMS allows you to rename migration objects. For more information, see Rename database tables. |
| Set | OMS supports `WHERE` conditions for row-level filtering. For more information, see SQL condition-based data filtering. For information about the columns of a migration object, see the View Columns section. |
| Remove/Remove All | OMS allows you to remove one or more migration objects that are temporarily selected during data mapping. <ul><li>Remove one migration object: hover over the target object in the right-side list of the **Specify Migration Scope** section, and click **Remove**.</li><li>Remove all migration objects: click **Remove All** in the upper-right corner of the right-side list. In the dialog box that appears, click **OK**.</li></ul> |
7. Click **Next**. On the **Migration Options** page, configure the parameters.

   * **Full migration**: On the **Select Migration Type** page, select **Full Migration**. The following parameters are displayed:

     

     | Parameter | Description |
     |----|----------------------------------|
     | Full Migration Rate Limit | You can choose whether to enable the full migration rate limit based on your business requirements. If you enable it, set the RPS (the maximum number of rows that can be migrated to the destination per second during full migration) and the BPS (the maximum amount of data that can be migrated to the destination per second during full migration). <main id="notice" type='explain'><h4>Note</h4><p>The RPS and BPS values set here are only upper limits for throttling. The actual full migration performance is affected by factors such as the source, the destination, and instance specifications.</p></main> |
     | Full Migration Resource Allocation | You can choose the default read concurrency, write concurrency, and memory size, or customize the resource configuration for full migration. The resource configuration of the full data import component Full-Import limits resource usage during the full migration stage. <main id="notice" type='notice'><h4>Notice</h4><p>When you customize the configuration, the minimum value is 1, and only integers are supported.</p></main> |
     | Target Table Strategy | The options are **Ignore** and **Stop Migration**: <ul><li>**Ignore**: When the data in the target table conflicts with the data to be written, OMS retains the original data in the target table and ignores the conflicting data. <main id="notice" type='notice'><h4>Notice</h4><p>If you select <b>Ignore</b>, full verification pulls data from the target table in IN mode, cannot detect data that exists in the target table but not in the source table, and its performance degrades significantly.</p></main></li><li>**Stop Migration**: If the target table contains data, OMS reports the error "Migration is not allowed, because the target table contains data." You must process the data in the target table before the migration can proceed. <main id="notice" type='notice'><h4>Notice</h4><p>If you click <b>Resume</b> after the error occurs, OMS ignores this strategy and continues the migration. Exercise caution.</p></main></li></ul> |
     | Allow Index Creation After Full Migration | You can choose whether to create indexes after full migration is completed. Creating indexes after full migration shortens the time required for full migration. For the usage limits of this feature, see the following notice. <main id="notice" type='notice'><h4>Notice</h4><ul><li>This option is available only when both <b>Schema Migration</b> and <b>Full Migration</b> are selected on the <b>Select Migration Type</b> page.</li><li>Only non-unique indexes can be created after full migration.</li><li>Creating indexes after full migration is not supported in OceanBase Database V1.x.</li></ul></main> |

     If you allow index creation after full migration, we recommend that you adjust the following parameters based on the hardware conditions of OceanBase Database and the current business traffic.

     * If you use OceanBase Database V4.x, adjust the following sys tenant and business tenant parameters by using a command-line tool.

       Adjust the parameters of the sys tenant:

       ```sql
       // parallel_servers_target sets the queuing condition for parallel queries on each server.
       // For better performance, we recommend that you set it to 1.5 times the number of physical CPU cores,
       // but no more than 64, to avoid lock contention within OceanBase Database.
       set global parallel_servers_target = 64;
       ```

       Adjust the parameters of the business tenant:

       ```sql
       // Adjust the size of the temporary file I/O area for the business tenant.
       alter system set _temporary_file_io_area_size = '10' tenant = 'xxx';
       // In V4.x, disable throttling.
       alter system set sys_bkgd_net_percentage = 100;
       ```

     * If you use OceanBase Database V2.x or V3.x, adjust the following sys tenant parameters by using a command-line tool.

       ```sql
       // parallel_servers_target sets the queuing condition for parallel queries on each server.
       // To improve performance, we recommend that you set it to 1.5 times the number of physical CPU cores,
       // but no more than 64, to avoid lock contention within OceanBase Database.
       set global parallel_servers_target = 64;
       // data_copy_concurrency specifies the maximum number of concurrent data migration and replication tasks allowed in the system.
       alter system set data_copy_concurrency = 200;
       ```

   * **Incremental synchronization**: On the **Select Migration Type** page, select **Incremental Synchronization**. The following parameters are displayed:

     

     | Parameter | Description |
     |----|----------------------------------|
     | Incremental Synchronization Rate Limit | You can choose whether to enable the incremental synchronization rate limit based on your business requirements. If you enable it, set the RPS (the maximum number of rows that can be synchronized to the destination per second during incremental synchronization) and the BPS (the maximum amount of data that can be synchronized to the destination per second during incremental synchronization). <main id="notice" type='explain'><h4>Note</h4><p>The RPS and BPS values set here are only upper limits for throttling. The actual incremental synchronization performance is affected by factors such as the source, the destination, and instance specifications.</p></main> |
     | Resource Allocation for Incremental Log Pulling | You can choose the **Small**, **Medium**, or **Large** default memory size, or customize the resource configuration for incremental log pulling. The resource configuration of the Store component limits the resources used to pull logs during the incremental synchronization stage. <main id="notice" type='notice'><h4>Notice</h4><p>When you customize the configuration, the minimum value is 1, and only integers are supported.</p></main> |
     | Resource Allocation for Incremental Data Writing | You can choose the **Small**, **Medium**, or **Large** default write concurrency and memory size, or customize the resource configuration for incremental data writing. The resource configuration of the incremental synchronization component Incr-Sync limits the resources used to write data during the incremental synchronization stage. <main id="notice" type='notice'><h4>Notice</h4><p>When you customize the configuration, the minimum value is 1, and only integers are supported.</p></main> |
     | Incremental Record Retention Duration | The length of time that OMS retains cached incremental parsing files. The longer the retention period, the more disk space the Store component consumes. |
     | Incremental Synchronization Start Time | <ul><li>If you selected **Full Migration** as a migration type, this parameter is not displayed.</li><li>If you selected **Incremental Synchronization** but not **Full Migration**, specify the time after which data changes are synchronized. The default value is the current system time. For more information, see [Incremental synchronization start time](../1200.reference-guide/90.function-introduction/500.incremental-synchronization-timestamp.md).</li></ul> |

   * **Reverse increment**: On the **Select Migration Type** page, select **Reverse Increment** before the corresponding parameters are displayed. The configuration parameters of reverse increment are the same as those of incremental synchronization, and you can select **Reuse Incremental Synchronization Configuration**.

   * **Full verification**: On the **Select Migration Type** page, select **Full Verification**. The following parameter is displayed:

     

     | Parameter | Description |
     |----|----------------------------------|
     | Full Verification Resource Allocation | You can choose the **Small**, **Medium**, or **Large** default read concurrency and memory size, or customize the resource configuration for full verification. The resource configuration of the Full-Verification component limits resource usage during the full verification stage. <main id="notice" type='notice'><h4>Notice</h4><p>When you customize the configuration, the minimum value is 1, and only integers are supported.</p></main> |

   * **Advanced options**
The parameter is displayed in the page only when the Oracle compatibility mode of the target OceanBase Database is V4.3.0 or later, and either Schema Migration or Incremental Synchronization > DDL synchronization is selected on the Select Migration Type page.

The target table storage types include Default, Row Storage, Column Storage, and Hybrid Row-Column Storage. This setting specifies the storage type of the target table object during schema migration or incremental synchronization. For more information, see default_table_store_format.
Note

The **Default** option adapts to the parameter settings of the destination: for tables created by schema migration or by incrementally synchronized DDL operations, the schema is written to the destination in an adaptive manner.
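Assuming the destination is OceanBase Database V4.3.0 or later, the tenant-level parameter that the Default option adapts to can be inspected and changed as in this sketch (the value names are assumptions based on the columnar storage feature):

```sql
-- Check the current default storage format for new tables in the tenant.
SHOW PARAMETERS LIKE 'default_table_store_format';
-- Switch the default to columnar storage (assumed valid values: 'row', 'column', 'compound').
ALTER SYSTEM SET default_table_store_format = 'column';
```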