| Parameter | Required | Default value | Description |
|---|---|---|---|
| datasource.binarypk.permit | No | true | Specifies whether the primary key can be a binary value. The value true indicates that the primary key can be a binary value. The value false indicates that the primary key cannot be a binary value. |
| datasource.char.trim | No | false | Specifies whether to delete trailing spaces of Oracle char data. The value true indicates to delete the trailing spaces. The value false indicates not to delete the trailing spaces. |
| datasource.image.address | Yes | None | The address of the destination database. The address format varies with the database type. |
| datasource.image.charset.map | No | {"gb18030":"gbk","gbk":"gbk","utf16":"utf16","default":"utf8"} | The character set mapping for the destination OceanBase database. |
| datasource.image.index.ignore | No | false | Specifies whether to directly pull the index with the lowest score in the source table. If you set this parameter to true, the index with the lowest score in the source table will be directly pulled if no new index can be matched. In this case, duplicate data may exist if the destination table has no primary key or unique key. |
| datasource.image.insert.error.ignore | No | false | Specifies whether to ignore data insertion errors to avoid interruption. You can handle all the errors after the task is completed. The error information is recorded in the insertErrorIgnoreerror.log file. This log file is stored in the logs directory. |
| datasource.image.password | Yes | None | The password for accessing the destination database. |
| datasource.image.table.notexists.ignore | No | false | Specifies whether to ignore error messages indicating that the destination table is not found. The value "false" indicates not to ignore such error messages. The value "true" indicates to ignore such error messages. |
| datasource.image.table.empty.check | No | true | Specifies whether to check whether the destination table is empty. This parameter applies only to data migration scenarios. |
| datasource.image.type | Yes | None | The type of the destination database. Valid values: OB_IN_ORACLE_MODE and OB10. OB_IN_ORACLE_MODE represents OceanBase Database in Oracle mode, and OB10 represents OceanBase Database in MySQL mode. |
| datasource.image.username | Yes | None | The username for accessing the destination database. |
| datasource.master.address | Yes | None | The address of the source database. The address format varies with the data sources. |
| datasource.master.password | Yes | None | The password for accessing the source database. |
| datasource.master.systenant.password | No | Same as that of datasource.master.password | The password for accessing the SYS tenant. |
| datasource.master.systenant.username | No | Same as that of datasource.master.username | The username for accessing the SYS tenant, for example, root@sys#ob_1008810671.admin. |
| datasource.master.type | Yes | None | The type of the source database. Valid values: ORACLE, MYSQL, DB2, SYBASE, OB_IN_ORACLE_MODE, OB10, and OB05. |
| datasource.master.username | Yes | None | The username for accessing the source database. |
| datasource.nchar.charset.map | No | {"AL16UTF16":"UTF16"} | The character set mapped from NLS_NCHAR_CHARACTERSET in Oracle database to Java. |
| datasource.ob.splitor.bymarcroinfo | No | false | The splitting strategy used by OceanBase Migration Service (OMS). This parameter is required when the source database is an OceanBase database that contains tables with primary keys. |
| datasource.read.mod | No | stream | The mode for reading data for migration and verification. |
| datasource.sybase.charset | No | utf-8 | The character set for the Sybase database. |
| datasource.sybase.metadata.uppercase | No | true | Specifies whether table names are case sensitive. The default value is true. Set this parameter to false if the destination database is a MySQL tenant in OceanBase Database. Retain the default value in other cases. By default, table names are in lowercase for MySQL tenants in OceanBase Database, and are in uppercase for Oracle tenants in OceanBase Database. |
| datasource.timezone | No | +00:00 | The time zone. |
| filter.master.blacklist | Yes | None (nullable) | The blacklist. Multiple entries are separated by vertical bars (|). Each entry is in the format of schema;tablename;column. Each section is a regular expression, where * is equivalent to the wildcard .*. Sample values: ^db$;^table1$;.*|^db$;^table2$;.* and .*;.*;.* |
| filter.master.whitelist | Yes | None (nullable) | The whitelist. Multiple entries are separated by vertical bars (|). Each entry is in the format of schema;tablename;column. Each section is a regular expression, where * is equivalent to the wildcard .*. Sample values: ^db$;^table1$;.*|^db$;^table2$;.* and .*;.*;.* The following example matches all columns of OBDBA.ROWID_TEST except the OMS_OBJECT_NUMBER, OMS_RELATIVE_FNO, OMS_BLOCK_NUMBER, and OMS_ROW_NUMBER columns: ^OBDBA$;^ROWID_TEST$;^(?!OMS_OBJECT_NUMBER)(?!OMS_RELATIVE_FNO)(?!OMS_BLOCK_NUMBER)(?!OMS_ROW_NUMBER).*$ |
| filter.verify.inmod.keys | No | 100 | The maximum number of data records that can be queried by using a primary key or a unique key in a batch in the destination database. |
| filter.verify.inmod.tables | No | "" | The tables to be verified in IN mode. If this parameter is not specified, prefix indexes are verified in IN mode by default. If this parameter is specified, only matched tables are verified in IN mode. The value is in the same format as a blacklist or whitelist, for example, ^sqltest$;^prefix_index_test_bigdata$;.*. In IN-mode verification, data is queried from the source database by using the sharding column; the primary key or unique key values are then parsed from that data and used in a key IN (...) query against the destination database, and the two result sets are compared. Notice: IN-mode verification is less efficient than the default comparison method and is inapplicable when the destination database contains a large amount of data. |
| filter.verify.inmod.workers | No | 1 | The number of concurrent IN queries in the destination database. |
| filter.verify.rectify.type | No | No | Specifies whether to correct data in the destination database. The SQL files that are successfully executed for correction are saved in verify/{subId}/{schema}/rectify/suc/{table}.sql, and the SQL files that fail to be corrected are saved in verify/{subId}/{schema}/rectify/err/{table}.sql. |
| force.split.by.rowid | No | false | If the source database has a hidden primary key, you can set this parameter to true so that the hidden column is forcibly used as the primary key during data migration and verification. For example, the Oracle database has the hidden primary key ROWID, and OceanBase Database has the hidden primary key __pk_increment. If you set this parameter to false, the primary key or unique key of the table is used as the primary key during data migration and verification. You can set this parameter to true only when the source database is in ORACLE, OB_IN_ORACLE_MODE, or OB10 mode. You must set this parameter to false for other database types. |
| limitator.datasource.connections.max | No | 50 | The maximum size of the database connection pool. The setting of this parameter applies to both the source and destination databases. For example, if you set this parameter to 100, the maximum number of connections is 100 for both the source and destination databases. The value must be greater than 0 and greater than the maximum number of concurrent requests. The specific value is subject to the relationship between the number of concurrent requests and the number of connections. |
| limitator.datasource.image.ob10freecpu.min | No | 30 | The CPU protection threshold for OceanBase Database, used to prevent CPU resource exhaustion. The default value is 30, indicating that connections are no longer obtained when the idle CPU percentage falls below 30%. The value 0 deactivates CPU protection. This parameter has a lower priority than the limitator.datasource.image.ob10freememory.min parameter, which prevents memory resource exhaustion: it is deactivated while data writes are suspended based on the memory threshold, and takes effect only when that threshold has not been reached or when the database is the source database, to which limitator.datasource.image.ob10freememory.min does not apply. |
| limitator.datasource.image.ob10freememory.min | No | 20 | The memory protection threshold for OceanBase Database. Value range: 10 to 100. This value is the percentage of idle memory space of OceanBase Database that triggers data write suspension. For example, the value 30 indicates that when the percentage of idle memory space is less than 30%, data writes are suspended, and the program keeps waiting until the percentage of idle memory space exceeds 30%. This parameter takes effect only on an OceanBase database that serves as the destination of a data migration link. |
| limitator.db2.graphic.rtirm | No | false | Specifies whether to delete spaces when data of the graphic type is read from the DB2 database. |
| limitator.empty.table.select.parallel | No | 16 | The number of concurrent hints based on which the Oracle database determines whether a table is empty. SELECT /*+parallel(%d)*/1 FROM %s WHERE ROWNUM<2 |
| limitator.image.insert.batch.max | No | 500 | The maximum number of records inserted into the destination database that triggers a commit. For example, the value 200 indicates that a maximum of 200 records can be inserted before a commit must be performed. |
| limitator.java.opt | No | None | The runtime Java virtual machine (JVM) parameters. This parameter is checked when the checker script /home/ds/bin/checker_new.sh is started. If it is specified, its value is used to start the checker script. |
| limitator.noneed.retry.exception | No | None | The exceptions for which SQL statement execution exits directly without retries, for example, a "table not found" exception. |
| limitator.null.replace.enable | No | true | Specifies whether to replace null values read by the JDBCWriter when the value of a gb18030 field is any one of chr(129) to chr(254) and a NOT NULL constraint is present in the Oracle database. If you set this parameter to true, the null values are replaced with the value specified by limitator.null.replace.string. |
| limitator.null.replace.string | No | Space | The string used to replace null values. This parameter is used together with the limitator.null.replace.enable parameter. |
| limitator.oceanbase.index.useuk | No | true | Specifies whether to use a unique key when no primary key is available and the source database is OceanBase Database. |
| limitator.oom.avoid | No | false | Specifies whether to enable out-of-memory (OOM) prevention. After OOM prevention is enabled, the system measures and records the actual memory size, which may affect system performance. To enable OOM prevention, you must set useCursorFetch to true for setFetchSize to take effect, which involves the JDBC connection parameters useCursorFetch and useServerPrepStmts. Once OOM prevention is enabled, ensure that the invalid time value 0000-00-00 00:00:00 and FLOAT values beyond the precision range do not exist. |
| limitator.platform.split.threads.number | No | limitator.platform.threads.number/8<8?8:limitator.platform.threads.number/8 | The size of the thread pool used for task splitting. The minimum value is 8. The value of limitator.datasource.connections.max must be at least the sum of the values of limitator.platform.threads.number and limitator.platform.split.threads.number. |
| limitator.platform.threads.number | No | 3 | The maximum size of the worker thread pool during migration and verification. This parameter is used together with limitator.datasource.connections.max. Generally, the number of connections must be greater than the maximum size of the worker thread pool. Otherwise, some worker threads must wait for connections. |
| limitator.prefix.index.action | No | 1 | The handling method that is used when the primary key is a prefix index in a link for migrating data from a MySQL database to a MySQL tenant in OceanBase Database. |
| limitator.prepared.splitors | No | 1000 | Task splitting is suspended when the number of split tasks minus the number of running tasks exceeds the value of this parameter, that is, when too many split tasks are waiting in the queue. |
| limitator.queue.size | No | limitator.select.batch.max*4 | Migration: The size of the cache queue in which the data read from the source database is stored. Verification: The size of the cache queue and JOIN cache queue in which the data read from the source and destination databases is stored. The actual queue size is the value of this parameter multiplied by 2. Default value: limitator.select.batch.max*4. |
| limitator.query.withorder | No | true | Specifies whether to sort queries. The default value is true, which means that the queries are to be sorted. This parameter is implemented only for MySQL databases and MySQL tenants in OceanBase Database. |
| limitator.resume.verify.fromkeys | No | false | Specifies whether to reverify only the records that were found inconsistent in the last verification. This parameter takes effect only when the related parameters are set to the required values. |
| limitator.reviewer.period | No | 3 | The review interval, in seconds. |
| limitator.reviewer.review.batch.max | No | 100 | The number of keys queried in a review. |
| limitator.reviewer.rounds.max | No | 20 | The maximum number of reviews allowed in a verification process. For inconsistent data found in verification, the review process queries these keys in the source and destination databases and performs comparison multiple times. This parameter specifies the maximum number of times of comparison allowed. |
| limitator.reviewer.time.max | No | 60 | The maximum review time, in seconds. |
| limitator.select.batch.max | No | 3000 | The maximum number of records read from the source database in a batch. This parameter affects stmt.setFetchSize(fetchSize) and specifies the number of records migrated or verified in each batch during primary key-based migration and verification. When an Oracle ROWID is used as the primary key for migration, this parameter is used to calculate the size of each block as follows: to-be-split data volume = field length of the table × limitator.select.batch.max; block size = to-be-split data volume/8 KB. This parameter also affects the size of the queue read from the source database: queueSize = limitator.select.batch.max × 4. |
| limitator.splitor.blocks | No | 128 | The number of blocks of each shard. This parameter is valid only when tables without primary keys in the Oracle database are migrated. |
| limitator.splitor.block.number.max | No | Long.MAX_VALUE | The maximum number of blocks of a shard. When this value is exceeded, data is split based on data files. |
| limitator.splitor.compare.threads.number | No | 1 | The number of comparison threads during the verification of a single shard. This parameter is valid only for the verification process. |
| limitator.split.usecondition | No | false | Specifies whether to use conditions in the SQL statements for querying data by using the sharding column. |
| limitator.splitor.writer.number | No | 1 | The number of tasks written to the destination database by using each sharding column. This parameter is valid only for data migration projects. |
| limitator.sql.exec.max.last.time | No | 3600 | The maximum SQL statement retry time, in seconds. |
| limitator.table.diff.max | No | 1000000 | The maximum number of inconsistent records found during verification. If this value is exceeded, the verification ends and the review is not performed. |
| limitator.table.nonunique.max | No | 10000 | The maximum number of records that can be migrated without indexes. |
| limitator.verify.many2one | No | false | Specifies whether to enable the many-to-one table verification mode. In this mode, the verification is successful if the data in the source table is found in the destination table. The value "true" specifies to enable the many-to-one table verification mode. |
| mapper.from_master_to_image.list | No | None | The mapping of schemas from the source database to the destination database. The value is usually in the format of sourceSchema;*;*=destSchema;*;*, with a maximum of four sections. Multiple mappings are separated with vertical bars (|). |
| rectifier.image.enable | No | false | Specifies whether to automatically correct data in the destination table. This parameter is used in the rectification process. Generally, we recommend that you do not set the value to true. |
| rectifier.image.operator.delete | No | false | Specifies whether to correct deleted data. This parameter is used in the rectification process. Generally, we recommend that you do not set the value to true. |
| rectifier.image.operator.insert | No | false | Specifies whether to correct inserted data. This parameter is used in the rectification process. Generally, we recommend that you do not set the value to true. |
| rectifier.image.operator.update | No | false | Specifies whether to correct updated data. This parameter is used in the rectification process. Generally, we recommend that you do not set the value to true. |
| sampler.verify.ratio | No | 100 | The percentage of records sampled for comparison. The value must be greater than 0 and less than or equal to 100. |
| src.record.filter.mapping | No | None | A GroovyRule configuration, which is required when task.split.mode is set to true. |
| task.resume | No | false | Set the value to "false" if the task is run for the first time and to "true" if the task is run for recheck. |
| src.table.whitelist | No | None | A GroovyRule configuration, which is required when task.split.mode is set to true. |
| task.active.active | No | false | The active-active flag. The value "true" indicates an active-active link. |
| task.id | Yes | None | The unique ID of the checker, which is related to the runtime directory. |
| task.subId | Yes | None | The sub-ID of the task. The initial value is 1, and the value increments by 1 each time a recheck is performed. A directory is created for each sub-ID in the task directory to record the corresponding running result files: /home/ds/run/{taskname}/{task.type}/{task.subId}. |
| task.type | Yes | None | The task type. Valid values: migrate or verify. |
| weak.consistency.read | No | false | Specifies whether to enable weak consistency read. This parameter is valid in OceanBase Database. The value "true" specifies to enable weak consistency read, which corresponds to the statement set @@ob_read_consistency='weak'. |
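To illustrate the filter.master.blacklist and filter.master.whitelist format described above, the following Python sketch evaluates a schema/table/column triple against a rule string. The matches_filter helper is hypothetical and only approximates the matching behavior for illustration; it is not part of OMS.

```python
import re

def matches_filter(rule: str, schema: str, table: str, column: str) -> bool:
    """Check whether a schema/table/column triple matches a filter rule.

    A rule consists of |-separated entries; each entry holds three
    ;-separated regular expressions for schema, table, and column
    (.* is the wildcard). The triple matches if any entry matches.
    """
    for entry in rule.split("|"):
        schema_re, table_re, column_re = entry.split(";")
        if (re.fullmatch(schema_re, schema)
                and re.fullmatch(table_re, table)
                and re.fullmatch(column_re, column)):
            return True
    return False

# The sample whitelist from the table: all columns of db.table1 and db.table2.
rule = "^db$;^table1$;.*|^db$;^table2$;.*"
print(matches_filter(rule, "db", "table1", "id"))   # True
print(matches_filter(rule, "db", "table3", "id"))   # False
```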
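The block-size formulas quoted for limitator.select.batch.max can be worked through numerically. The row length below is an assumed example value, not an OMS default:

```python
# Worked example of the ROWID-based splitting formulas:
#   to-be-split data volume = field length of the table * limitator.select.batch.max
#   block size              = to-be-split data volume / 8 KB
select_batch_max = 3000      # limitator.select.batch.max (default)
row_length_bytes = 512       # assumed field length of one table row

data_volume = row_length_bytes * select_batch_max   # 1536000 bytes to split
blocks = data_volume / (8 * 1024)                   # number of 8 KB blocks
print(blocks)  # 187.5
```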
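As an illustration of the mapper.from_master_to_image.list format, the following sketch extracts the schema sections from a mapping value. The parse_mappings helper is hypothetical and ignores the non-schema sections for simplicity:

```python
def parse_mappings(value: str) -> dict:
    """Parse a mapper.from_master_to_image.list value into a dict that maps
    each source schema to its destination schema.

    Each |-separated pair has the form sourceSchema;*;*=destSchema;*;*.
    """
    mappings = {}
    for pair in value.split("|"):
        src, dest = pair.split("=")
        src_schema = src.split(";")[0]       # first section is the schema
        dest_schema = dest.split(";")[0]
        mappings[src_schema] = dest_schema
    return mappings

print(parse_mappings("crm;*;*=crm_new;*;*|erp;*;*=erp_new;*;*"))
# {'crm': 'crm_new', 'erp': 'erp_new'}
```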
Checker parameters