OceanBase Migration Service (OMS) allows the admin user to modify system parameters and general users to view system parameters.
Procedure
Log in to the OMS console.
In the left-side navigation pane, choose System Management > System Parameters.
The System Parameters page displays details about parameters in the following columns: Parameter Name, Value, Module, Description, and Modified At. The following table describes the system parameters and their default values.

| Parameter | Description | Default value |
|---|---|---|
| oms.oceanbase.logproxy.pool | The configurations of oblogproxy. OMS automatically identifies this parameter. For more information, see oblogproxy parameters. | {"default":""} |
| operation_audit_log.enable | Specifies whether to enable operation audit. For more information about the operation audit feature, see Operation audit. | false |
| operation_audit_log.retention_time | The retention period of operation audit records, in days. The recommended value ranges from 1 to 1095. | 7 |
| oms.captcha.enable | Specifies whether to enable the verification code feature. After you change the value to true, an image verification code appears on the login page. The verification code times out after 10 minutes. You must enter the verification code to log in to OMS. A timeout or input error causes a login failure. | false |
| oms.user.password.expiration.date.config | The expiration strategy for different user passwords. | {"rootRolePasswordValidityDays":90,"rootViewerRolePasswordValidityDays":90,"adminRolePasswordValidityDays":90,"adminViewerRolePasswordValidityDays":90,"userRolePasswordValidityDays":90,"userViewerRolePasswordValidityDays":90,"userPasswordValidityDaysTipsThreshold":30} |
| precheck.timeout.seconds | The timeout period of a precheck task, in seconds. | 600 |
| mysql.store.metabuilder.filter | Specifies whether the MySQL store filters metadata based on the allowlist. The value true indicates that metadata is filtered based on the allowlist, and the value false indicates that all metadata is pulled without filtering. If no online DDL tools based on the RENAME TABLE statement are used, we recommend that you set this parameter to true to reduce the time for obtaining metadata. If online DDL statements are used, set this parameter to false. Otherwise, subsequent data cannot be consumed after an online DDL statement is executed. | false |
| mysql_to_obmysql.charset.mapping | The conversion rule for character sets that are not supported in a task that migrates data from a MySQL database to a MySQL-compatible tenant of OceanBase Database. Example: [{"charset":"utf16le","mappedCharset":"utf16"},{"charset":"*","mappedCharset":"utf8mb4"}] | [] |
| mysql_to_obmysql.collation.mapping | The conversion rule for collations that are not supported in a task that migrates data from a MySQL database to a MySQL-compatible tenant of OceanBase Database. Example: [{"collation":"utf16le_general_ci","mappedCollation":"utf16_general_ci"},{"collation":"*","mappedCollation":"utf8mb4_general_ci"}] | [] |
| obmysql41_to_obmysql40_and_earlier.collation.mapping | The conversion rule for an unsupported collation when you migrate data from a MySQL-compatible tenant of OceanBase Database V4.1.0 to a MySQL-compatible tenant of OceanBase Database of an earlier version. | [{"collation":"latin1_swedish_ci","mappedCollation":"utf8mb4_general_ci"}] |
| obmysql41_to_obmysql40_and_earlier.charset.mapping | The conversion rule for an unsupported character set when you migrate data from a MySQL-compatible tenant of OceanBase Database V4.1.0 to a MySQL-compatible tenant of OceanBase Database of an earlier version. | [{"charset":"latin1","mappedCharset":"utf8mb4"}] |
| alarm.thresholds | The alert thresholds. failedLengthOfTimeThreshold: the time threshold after which a task fails and is alerted. syncDelayThreshold: the delay alert threshold for a synchronization task. syncFailedLengthOfTimeThreshold: the time threshold after which a synchronization task fails and is alerted. migrateDelayThreshold: the delay alert threshold for a migration task. migrateFailedLengthOfTimeThreshold: the time threshold after which a migration task fails and is alerted. alarmRestrainTimeOfMin: the alert suppression time by alert level. HIGH: the high protection level. MEDIUM: the medium protection level. LOW: the low protection level. IGNORE: the no protection level. | {"delayThreshold":{"HIGH":30,"MEDIUM":300,"LOW":900},"failedLengthOfTimeThreshold":{"HIGH":30,"MEDIUM":300,"LOW":900},"alarmRestrainTimeOfMin":{"HIGH":3,"MEDIUM":3,"LOW":3,"IGNORE":100},"rule":"OMS_CONFIG_RULE_ALARM_THRESHOLDS"} |
| ha.config | Specifies whether to enable high availability (HA). For more information, see Modify HA configurations. | {"enable":false,"enableHost":false,"enableStore":true,"perceiveStoreClientCheckpoint":false,"enableConnector":true,"enableJdbcWriter":true,"subtopicStoreNumberThreshold":5,"checkRequestIntervalSec":600,"checkJdbcWriterIntervalSec":600,"checkHostDownIntervalSec":540,"checkModuleExceptionIntervalSec":240,"clearAbnormalResourceHours":72} |
| migration.db.support_versions | The source database versions supported in data migration. The key is the database type, and the value is a regular expression that matches the supported database versions. "MYSQL": "(5.5\|5.6\|5.7\|8.0).*" indicates that OMS supports MySQL 5.5, 5.6, 5.7, and 8.0. "MARIADB": "10.[123456].*" indicates that OMS supports MariaDB 10.1.0 to 10.6.x. "ORACLE": "1[01289].*" indicates that OMS supports Oracle 10g, 11g, 12c, 18c, and 19c. "DB2": "(9.7\|10.1\|10.5\|11.1\|11.5).*" indicates that OMS supports DB2 LUW for Linux or AIX 9.7, 10.1, 10.5, 11.1, and 11.5. "POSTGRESQL": "(10\|11\|12\|13).*" indicates that OMS supports PostgreSQL 10.x, 11.x, 12.x, and 13.x. | {"MYSQL": "(5.5\|5.6\|5.7\|8.0).*", "MARIADB": "10.[123456].*", "ORACLE": "1[01289].*", "DB2": "(9.7\|10.1\|10.5\|11.1\|11.5).*", "POSTGRESQL": "(10\|11\|12\|13).*"} |
| migration.mysql.support_collations | The allowlist of collations supported by the source MySQL database in data migration. | ["binary","gbk","gb18030","utf8mb4","utf16","utf8"] |
| migration.mysql.support_charsets | The allowlist of character sets supported by the source MySQL database in data migration. The value is an array in which each element is one MySQL character set. | ["binary","gbk","gb18030","utf8mb4","utf16","utf8"] |
| migration.mysql.support_datatypes | The allowlist of data types supported by the source MySQL database in data migration. The value is an array in which each element is one MySQL data type. | [] |
| migration.oracle.unsupport_datatypes | The blocklist of data types not supported by the source Oracle database in data migration. The value is an array in which each element is one Oracle data type. | ["LONG","LONG RAW","XMLTYPE","UNDEFINED","BFILE","ROWID","UROWID"] |
| ops.dms.logic_name.suffix.pattern | The prefix of the DMS-based logical table in the synchronization task. | Empty |
| ops.store.max_count_per_subtopic | The maximum number of active store processes allowed under a subtopic. | 6 |
| precheck.skippable_flags | Specifies whether to skip precheck items. If precheck items fail and you confirm that they have no impact on the database service, you can set their values to true in the precheck.skippable_flags parameter. The value of this parameter is of the JSON type. Example: {"DB_ACCOUNT_FULL_READ_PRIVILEGE": true, "DB_ACCOUNT_INCR_READ_PRIVILEGE": true, "DB_SERVICE_STATUS": true}. For more information about the keys of different precheck items, see the Precheck items section in this topic. | {} |
| sync.unified.config | The general parameters for an OMS synchronization task. enableHeartBeatRecordToDataHub: specifies whether to deliver heartbeats. enableHadoopVendorsKafkaServer: specifies whether the Kafka server supports Hadoop. disableIdentificationAlgorithm: specifies whether to disable hostname (domain name) verification for the address of a created Kafka data source that requires SSL authentication. If the provided SSL root certificate does not contain the address of the Kafka data source, you can set this parameter to true to disable hostname verification. checkStoreStartedMinSyncProcess: the minimum synchronization progress for determining whether the store is properly started. Default value: 3s. You can change the value and the change takes effect in real time. Full migration starts only when the Store component is running and the synchronization progress exceeds the specified minimum value. fullJvmMem: the initial memory of the Full-Import component. Default value: 4096 MB. incrJvmMem: the initial memory of the Incr-Sync component. Default value: 2048 MB. | {"enableHeartBeatRecordToDataHub":false,"enableHadoopVendorsKafkaServer":false,"disableIdentificationAlgorithm":false,"checkStoreStartedMinSyncProcess":3,"fullJvmMem":4096,"incrJvmMem":2048} |
| store.topic.mode.config | The rule that is used to build an allowlist of store subtopics in a data synchronization task in OMS. OceanBase Database supports store subtopics in unshared and shared modes, and store subtopics in shared mode can be shared within a cluster or among tenants. In the oceanbase field, you can specify UN_SHARE, DATABASE, OCEANBASE_TENANT, or OCEANBASE_CLUSTER for mode; the mode_num field specifies the maximum subscription granularity for the specified mode. Sharing within a cluster: a store is shared within a cluster, and the store configurations in tenants are invalid. The first created store is reused, and a new store is created only when the current store does not meet the timestamp requirements. Sharing among tenants: when the value of mode_num is 1, different stores are created for different tenants; when the value of mode_num is greater than 1, multiple tenants share the same store. The number of affected tenants is the value of mode_num minus 1, the first created store is reused, and a new store is created only when the current store does not meet the timestamp requirements. OceanBase Database in MySQL-compatible mode supports the subscription of store subtopics based only on service instances; in the mysql field, you can specify INSTANCE or UN_SHARE for mode. OceanBase Database in Oracle-compatible mode supports the subscription of store subtopics based only on databases; in the oracle field, you can specify DATABASE or UN_SHARE for mode. | {"oceanbase":{"mode":"OCEANBASE_TENANT","modeNum":1},"mysql":{"mode":"INSTANCE","modeNum":1},"oracle":{"mode":"DATABASE","modeNum":1}} |
| sync.connnector.max.size | The maximum number of concurrent data synchronization tasks. | 2 |
| sync.ddl.supported | The DDL operations supported for data synchronization tasks. | {"supportConfigs":{"ADB_SINK":["ALTER_TABLE","ALTER_TABLE_ADD_COLUMN","ALTER_TABLE_MODIFY_COLUMN"],"DATAFLOW_SINK":["ALTER_TABLE","ALTER_TABLE_ADD_COLUMN","ALTER_TABLE_MODIFY_COLUMN"]}} |
| store.logic.config.url.config | If the ConfigUrl of OceanBase Database Proxy (ODP) logical tables cannot be directly obtained, you must manually specify it by using this parameter. The key of configUrlMap is {ip}:{port}-{cluster}, and the value is the correct ConfigUrl. | {"enabled":false,"configUrlMap":{}} |
| migration.timeout | The timeout configuration for the migration object. | {"ddl.timeout.in.private.cloud": 120000, "ddl.timeout.in.public.cloud": 600000} |
| migration.db.dest.support_versions | The target database versions supported in data migration. | {"POLARDB_X_1": {"OB_MYSQL": "(1\|2\|3).*"}} |
| migration.record.init.batch_size | The initial batch size of schema migration objects. | 100 |
| is.show.polardb.public | Specifies whether to display PolarDB-X 1.0 data sources. | false |
| oms.user.task.allocate.count.switch | The maximum number of tasks that the admin user can allocate. | {"allocateSwitch":false,"totalCount":0} |
| datasource.multi_version.driver.config | The supported versions of database drivers. | {"MYSQL":{"com.mysql.cj.jdbc.Driver":[],"shade.com.mysql.jdbc.Driver":[]},"POLARDB_X_1":{"com.mysql.cj.jdbc.Driver":[],"shade.com.mysql.jdbc.Driver":[]}} |
| datasource.mysql.driver.should_switch_prompts | The prompts displayed when you switch the driver of the MySQL data source. | ["CLIENT_PLUGIN_AUTH is required","Unknown system variable 'performance_schema'"] |
| supervisor.config | The configuration of the Supervisor component. | {"configMap":{}} |
| store.jvm.config.default | The default JVM configuration delivered by the Store component. | {"MYSQL_STORE":{"memory":2048,"enable":true},"PG_STORE":{"memory":2048,"enable":true},"enable":true} |
| project.transfer.object.modify.config | The configuration of the feature that dynamically adds or removes table objects. | {"incrSyncRealtimeThreshold":60,"storeRealtimeThreshold":60,"storePullBackDuration":60,"inheritableStoreConfigKeys":[],"inheritableConnectorConfigKeys":[]} |
| dataflow.query.batch.size | The number of objects queried by the Dataflow component at a time. | {"POLARDB_X_1":999,"OB_ORACLE":999,"OB_MYSQL":999,"POLARDB_X_2":999,"ORACLE":999,"DEFAULT":10000} |
| struct.transfer.retry.config | The parameters related to the retry of a schema migration task. | {"enabled":true,"max.attempts":5,"skippable.errors":["Duplicate key name","name is already used by an existing object","already exists"],"non.retryable.errors":["Out of resource","execute ddl while there are some long running ddl on foreign key related table not allowed","fulltext index is disabled by default not supported","out of disk space"],"retryable.errors":{"Ignore":"OMS","Cannot do an operation on a closed statement":"OMS","supervisor restart":"OMS","oms inner service network error":"OMS","Failed to invoke":"OMS","Timeout":"DB","Add index failed":"DB","Entry not exist":"DB","unexpected end of stream":"DB","could not load system variables":"DB","No memory or reach tenant memory limit":"DB"}} |
| struct.transfer.config | The parameters related to the execution of a schema migration task. | {"dbcat.ob.query.timeout": 15,"ob.parallel": 2,"independent.obj.convert.batch.size": 50,"independent.obj.convert.partition.size": 10,"independent.obj.execute.batch.size": 50,"independent.obj.execute.partition.size": 2,"independent.core.pool.size": 256,"independent.max.pool.size": 256,"independent.queue.capacity": 16,"execute.ob.query.timeout": 15,"execute.ob4x.index.parallel": 2,"global.max.parallel": 300,"project.fetch.max.parallel": 4,"project.fetch.queue.size": 1,"project.execute.max.parallel": 4,"project.execute.queue.size": 1,"project.fetch.idle.interval.ms": 1000,"project.fetch.scan.batch.size": 64,"project.fetch.group.size": 16,"project.fetch.cache.size": 128,"project.execute.idle.interval.ms": 1000,"project.execute.scan.batch.size": 4,"project.execute.group.size": 1,"project.execute.cache.size": 8,"project.async.action.watcher.idle.interval.ms": 1000} |

Click the edit icon in the Value column for a specified parameter.
In the Modify Value dialog box, set Value or click Reset to Default.
Click OK.
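
Because most of these parameters take JSON values, it can help to check a candidate value locally before you paste it into the Modify Value dialog box. The following sketch is not part of OMS; it uses mysql_to_obmysql.charset.mapping only as an illustrative example and simply confirms that the value is well-formed JSON with the shape shown in the table above.

```python
# Local sanity check for a JSON-valued OMS system parameter before you paste it
# into the Modify Value dialog box. Not part of OMS; the parameter choice and the
# candidate value below are only illustrative.
import json

candidate = """
[{"charset": "utf16le", "mappedCharset": "utf16"},
 {"charset": "*", "mappedCharset": "utf8mb4"}]
"""

try:
    value = json.loads(candidate)  # the value must be well-formed JSON
except json.JSONDecodeError as err:
    raise SystemExit(f"Invalid JSON, fix it before saving: {err}")

# mysql_to_obmysql.charset.mapping expects an array of mapping rules, each with
# a "charset" key and a "mappedCharset" key (see the example in the table above).
assert isinstance(value, list), "expected a JSON array"
for rule in value:
    assert "charset" in rule and "mappedCharset" in rule, f"incomplete mapping rule: {rule}"

# Print a compact form that can be pasted into the dialog box.
print(json.dumps(value, separators=(",", ":")))
```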
Precheck items
The following table describes the precheck items that are controlled by the precheck.skippable_flags parameter. The value true indicates that the precheck item can be skipped, and the value false indicates that the precheck item cannot be skipped. For example, if the prechecks of the unique key and the foreign key can be skipped, you can configure the precheck.skippable_flags parameter as follows:
{
"DB_UK_INDEX": true,
"DB_FOREIGN_REFERENCE":true,
}
You can log in to the OMS console, go to the details page of a data migration task, and view the names of the precheck items on the Pre-check tab, which are prefixed with "Source-" or "Target-".
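
The enumeration names in the table below are the keys that you use in the precheck.skippable_flags parameter. If you prefer to generate the JSON value rather than write it by hand, a minimal sketch looks like the following. The script is not part of OMS, and the item list is only an example; it prints the same JSON as the example above.

```python
# Compose a value for the precheck.skippable_flags system parameter from the
# enumeration names listed in the table below. Not part of OMS; it only produces
# the JSON text to enter on the System Parameters page.
import json

# Enumeration names of the precheck items that you have confirmed are safe to skip.
skippable_items = ["DB_UK_INDEX", "DB_FOREIGN_REFERENCE"]

flags = {item: True for item in skippable_items}
print(json.dumps(flags, indent=2))
# Output:
# {
#   "DB_UK_INDEX": true,
#   "DB_FOREIGN_REFERENCE": true
# }
```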

| Precheck item | Enumeration name |
|---|---|
| Check whether the LOB field exceeds 48 MB in length | DB_TABLE_LOB_SIZE |
| Check the ROW_MOVEMENT parameter | ROW_MOVEMENT |
| Check the time zone of the database | DB_TIMEZONE |
| Check the privilege of the account to create a table | DB_ACCOUNT_CREATE_PRIVILEGE |
| Check the minimum privileges for an Oracle account | DB_ORACLE_MIN_PRIVILEGE |
| Check the table type | DB_TABLE_TYPE |
| Check the connectivity of the database | RDB_CONNECT |
| Check the connectivity of the message queue | MQ_CONNECT |
| Check the connectivity of the logical table | LOGIC_DB_CONNECT |
| Check the existence of logical tables | LOGIC_TABLE_EXIST |
| Check the privilege to obtain configUrl | LOGIC_DB_ACCOUNT_INCR_DRC_READ_PRIVILEGE |
| Check the existence of message queue topics | MQ_TOPIC_EXIST |
| Check the existence of TiCDC Kafka topics | TIDB_KAFKA_TOPIC_EXIST |
| Check the existence of Datahub topics for schema synchronization | DATAHUB_TOPIC_NOT_EXIST |
| Check the existence of databases | RDB_SCHEMA_EXIST |
| Check the case-sensitivity for database names | DB_CASE_SENSITIVE |
| Check the database version | DB_VERSION |
| Check the wal_level parameter of the database | DB_WAL_LEVEL |
| Check the SQL mode of the database | DB_SQL_MODE |
| Check the incremental logs | DB_INCR_LOG |
| Check the clock synchronization of the database | DB_TIME_SYNC |
| Check the primary/standby database | DB_MASTER_SLAVE |
| Check the maximum packet size allowed | DB_MAX_ALLOWED_PACKET |
| Check the read privilege of the account on oceanbase.memstore | DB_MEMSTORE_READ_PRIVILEGE |
| Check the CREATE privilege of the MySQL account | DB_MYSQL_CREATE_PRIVILEGE |
| Check whether the MySQL account has authorized OMS to maintain heartbeat data | DB_MYSQL_UPDATE_HEARTBEAT_PRIVILEGE |
| Check the privilege of the drc_user user to read the oceanbase database in the sys tenant | DB_STRUCT_OB_SYSTEM_VIEW_READ_PRIVILEGE |
| Check the connectivity of OceanBase cluster nodes | DB_NODE_CONNECT |
| Check the uniqueness of the table name | DB_TABLE_NAME_UNIQUE |
| Check the existence of the table | RDB_TABLE_EXIST |
| Verify that no LOB field exists | LOB_FIELD_NOT_EXIST |
| Check the schema consistency of logical tables | LOGIC_TABLE_SCHEMA_CONSISTENCY |
| Verify that the same databases are not used as both the source and the target, which would constitute circular replication | LOGIC_TABLE_SAME_SOURCE_AND_DEST |
| Check the schema migration privilege | DB_STRUCT_PRIVILEGE |
| Check the write privilege of the account | DB_ACCOUNT_WRITER_PRIVILEGE |
| Check the full read privileges of the account | DB_ACCOUNT_FULL_READ_PRIVILEGE |
| Check the incremental read privilege of the account | DB_ACCOUNT_INCR_READ_PRIVILEGE |
| Check the character set of the database | DB_CHARSET |
| Check the maximum number of fields in table migration | DB_COLUMN_COUNT |
| Check the database constraints | DB_CONSTRAINT |
| Check the data type of the primary key | DB_DATA_TYPE_INDEX |
| Check the database engine | DB_ENGINE |
| Check the integrity of foreign key dependencies between objects | DB_FOREIGN_REFERENCE |
| Check the function-based unique index table | DB_FUNCTION_BASED |
| Check the full read privileges of the internal accounts | DB_INNER_ACCOUNT_FULL_READ_PRIVILEGE |
| Check the existence of foreign key tables | DB_NO_FOREIGN_KEY |
| Check whether the foreign key constraints of Oracle databases are supported | DB_ORACLE_FK_SUPPORT_CHECK |
| Verify that no pseudo-column exists | DB_PSEUDO_COLUMN_CHECK |
| Check the data type of the database | DB_DATA_TYPE |
| Check whether the allowlist exceeds 64 KB in length | DB_WHITE_LIST_LENGTH |
| Check the consistency of case-sensitivity configurations for database and table names | DB_LOWER_CASE_TABLE_NAMES |
| Check the read privilege on the OceanBase Database system view gv$sysstat | OB_SYS_STAT_VIEW_READ_PRIV |
| Check the integrity of dependencies between objects | RDB_OBJECT_DEPENDENCY_INTEGRITY |
| Check the limits on reverse incremental migration from Oracle databases | DB_ORACLE_REVERSE_LIMIT |
| Check the limits on reverse incremental migration | DB_REVERSE_LIMIT |
| Check whether the same table is used as the source and target | DB_TABLE_CYCLICALLY |
| Check the unique key | DB_UK_INDEX |
| Check the ROW_MOVEMENT parameter of the database | DB_ROW_MOVEMENT |
| Check the unique key of the logical table | LOGIC_TABLE_UK_INDEX |
| Check the full read privileges of the account on logical tables | LOGIC_DB_ACCOUNT_FULL_READ_PRIVILEGE |
| Check the incremental read privilege of the account on logical tables | LOGIC_DB_ACCOUNT_INCR_READ_PRIVILEGE |
| Check the schema consistency | SYNC_SCHEMA_CONSISTENCY |
| Check whether column-level supplemental logging is enabled | DB_COL_LEVEL_SUPPLEMENTAL_LOG |
| Check the read privilege on OceanBase Database system views for full migration or verification | OB_MYSQL_SYS_VIEW_READ_PRIV |
| Check the read privilege on system views | STRUCT_SYS_VIEW_READ_PRIV |
| Check the SHOW VIEW privilege of MySQL accounts | DB_MYSQL_SHOW_VIEW_PRIVILEGE |
| Check whether log archiving is enabled for OceanBase Database V4.x. Notice: This precheck item is available only for physical data sources of OceanBase Database. | OB_ARCHIVE_LOG |