OceanBase Migration Service

V4.0.2 Enterprise Edition

Modify system parameters

Last Updated: 2026-04-14 07:36:47

OceanBase Migration Service (OMS) allows the admin user to modify system parameters and general users to view system parameters.

Procedure

  1. Log on to the OMS console.

  2. In the left-side navigation pane, choose System Management > System Parameters.

    The table on the System Parameters page contains the following columns: Parameter Name, Value, Module, Description, and Modified At.

     The following list describes the system parameters. Each entry gives the parameter name, its description, and its default value.

     oms.oceanbase.logproxy.pool: The configurations of oblogproxy. OMS automatically identifies this parameter. For more information, see oblogproxy parameters. Default value: {"default":""}
     operation_audit_log.enable: Specifies whether to enable operation audit. Default value: false
     operation_audit_log.retention_time: The retention period of operation audit records. The recommended value ranges from 1 to 1095, in days. Default value: 7
     oms.captcha.enable: Specifies whether to enable the verification code feature. After you change the value to true, an image verification code appears on the logon page. The verification code times out after 10 minutes, and you must enter it to log on to OMS. A timeout or input error causes a logon failure. Default value: false
     oms.user.password.expiration.date.config: The expiration strategy for the passwords of different users (see the example after this procedure). Default value: {"rootRolePasswordValidityDays":90,"rootViewerRolePasswordValidityDays":90,"adminRolePasswordValidityDays":90,"adminViewerRolePasswordValidityDays":90,"userRolePasswordValidityDays":90,"userViewerRolePasswordValidityDays":90,"userPasswordValidityDaysTipsThreshold":30}
     mysql.store.metabuilder.filter: Specifies whether the MySQL store filters metadata based on the whitelist. Valid values: true and false.
    • true: indicates that metadata is filtered based on the whitelist.
    • false: indicates that all metadata is pulled without filtering.
     In scenarios without online DDL statements (which are implemented by using the RENAME TABLE statement), we recommend that you set this parameter to true to reduce the time required for obtaining metadata. If online DDL statements are used, set this parameter to false; otherwise, subsequent data cannot be consumed after an online DDL statement is executed.
     Default value: false
     mysql_to_obmysql.charset.mapping: The conversion rule for character sets that are not supported in a project for migrating data from a MySQL database to a MySQL tenant of OceanBase Database. Example: [{"charset":"utf16le","mappedCharset":"utf16"},{"charset":"*","mappedCharset":"utf8mb4"}]. Default value: []
     mysql_to_obmysql.collation.mapping: The conversion rule for collations that are not supported in a project for migrating data from a MySQL database to a MySQL tenant of OceanBase Database. Example: [{"collation":"utf16le_general_ci","mappedCollation":"utf16_general_ci"},{"collation":"*","mappedCollation":"utf8mb4_general_ci"}]. Default value: []
     alarm.thresholds: The alert thresholds.
    • failedLengthOfTimeThreshold: the failure alert threshold for a project.
    • syncDelayThreshold: the delay alert threshold for a synchronization project.
    • syncFailedLengthOfTimeThreshold: the failure time alert threshold for a synchronization project.
    • migrateDelayThreshold: the delay alert threshold for a migration project.
    • migrateFailedLengthOfTimeThreshold: the failure time alert threshold for a migration project.
    • alarmRestrainTimeOfMin: the alert suppression time by alert level.
    • HIGH: the high protection level.
    • MEDIUM: the medium protection level.
    • LOW: the low protection level.
    • IGNORE: the No Protection level.
     Default value:
    {"delayThreshold":{"HIGH":30,"MEDIUM":300,"LOW":900},"failedLengthOfTimeThreshold":{"HIGH":30,"MEDIUM":300,"LOW":900},"alarmRestrainTimeOfMin":{"HIGH":3,"MEDIUM":3,"LOW":3,"IGNORE":100},"rule":"OMS_CONFIG_RULE_ALARM_THRESHOLDS"}
     ha.config: Specifies whether to enable high availability (HA). For more information, see Modify HA configurations. Default value: {"enable":false,"enableHost":false,"enableStore":true,"perceiveStoreClientCheckpoint":false,"enableConnector":true,"enableJdbcWriter":true,"subtopicStoreNumberThreshold":5,"checkRequestIntervalSec":600,"checkJdbcWriterIntervalSec":600,"checkHostDownIntervalSec":540,"checkModuleExceptionIntervalSec":240,"clearAbnormalResourceHours":72}
     migration.checker.params.fast: The parameters that take effect when the concurrency of the Full-Import or Full-Verification component is set to Fast.
    • limitator.platform.threads.number: the number of threads.
    • limitator.select.batch.max: the batch query size in full data migration or verification.
    • limitator.image.insert.batch.max: the batch INSERT size in full data migration.
    • limitator.datasource.connections.max: the maximum number of connections. If the number of concurrent threads exceeds the number of connections, the excess concurrency does not take effect.
    • limitator.java.opt: the Java virtual machine (JVM) parameters.
    • task.checker_jvm_param: the JVM parameters of the Full-Import and Full-Verification components.
    • task.new.migrate.job.reader.worker.num: the number of threads used to read data from the source.
    • task.new.migrate.job.writer.worker.num: the number of threads used to write data to the destination.
    • task.new.migrate.job.write.batch.size: the maximum size of a batch for writing data to the destination.
    • task.new.migrate.job.data.queue.size: the size of the queue of the data read. If the amount of data not consumed by the dispatcher exceeds the value of this parameter, the reader is blocked.
    • task.new.migrate.job.batch.queue.size: the size of the batching queue. If the number of batches not consumed by the writer exceeds the value of this parameter, the dispatcher is blocked.
    • task.new.migrate.job.splitor.queue.size: the size of the data splitting queue. If the number of data splits that are not read exceeds this value, the data splitting process is paused.
     Default value:
    {"limitator.platform.threads.number":32,"limitator.select.batch.max":1200,"limitator.datasource.connections.max":50,"limitator.java.opt":null,"task.checker_jvm_param":"-server -Xms16g -Xmx16g -Xmn8g -Xss512k","task.new.migrate.job.reader.worker.num":32,"task.new.migrate.job.writer.worker.num":32,"task.new.migrate.job.write.batch.size":200,"task.new.migrate.job.data.queue.size":32768,"task.new.migrate.job.batch.queue.size":256,"task.new.migrate.job.splitor.queue.size":256}
     migration.checker.params.normal: The parameters that take effect when the concurrency of the Full-Import or Full-Verification component is set to Normal.
    • limitator.platform.threads.number: the number of threads.
    • limitator.select.batch.max: the batch query size in full data migration or verification.
    • limitator.image.insert.batch.max: the batch INSERT size in full data migration.
    • limitator.datasource.connections.max: the maximum number of connections. If the number of concurrent threads exceeds the number of connections, the excess concurrency does not take effect.
    • limitator.java.opt: the Java virtual machine (JVM) parameters.
    • task.checker_jvm_param: the JVM parameters of the Full-Import and Full-Verification components.
    • task.new.migrate.job.reader.worker.num: the number of threads used to read data from the source.
    • task.new.migrate.job.writer.worker.num: the number of threads used to write data to the destination.
    • task.new.migrate.job.write.batch.size: the maximum size of a batch for writing data to the destination.
    • task.new.migrate.job.data.queue.size: the size of the queue of the data read. If the amount of data not consumed by the dispatcher exceeds the value of this parameter, the reader is blocked.
    • task.new.migrate.job.batch.queue.size: the size of the batching queue. If the number of batches not consumed by the writer exceeds the value of this parameter, the dispatcher is blocked.
    • task.new.migrate.job.splitor.queue.size: the size of the data splitting queue. If the number of data splits that are not read exceeds this value, the data splitting process is paused.
     Default value:
    {"limitator.platform.threads.number":8,"limitator.select.batch.max":600,"limitator.datasource.connections.max":50,"limitator.java.opt":null,"task.checker_jvm_param":"-server -Xms8g -Xmx8g -Xmn4g -Xss512k","task.new.migrate.job.reader.worker.num":8,"task.new.migrate.job.writer.worker.num":8,"task.new.migrate.job.write.batch.size":200,"task.new.migrate.job.data.queue.size":32768,"task.new.migrate.job.batch.queue.size":256,"task.new.migrate.job.splitor.queue.size":256}
     migration.checker.params.steady: The parameters that take effect when the concurrency of the Full-Import or Full-Verification component is set to Steady.
    • limitator.platform.threads.number: the number of threads.
    • limitator.select.batch.max: the batch query size in full data migration or verification.
    • limitator.image.insert.batch.max: the batch INSERT size in full data migration.
    • limitator.datasource.connections.max: the maximum number of connections. If the number of concurrent threads exceeds the number of connections, the excess concurrency does not take effect.
    • limitator.java.opt: the Java virtual machine (JVM) parameters.
    • task.checker_jvm_param: the JVM parameters of the Full-Import and Full-Verification components.
    • task.new.migrate.job.reader.worker.num: the number of threads used to read data from the source.
    • task.new.migrate.job.writer.worker.num: the number of threads used to write data to the destination.
    • task.new.migrate.job.write.batch.size: the maximum size of a batch for writing data to the destination.
    • task.new.migrate.job.data.queue.size: the size of the queue of the data read. If the amount of data not consumed by the dispatcher exceeds the value of this parameter, the reader is blocked.
    • task.new.migrate.job.batch.queue.size: the size of the batching queue. If the number of batches not consumed by the writer exceeds the value of this parameter, the dispatcher is blocked.
    • task.new.migrate.job.splitor.queue.size: the size of the data splitting queue. If the number of data splits that are not read exceeds this value, the data splitting process is paused.
     Default value:
    {"limitator.platform.threads.number":4,"limitator.select.batch.max":200,"limitator.datasource.connections.max":50,"limitator.java.opt":null,"task.checker_jvm_param":"-server -Xms4g -Xmx4g -Xmn2g -Xss512k","task.new.migrate.job.reader.worker.num":4,"task.new.migrate.job.writer.worker.num":4,"task.new.migrate.job.write.batch.size":100,"task.new.migrate.job.data.queue.size":32768,"task.new.migrate.job.batch.queue.size":256,"task.new.migrate.job.splitor.queue.size":256}
     migration.db.support_versions: The source database versions supported in data migration. The key is the database type, and the value is a regular expression that matches the supported database versions.
    • "MYSQL": "(5.5|5.6|5.7|8.0).*": indicates that OMS supports MySQL 5.5, 5.6, 5.7, and 8.0.
    • "MARIADB": "10.[12345].*": indicates that OMS supports MariaDB 10.1.0 to 10.5.9.
    • "ORACLE": "1[01289].*": indicates that OMS supports Oracle 10g, 11g, 12c, 18c, and 19c.
    • "DB2": "(9.7|10.1|10.5|11.1|11.5).*": indicates that OMS supports DB2 LUW 9.7, 10.1, 10.5, 11.1, and 11.5 on the Linux or AIX operating system.
     Default value:
    { "MYSQL": "(5.5|5.6|5.7|8.0).*", "MARIADB": "10.[12345].*", "ORACLE": "1[01289].*", "DB2": "(9.7|10.1|10.5|11.1|11.5).*", "POSTGRESQL": "(10).*"}
     migration.mysql.support_collations: The whitelist of collations supported by the source MySQL database in data migration. Default value: ["binary","gbk","gb18030","utf8mb4","utf16","utf8"]
     migration.mysql.support_charsets: The whitelist of character sets supported by the source MySQL database in data migration. The value is an array in which each element is one MySQL character set. Default value: ["binary","gbk","gb18030","utf8mb4","utf16","utf8"]
     migration.mysql.support_datatypes: The whitelist of data types supported by the source MySQL database in data migration. The value is an array in which each element is one MySQL data type. Default value: []
     migration.oracle.unsupport_datatypes: The blacklist of data types unsupported by the source Oracle database in data migration. The value is an array in which each element is one Oracle data type. Default value: ["LONG","LONG RAW","XMLTYPE","UNDEFINED","BFILE","ROWID","UROWID"]
     ops.dms.logic_name.suffix.pattern: The prefix of the DMS-based logical table synchronization task. Default value: empty
     ops.store.max_count_per_subtopic: The maximum number of active store processes allowed under a subtopic. Default value: 6
     precheck.skippable_flags: Specifies whether to skip failed precheck items. If you confirm that failed precheck items have no impact on the database service, you can set their values to true in this parameter. The value is of the JSON type. Example: { "DB_ACCOUNT_FULL_READ_PRIVILEGE": true, "DB_ACCOUNT_INCR_READ_PRIVILEGE": true, "DB_SERVICE_STATUS": true }. For more information about the values of different precheck items, see the "Precheck items" section in this topic. Default value: {}
     sync.unified.config: The general parameter for an OMS synchronization project. It contains the following fields:
    • enableHeartBeatRecordToDataHub: specifies whether to deliver the heartbeats.
    • enableHadoopVendorsKafkaServer: specifies whether the Kafka server supports Hadoop.
    • disableIdentificationAlgorithm: specifies whether to disable hostname (domain name) verification for the address of the created Kafka data source that requires SSL authentication. If the SSL root certificate provided does not contain the address of this Kafka data source, you can set this parameter to true to disable hostname verification.
    • checkStoreStartedMinSyncProcess: the minimum synchronization progress for determining whether the store is properly started. Default value: 3s. You can change the value and the change takes effect in real time.
      The full migration starts only when the store is running and the synchronization progress exceeds the specified minimum value.
    • fullJvmMem: the initial memory of the Full-Import component. Default value: 4096 MB.
    • incrJvmMem: the initial memory of the Incr-Sync component. Default value: 2048 MB.
     Default value:
    {"enableHeartBeatRecordToDataHub":false,"enableHadoopVendorsKafkaServer":false,"disableIdentificationAlgorithm":false,"checkStoreStartedMinSyncProcess":3,"fullJvmMem":4096,"incrJvmMem":2048}
     store.topic.mode.config: The rule that is used to build a whitelist of store subtopics in a data synchronization project in OMS.
    • OceanBase Database supports the sharing of store subtopics within a cluster and among tenants.
      In the oceanbase field, you can specify OCEANBASE_TENANT or OCEANBASE_CLUSTER for mode. The mode_num indicates the maximum subscription granularity for the specified mode.
      • Sharing within a cluster: A store is shared within a cluster. The store configurations in tenants are invalid. The first created store is reused. A new store is created only when the current store does not meet the timestamp requirements.
      • Sharing among tenants:
        When the value of mode_num is 1, different stores are created for different tenants.
        When the value of mode_num is greater than 1, multiple tenants share the same store. The number of affected tenants is the value of mode_num minus 1, and the first created store is reused. A new store is created only when the current store does not meet the timestamp requirements.
    • OceanBase Database in MySQL mode supports the subscription of store subtopics based only on service instances.
      In the mysql field, you can specify only INSTANCE for mode.
    • OceanBase Database in Oracle mode supports the subscription of store subtopics based only on databases.
      In the oracle field, you can specify only DATABASE for mode.
     Default value:
    {"oceanbase":{"mode":"OCEANBASE_TENANT","modeNum":1},"mysql":{"mode":"INSTANCE","modeNum":1},"oracle":{"mode":"DATABASE","modeNum":1}}
     sync.connnector.max.size: The maximum number of concurrent data synchronization projects. Default value: 2
     sync.ddl.supported: The DDL operations supported for data synchronization projects. Default value: {"supportConfigs":{"ADB_SINK":["ALTER_TABLE","ALTER_TABLE_ADD_COLUMN","ALTER_TABLE_MODIFY_COLUMN"],"DATAFLOW_SINK":["ALTER_TABLE","ALTER_TABLE_ADD_COLUMN","ALTER_TABLE_MODIFY_COLUMN"]}}
     store.logic.config.url.config: If the ConfigUrl of OceanBase Database Proxy (ODP) logical tables cannot be directly obtained, you must manually specify it by using this parameter. The key of configUrlMap is {ip}:{port}-{cluster}, and the value is the correct ConfigUrl. Default value: {"enabled":false,"configUrlMap":{}}
     migration.timeout: The timeout period for executing migration objects. Default value: {"ddl.timeout.in.private.cloud": 120000, "ddl.timeout.in.public.cloud": 600000}
  3. Click the edit icon in the Value column for a specified parameter.

  4. In the Modify Value dialog box, set Value or click Reset to Default.

  5. Click OK.
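
For example, suppose you want to extend the password validity period for admin users by modifying the oms.user.password.expiration.date.config parameter. The following value is only an illustrative sketch: the 180-day figures are arbitrary, and the other fields keep the defaults listed above. You would enter the value in the Modify Value dialog box in step 4.

{
  "rootRolePasswordValidityDays": 90,
  "rootViewerRolePasswordValidityDays": 90,
  "adminRolePasswordValidityDays": 180,
  "adminViewerRolePasswordValidityDays": 180,
  "userRolePasswordValidityDays": 90,
  "userViewerRolePasswordValidityDays": 90,
  "userPasswordValidityDaysTipsThreshold": 30
}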

Precheck items

The following list describes the precheck items that are controlled by the precheck.skippable_flags parameter. The value true indicates that the corresponding precheck item can be skipped, and the value false indicates that it cannot be skipped. For example, if the unique key check and the foreign key check can be skipped, configure the precheck.skippable_flags parameter as follows:

{
  "DB_UK_INDEX": true,
  "DB_FOREIGN_REFERENCE": true
}

You can log on to the OMS console, go to the details page of a data migration project, and view the names of the precheck items on the Precheck tab, which are prefixed with "Source-" or "Destination-".

Precheck items and their enumeration names:

• Unique key check: DB_UK_INDEX
• Incremental log check: DB_INCR_LOG
• Foreign key check: DB_FOREIGN_REFERENCE
• Account full read permission check: DB_ACCOUNT_FULL_READ_PRIVILEGE
• Account write permission check: DB_ACCOUNT_WRITER_PRIVILEGE
• Account incremental read permission check: DB_ACCOUNT_INCR_READ_PRIVILEGE
• OB cluster node connectivity check: DB_NODE_CONNECT
• Whitelist 64K length limit check: DB_WHITE_LIST_LENGTH
• Account read privilege on oceanbase.gv$memstore check: DB_MEMSTORE_READ_PRIVILEGE
• Table name uniqueness check: DB_TABLE_NAME_UNIQUE
• LOB field 48 MB limit check: DB_TABLE_LOB_SIZE
• Database ROW_MOVEMENT check: DB_ROW_MOVEMENT
• Table field count check (no more than 508 fields for OceanBase Database earlier than V3.2.0, or 4092 fields for V3.2.0 and later): DB_COLUMN_COUNT
• Database data type check: DB_DATA_TYPE
• Database engine check: DB_ENGINE
• Inner account full read permission check: DB_INNER_ACCOUNT_FULL_READ_PRIVILEGE
• Foreign key constraint support check for Oracle: DB_ORACLE_FK_SUPPORT_CHECK
• Database running status check: DB_SERVICE_STATUS
• Account table creation permission check: DB_ACCOUNT_CREATE_PRIVILEGE
• Partition table check: DB_PARTITION_TABLE
• Kafka topic existence check: KAFKA_TOPIC
• RocketMQ topic existence check: ROCKETMQ_TOPIC
• DataHub schema consistency check: DATAHUB_SCHEMA
• Check that the DataHub topic for schema synchronization does not exist: DATAHUB_TOPIC_NOT_EXIST
• Logical table account full read permission check: LOGIC_DB_ACCOUNT_FULL_READ_PRIVILEGE
• Logical table account incremental read permission check: LOGIC_DB_ACCOUNT_INCR_READ_PRIVILEGE
• Logical table existence check: LOGIC_TABLE_EXIST
• Check whether the source and target are the same: LOGIC_TABLE_SAME_SOURCE_AND_DEST
• Database clock synchronization check: DB_TIME_SYNC
• Check on the number of data packets: DB_MAX_ALLOWED_PACKET
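
For example, assume you have confirmed that the incremental log check and the partition table check have no impact on the database service for your project. Using the enumeration names above, the precheck.skippable_flags parameter could then be set to a value such as the following (an illustrative sketch, not a recommendation):

{
  "DB_INCR_LOG": true,
  "DB_PARTITION_TABLE": true
}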
