OceanBase Migration Service V4.0.0 Community Edition

Checker parameters

Last Updated: 2023-06-25 03:29:51
Parameter Required Default value Description
datasource.binarypk.permit No true Specifies whether the primary key can be a binary value. The value true indicates that the primary key can be a binary value. The value false indicates that the primary key cannot be a binary value.
datasource.char.trim No false Specifies whether to delete trailing spaces of Oracle char data. The value true indicates to delete the trailing spaces. The value false indicates not to delete the trailing spaces.
datasource.image.address Yes None The address of the destination database. The address format varies with the databases.
  • MySQL database and MySQL tenant in OceanBase Database: ip:port
  • Oracle database: ip:port/servicename
  • Oracle tenant in OceanBase Database: ip:port
  • DB2 database: ip:port/db
datasource.image.charset.map No {"gb18030":"gbk","gbk":"gbk","utf16":"utf16","default":"utf8"} The character set mapped from the destination OceanBase database.
datasource.image.index.ignore No false Specifies whether to directly pull the index with the lowest score in the source table. If you set this parameter to true, the index with the lowest score in the source table will be directly pulled if no new index can be matched. In this case, duplicate data may exist if the destination table has no primary key or unique key.
datasource.image.insert.error.ignore No false Specifies whether to ignore data insertion errors to avoid interruption. You can handle all the errors after the task is completed. The error information is recorded in the insertErrorIgnoreerror.log file. This log file is stored in the logs directory.
datasource.image.password Yes None The password for accessing the destination database.
datasource.image.table.notexists.ignore No false Specifies whether to ignore error messages indicating that the destination table is not found. The value "false" indicates not to ignore such error messages. The value "true" indicates to ignore such error messages.
datasource.image.table.empty.check No true Specifies whether to check whether the destination table is empty. This parameter applies only to data migration scenarios.
  • If this parameter is set to true and task.resume is set to false, the system reports an error if the destination table is not empty.
  • When you set this parameter to false, the system does not check whether the destination table is empty.
datasource.image.type Yes None The type of the destination database. Valid values: OB_IN_ORACLE_MODE and OB10. OB10 represents OceanBase Database in MySQL Mode.
datasource.image.username Yes None The username for accessing the destination database.
datasource.master.address Yes None The address of the source database. The address format varies with the data sources.
datasource.master.password Yes None The password for accessing the source database.
datasource.master.systenant.password No Same as that of datasource.master.password The password for accessing the SYS tenant.
datasource.master.systenant.username No Same as that of datasource.master.username The username for accessing the SYS tenant, for example, root@sys#ob_1008810671.admin.
datasource.master.type Yes None The type of the source database. Valid values: ORACLE, MYSQL, DB2, SYBASE, OB_IN_ORACLE_MODE, OB10, and OB05.
datasource.master.username Yes None The username for accessing the source database.
datasource.nchar.charset.map No {"AL16UTF16":"UTF16"} The character set mapped from NLS_NCHAR_CHARACTERSET in Oracle database to Java.
datasource.ob.splitor.bymarcroinfo No false The splitting strategy used by OceanBase Migration Service (OMS). This parameter is required when the source database is an OceanBase database containing a primary key table.
  • true: Sharding is performed based on system tables of OceanBase Database.
  • false: Sharding is performed based on normal indexes.
datasource.read.mod No stream The mode for reading data for migration and verification.
  • stream: reads data in streaming mode.
  • batch: reads data in batches.
datasource.sybase.charset No utf-8 The character set for the Sybase database.
datasource.sybase.metadata.uppercase No true Specifies whether table names are case sensitive. The default value is true. Set this parameter to false if the destination database is a MySQL tenant in OceanBase Database. Retain the default value in other cases. By default, table names are in lowercase for MySQL tenants in OceanBase Database, and are in uppercase for Oracle tenants in OceanBase Database.
datasource.timezone No +00:00 The time zone.
filter.master.blacklist Yes None (nullable) The blacklists. Multiple blacklists are separated by vertical lines (|). Each blacklist is in the format of schema;tablename;column. Each section is a regular expression, and the wildcard * is equivalent to .*. Sample values:
^db$;^table1$;.*|^db$;^table2$;.* (two specific tables) and .*;.*;.* (all objects)
filter.master.whitelist Yes None (nullable) The whitelists. Multiple whitelists are separated by vertical lines (|). Each whitelist is in the format of schema;tablename;column. Each section is a regular expression, and the wildcard * is equivalent to .* (see the sample configuration after this table). Sample values:
^db$;^table1$;.*|^db$;^table2$;.* (two specific tables) and .*;.*;.* (all objects)
The following example represents that the OMS_OBJECT_NUMBER, OMS_RELATIVE_FNO, OMS_BLOCK_NUMBER, and OMS_ROW_NUMBER columns are included: ^OBDBA$;^ROWID_TEST$;^(?!OMS_OBJECT_NUMBER)(?!OMS_RELATIVE_FNO)(?!OMS_BLOCK_NUMBER)(?!OMS_ROW_NUMBER).*$
filter.verify.inmod.keys No 100 The maximum number of data records that can be queried by using a primary key or a unique key in a batch in the destination database.
filter.verify.inmod.tables No "" The tables that need to be verified by using the IN mode. If this parameter is not specified, the prefix indexes are verified by using the IN mode by default. If this parameter is specified, only matched tables are verified by using the IN mode. The value is in the same format as a blacklist or whitelist, for example, ^sqltest$;^prefix_index_test_bigdata$;.*. In IN-mode verification, data is queried from the source database by using the sharding column, and the destination database parses the primary key/unique key from the data queried from the source database, queries data in itself by using the key in(...) statement and the primary key/unique key, and compares the data.
Notice:
The IN-mode verification method has lower efficiency than the default comparison method and is inapplicable if the destination database has a large amount of data.
filter.verify.inmod.workers No 1 The number of concurrent IN queries in the destination database.
filter.verify.rectify.type No no Specifies whether to correct data in the destination database.
  • no: Data is not to be corrected. This is the default value.
  • now: Data is corrected during verification. In this case, the setting of the maximum number of inconsistent records allowed will be ignored.
  • review: Data is corrected after several rechecks after the verification is completed. In this case, the setting of the maximum inconsistencies allowed applies.
The corrected SQL files are saved in verify/{subId}/{schema}/rectify/suc/{table}.sql. The SQL files failed to be corrected are saved in verify/{subId}/{schema}/rectify/err/{table}.sql.
force.split.by.rowid No false If the source database has a hidden primary key, you can set this parameter to true so that the hidden column is forcibly used as the primary key during data migration and verification. For example, the Oracle database has the hidden primary key ROWID, and OceanBase Database has the hidden primary key __pk_increment. If you set this parameter to false, the primary key or unique key of the table is used as the primary key during data migration and verification. You can set this parameter to true only when the source database is in ORACLE, OB_IN_ORACLE_MODE, or OB10 mode. You must set this parameter to false for other database types.
limitator.datasource.connections.max No 50 The maximum size of the database connection pool. The setting of this parameter applies to both the source and destination databases. For example, if you set this parameter to 100, the maximum number of connections is 100 for both the source and destination databases. The value must be greater than 0 and greater than the maximum number of concurrent requests. The specific value is subject to the relationship between the number of concurrent requests and the number of connections.
limitator.datasource.image.ob10freecpu.min No 30 The CPU protection threshold for OceanBase Database, used to prevent CPU resource exhaustion. The default value is 30, indicating that new connections are no longer obtained when the idle CPU percentage of OceanBase Database falls below 30%. If you set this parameter to 0, CPU exhaustion prevention is disabled. This parameter has a lower priority than the limitator.datasource.image.ob10freememory.min parameter, which prevents memory resource exhaustion: it is ignored while data writes are suspended because the limitator.datasource.image.ob10freememory.min threshold has been triggered, and it takes effect only when that threshold is not reached or when the database is the source database, to which limitator.datasource.image.ob10freememory.min does not apply.
limitator.datasource.image.ob10freememory.min No 20 The memory protection threshold for OceanBase Database. Value range: 10 to 100. This value is the percentage of idle memory space of OceanBase Database that triggers data write suspension. For example, the value 30 indicates that when the percentage of idle memory space is less than 30%, data writes are suspended, and the program keeps waiting until the percentage of idle memory space exceeds 30%. This parameter takes effect only on an OceanBase database that serves as the destination of a data migration link.
limitator.db2.graphic.rtirm No false Specifies whether to delete spaces when data of the graphic type is read from the DB2 database.
limitator.empty.table.select.parallel No 16 The degree of parallelism in the hint used when the Oracle database checks whether a table is empty: SELECT /*+parallel(%d)*/1 FROM %s WHERE ROWNUM<2
limitator.image.insert.batch.max No 500 The maximum number of records inserted into the destination database that triggers a commit. For example, the value 200 indicates that a maximum of 200 records can be inserted before a commit must be performed.
limitator.java.opt No None The runtime Java virtual machine (JVM) parameters. This parameter is checked when the checker script /home/ds/bin/checker_new.sh is started. If this parameter is specified, its value is used when the checker script is started.
limitator.noneed.retry.exception No None Specifies whether to directly exit SQL statement execution when an exception that does not allow retries, for example, a "table not found" exception, is encountered.
limitator.null.replace.enable No true Specifies whether to replace null values read by the JDBCWriter when the value of the gb18030 field is any one of chr(129) to chr(254) and a not-null constraint is present in the Oracle database. If you set this parameter to true, the null values are replaced with the value specified by LIMITATOR_NULL_REPLACE_STRING.
limitator.null.replace.string No Space The string used to replace null values. This parameter is used together with the limitator.null.replace.enable parameter.
limitator.oceanbase.index.useuk No true Specifies whether to use a unique key when no primary key is available and the source database is OceanBase Database.
limitator.oom.avoid No false Specifies whether to enable out-of-memory (OOM) prevention. After OOM prevention is enabled, the system measures and records the actual memory size. This may affect the system performance. To enable OOM prevention, you must set useCursorFetch to true for setFetchSize to take effect.
  • If a timestamp value 0000-00-00 00:00:00 exists, an error will be reported, because such a value conflicts with OOM prevention.
  • To use the PS protocol in a MySQL tenant in OceanBase Database, you must set useServerPrepStmts to true to avoid invalid FLOAT values. If a timestamp value 0000-00-00 00:00:00 exists, an error will be reported.
For the preceding problems, this parameter is used to determine whether to set useCursorFetch and useServerPrepStmts. Once OOM prevention is enabled, ensure that the invalid time value 0000-00-00 00:00:00 and FLOAT values beyond the precision range do not exist.
limitator.platform.split.threads.number No limitator.platform.threads.number/8<8?8:limitator.platform.threads.number/8 The number of threads in the thread pool used for task splitting. The minimum value is 8. The value of limitator.datasource.connections.max must be at least the sum of the values of limitator.platform.threads.number and limitator.platform.split.threads.number.
limitator.platform.threads.number No 3 The maximum size of the worker thread pool during migration and verification. This parameter is used together with limitator.datasource.connections.max. Generally, the number of connections must be greater than the maximum size of the worker thread pool. Otherwise, some worker threads must wait for connections.
limitator.prefix.index.action No 1 The handling method that is used when the primary key is a prefix index in a link for migrating data from a MySQL database to a MySQL tenant in OceanBase Database. Default value: 2.
  • 0: Follow the original sharding method.
  • 1: Pull the entire table without sharding.
  • 2: Select another index for sharding. If no other index is available, the table is pulled as a whole.
limitator.prepared.splitors No 1000 Task splitting suspends when the number of split tasks minus the number of running tasks is greater than the specified value of this parameter, which means that many split tasks are waiting in the queue.
limitator.queue.size No limitator.select.batch.max*4 Migration: The size of the cache queue in which the data read from the source database is stored. Verification: The size of the cache queue and JOIN cache queue in which the data read from the source and destination databases is stored. The actual queue size is the value of this parameter multiplied by 2. Default value: limitator.select.batch.max*4.
limitator.query.withorder No true Specifies whether to sort queries. The default value is true, which means that the queries are to be sorted. This parameter is implemented only for MySQL databases and MySQL tenants in OceanBase Database.
limitator.resume.verify.fromkeys No false Specifies whether to reverify only the records that are found inconsistent in the last verification. This parameter takes effect only when the following parameters are set to the specified values:
  • limitator.resume.verify.fromkeys=true
  • task.resume=true
  • task.type=verify
limitator.reviewer.period No 3 The review interval, in seconds.
limitator.reviewer.review.batch.max No 100 The number of keys queried in a review.
limitator.reviewer.rounds.max No 20 The maximum number of reviews allowed in a verification process. For inconsistent data found in verification, the review process queries these keys in the source and destination databases and performs comparison multiple times. This parameter specifies the maximum number of times of comparison allowed.
limitator.reviewer.time.max No 60 The maximum review time, in seconds.
limitator.select.batch.max No 3000 The maximum number of records read from the source database in a batch. This parameter affects stmt.setFetchSize(fetchSize) and specifies the number of records migrated or verified in each batch during primary key-based migration and verification. When an Oracle ROWID is used as the primary key for migration, this parameter is used to calculate the size of each block. The formulas are as follows: To-be-split data volume = Field length of the table × limitator.select.batch.max; Block size = To-be-split data volume/8 KB. This parameter also determines the size of the queue read from the source database: queueSize = limitator.select.batch.max*4.
limitator.splitor.blocks No 128 The number of blocks of each shard. This parameter is valid only when tables without primary keys in the Oracle database are migrated.
limitator.splitor.block.number.max No Long.MAX_VALUE The maximum number of blocks of a shard. When this value is exceeded, data is split based on data files.
limitator.splitor.compare.threads.number No 1 The number of comparison threads during the verification of a single shard. This parameter is valid only for the verification process.
limitator.split.usecondition No false Specifies whether to use conditions in the SQL statements for querying data by using the sharding column.
limitator.splitor.writer.number No 1 The number of tasks written to the destination database by using each sharding column. This parameter is valid only for data migration projects.
limitator.sql.exec.max.last.time No 3600 The maximum SQL statement retry time, in seconds.
limitator.table.diff.max No 1000000 The maximum number of inconsistent records found during verification. If this value is exceeded, the verification ends and the review is not performed.
limitator.table.nonunique.max No 10000 The maximum number of records that can be migrated without indexes.
limitator.verify.many2one No false Specifies whether to enable the many-to-one table verification mode. In this mode, the verification is successful if the data in the source table is found in the destination table. The value "true" specifies to enable the many-to-one table verification mode.
mapper.from_master_to_image.list No None The mapping of schemas from the source database to the destination database. The value is usually in the format of sourceSchema;*;*=destSchema;*;*, with a maximum of four sections. Multiple mappings are separated with vertical lines (|).
rectifier.image.enable No false Specifies whether to automatically correct data in the destination table. This parameter is used in the rectification process. Generally, we recommend that you do not set the value to true.
rectifier.image.operator.delete No false Specifies whether to correct deleted data. This parameter is used in the rectification process. Generally, we recommend that you do not set the value to true.
rectifier.image.operator.insert No false Specifies whether to correct inserted data. This parameter is used in the rectification process. Generally, we recommend that you do not set the value to true.
rectifier.image.operator.update No false Specifies whether to correct updated data. This parameter is used in the rectification process. Generally, we recommend that you do not set the value to true.
sampler.verify.ratio No 100 The percentage of records sampled for comparison. The value must be greater than 0 and less than or equal to 100.
src.record.filter.mapping No None A GroovyRule configuration, which is required when task.split.mode is set to true.
task.resume No false Set the value to "false" if the task is run for the first time and to "true" if the task is run for recheck.
src.table.whitelist No None A GroovyRule configuration, which is required when task.split.mode is set to true.
task.active.active No false The active-active flag. The value "true" indicates an active-active link.
task.id Yes None The unique ID of the checker, which is related to the runtime directory.
task.subId Yes None The sub-task ID. The initial value is 1 and the value increments by 1 each time a recheck is performed. A directory is created for each sub ID in the task directory to record the corresponding running result files: /home/ds/run/{taskname}/{task.type}/{task.subId}
task.type Yes None The task type. Valid values: migrate or verify.
weak.consistency.read No false Specifies whether to enable weak consistency read. This parameter is valid in OceanBase Database. The value true specifies to enable weak consistency read (set @@ob_read_consistency='weak').
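
For reference, the following is a minimal sketch of how several of the parameters above might be combined in a checker configuration that verifies data migrated from a MySQL database to a MySQL tenant of OceanBase Database. It is not a complete or authoritative template: the configuration file name and location depend on your deployment (the checker is started by /home/ds/bin/checker_new.sh, as noted for limitator.java.opt), and the addresses, credentials, and schema names below are placeholders.

# Task identity: a unique checker ID, the initial sub ID, and the task type.
task.id=mysql_to_ob_verify_01
task.subId=1
task.type=verify
task.resume=false

# Source database: MySQL.
datasource.master.type=MYSQL
datasource.master.address=10.0.0.1:3306
datasource.master.username=oms_user
datasource.master.password=******

# Destination database: a MySQL tenant of OceanBase Database (type OB10).
datasource.image.type=OB10
datasource.image.address=10.0.0.2:2883
datasource.image.username=oms_user@mysql_tenant
datasource.image.password=******

# Verify two tables in the db schema; the blacklist is left empty (nullable).
filter.master.whitelist=^db$;^table1$;.*|^db$;^table2$;.*
filter.master.blacklist=

# Resource limits: connection pool, worker threads, and read batch size (defaults shown).
limitator.datasource.connections.max=50
limitator.platform.threads.number=3
limitator.select.batch.max=3000

How OMS generates this file when it launches a checker, and whether credentials may be stored in plaintext, are outside the scope of this page; treat the sketch only as an illustration of how the parameter names and value formats fit together.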
