OceanBase Migration Service

V3.4.0 Enterprise Edition


Parameters of an Oracle store

Last Updated: 2026-04-14 07:36:28

Notice:

  • All parameters of a store are in the deliver2store section and are prefixed with logminer. unless otherwise specified. For example, logminer.oracle.url corresponds to oracle.url.

  • If the Logminer Reader plug-in is used, you do not need to be concerned with the section in which the parameters are located, nor do you need to prefix the parameters with logminer..
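
To make the prefixing rule concrete, here is a minimal sketch of a store configuration fragment. It assumes a flat, properties-style layout in which the section name appears as a key prefix; the host, user, and timestamp values are placeholders, not taken from a real deployment.

```properties
# Illustrative store configuration fragment (layout and values are assumptions).
# Store parameters sit in the deliver2store section and carry the logminer. prefix:
deliver2store.logminer.oracle.url=10.0.0.1:1521
deliver2store.logminer.oracle.user=log_reader
deliver2store.logminer.oracle.password=******
deliver2store.logminer.fetch_log_size=10000
# Exception: master.timestamp is not prefixed with logminer.
# (unit: seconds for a store, milliseconds for the Logminer Reader plug-in).
deliver2store.master.timestamp=1712800000
```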

Each parameter is listed below with whether it is required, its default value, and its description.

  • back_query_retry_times (Required: No; Default value: 1): The number of retries allowed when no record is found in a flashback query.
  • connect_task_queue_size (Required: No; Default value: 20000): The size of the queue in the connector task.
  • connect_timeout_ms (Required: No; Default value: 30000): The timeout period for connecting to the Oracle database, in milliseconds.
  • convert_to_source_record_thread_num (Required: No; Default value: 6): The number of concurrent converters used when incremental records are converted.
  • enable_check_ddl_cause_row_move (Required: No; Default value: true): Specifies whether to check whether DDL statements have caused row movement.
  • enable_flashback_query (Required: No; Default value: true): Specifies whether to enable the flashback query feature. If this feature is enabled, the value at a historical point in time is queried based on the system change number (SCN) of the log. Otherwise, the current value is queried.
  • enable_nopk_table_update_change_to_delete_plus_insert (Required: No; Default value: true): Specifies whether to convert an UPDATE statement into a DELETE statement plus an INSERT statement for a table without a primary key.
  • enable_replace_null (Required: No; Default value: true): Specifies whether to replace a null value with the value specified by replace_null_string for columns with a NOT NULL constraint. This parameter is used together with replace_null_string.
  • enable_skip_revise_valid_rowid_exception (Required: No; Default value: false): Specifies whether to skip exceptions caused by row ID correction.
  • enable_stage_queue_compression (Required: No; Default value: false): Specifies whether to compress prefetched archive files that are temporarily stored on the local disk. We recommend that you set this parameter to true: if the archive files are not compressed, they are almost the same size as the archived logs and occupy a large amount of disk space.
  • fetch_arch_logs_max_parallelism_per_instance (Required: No; Default value: 1): The maximum number of archived logs that can be fetched concurrently from one instance (thread). When the value is greater than 1, archived logs are prefetched to improve fetch performance. Prefetched archived logs are temporarily stored on the local disk.
  • fetch_log_size (Required: No; Default value: 10000): The value of setFetchSize in PreparedStatement for fetching logs. The value affects the fetch speed.
  • fetch_online_log_interval_seconds (Required: No; Default value: 5): The interval for analyzing online logs, in seconds.
  • full_table_name_black_list (Required: Yes, nullable; Default value: None): The blacklist. Table names in the list must be full paths. The two-segment format database.table and the three-segment format tenant.database.table are both supported, but the chosen format must be used consistently. Each segment can be a full name or an asterisk (*); regular expressions are not supported. Separate multiple tables with vertical bars (|). The whitelist allows *.*, but the blacklist does not. If the first segment is an asterisk (*), the rest must be .*; for example, *.table1 is not allowed.
  • full_table_name_white_list (Required: Yes; Default value: None): The whitelist. Table names in the list must be full paths. The two-segment format database.table and the three-segment format tenant.database.table are both supported, but the chosen format must be used consistently. Each segment can be a full name or an asterisk (*); regular expressions are not supported. Separate multiple tables with vertical bars (|). The whitelist allows *.*, but the blacklist does not. If the first segment is an asterisk (*), the rest must be .*; for example, *.table1 is not allowed.
  • ignore_error_code (Required: No; Default value: Empty): The Oracle error codes to be ignored during a record query. Separate multiple error codes with vertical bars (|).
  • init_connection_size (Required: No; Default value: 10): The number of initial connections.
  • load_dictionary_thread_num (Required: No; Default value: 16): The number of concurrent threads for obtaining metadata during startup. Each schema corresponds to one task.
  • log_after_select_queue_size (Required: No; Default value: 20000): The size of the queue used after the flashback query phase is completed. This parameter is used to check for bottlenecks in the pipeline.
  • log_aggregator_queue_size (Required: No; Default value: 20000): The queue size in the aggregation phase.
  • log_analyse_queue_size (Required: No; Default value: 20000): The queue size in the analysis phase.
  • log_converter_queue_size (Required: No; Default value: 20000): The queue size in the conversion phase.
  • log_entry_queue_size (Required: No; Default value: 20000): The queue size in the fetch phase.
  • master.timestamp (Required: Yes; Default value: None): The start timestamp for pulling logs. In a store, the unit is seconds, and this parameter is not prefixed with logminer.. If the Logminer Reader plug-in is used, the unit is milliseconds. The value is used as the pull start timestamp for a fresh start or a restart. When the Logminer Reader plug-in consumes incremental data, the commit timestamp of the last consumed transaction must be checkpointed, and the value of the last checkpoint is passed in during recovery after a restart.
  • max_actively_staged_arch_logs_per_instance (Required: No; Default value: Integer.MAX_VALUE): The maximum number of archive files fetched in parallel from one instance (thread) that can be temporarily stored on the local disk. When the specified value is reached, prefetching is paused. This parameter is used to control disk space usage. The actual number of temporarily stored files is subject to Math.max(fetch_arch_logs_max_parallelism_per_instance, max_actively_staged_arch_logs_per_instance).
  • max_connection_size (Required: No; Default value: None, nullable): The maximum number of connections. If this parameter is empty, the value is automatically calculated as max(load_dictionary_thread_num, selector_thread_num) + 4.
  • max_num_in_memory_one_transaction (Required: No; Default value: 1000): The maximum number of log records of a transaction that can be held in memory. If the specified value is exceeded, the log records are temporarily stored on the disk. This parameter is used to handle large transactions.
  • max_wait_ms (Required: No; Default value: 180000): The maximum time, in milliseconds, for which the Oracle database waits for the connection to the destination database to succeed.
  • only_fetch_archived_log (Required: No; Default value: false): Specifies whether to pull only archived logs.
  • oracle.password (Required: Yes; Default value: None): The password of the account used for fetching logs.
  • oracle.url (Required: Yes; Default value: None): The ip:port or service_name of the Oracle database from which logs are fetched. For a pluggable database (PDB), enter the service_name of the PDB.
  • oracle.user (Required: Yes; Default value: None): The username of the account used for fetching logs. For a pluggable database (PDB), enter the username of a regular user.
  • output_rowid (Required: No; Default value: false): Specifies whether to output the row ID. For tables without primary keys, you can deduplicate records based on the obtained row IDs.
  • print_data (Required: No; Default value: false): Specifies whether to print pulled logs for troubleshooting.
  • read_timeout_ms (Required: No; Default value: 180000): The read timeout period of Oracle connections, in milliseconds.
  • read_timeout_retry_times (Required: No; Default value: 10): The number of retries upon query exceptions in the Oracle database.
  • replace_invalid_date (Required: No; Default value: false): Specifies whether to replace an unparseable DATE value with the time when the log was generated, so that the Logminer Reader plug-in can keep running.
  • replace_null_string (Required: No; Default value: " "): The value used to replace a null value. The default value is a space.
  • replace_rowids (Required: No; Default value: Empty): The row IDs to replace and their replacement values, in the format of before1:after1|before2:after2.
  • selector_thread_num (Required: No; Default value: 32): The number of flashback query threads. Flashback queries are required for INSERT LOB and UPDATE statements, so you can increase the value of this parameter to speed up their processing. Each flashback query thread requires one connection.
  • session_timezone (Required: No; Default value: Asia/Shanghai): The time zone based on which strings are output for the TIMESTAMP WITH LOCAL TIME ZONE type. The value of this parameter must match that of the timezone parameter of the JDBCWriter.
  • skip_records (Required: No; Default value: Empty): The strings that identify records to skip: records containing any of the specified strings are skipped. Separate multiple strings with vertical bars (|).
  • skip_transactions (Required: No; Default value: Empty): The IDs of transactions to skip. Separate multiple transaction IDs with vertical bars (|).
  • sql_in_clause_max_parameter_num (Required: No; Default value: 200): The maximum number of tables that can be queried when the IN clause is used to query metadata.
  • stage_queue_directory (Required: No; Default value: the current working path of the process): The directory where prefetched archive files are temporarily stored.
  • start_timestamp_backoff_seconds (Required: No; Default value: 300): The backoff period, in seconds, from the breakpoint time when the fetch starts.
  • use_independent_fetcher_per_instance (Required: No; Default value: false): Specifies whether to use an independent fetcher module to fetch the transaction logs generated by each instance (thread) in the database.
    • If you set the value to false, the previous combined fetch structure of OceanBase Migration Service (OMS) V1.x is used, which supports Real Application Clusters (RACs) consisting of at most three instances.
    • If you set the value to true, the new independent fetch structure is used, which supports RACs consisting of any number of instances as well as non-RAC structures consisting of a single instance.
    • fetch_arch_logs_max_parallelism_per_instance, max_actively_staged_arch_logs_per_instance, enable_stage_queue_compression, and stage_queue_directory are valid only when this parameter is set to true.
  • use_system_exit (Required: No; Default value: true): Specifies whether to use the System.exit method.
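
Because the whitelist and blacklist rules pack several constraints into one value, a concrete sketch may help. The schema and table names below are made up, and the key layout follows the same properties-style assumption as the earlier fragment:

```properties
# Hypothetical whitelist: two-segment format used consistently,
# each segment either a full name or *:
deliver2store.logminer.full_table_name_white_list=APP.*|HR.EMPLOYEES|HR.DEPARTMENTS
# Values the rules above reject:
#   APP.*|T1.HR.EMPLOYEES  -> mixes the two- and three-segment formats
#   *.table1               -> if the first segment is *, the rest must be .*
#   *.* (in the blacklist) -> *.* is allowed in the whitelist only
```

On a related note, the max_connection_size formula can be checked against the defaults above: with load_dictionary_thread_num = 16 and selector_thread_num = 32, leaving the parameter empty yields max(16, 32) + 4 = 36 connections.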

Previous topic: O&M operations for the Store component

Next topic: Parameters of a DB2 store