Oracle store tuning

Last Updated: 2025-10-09 03:34:24
What is on this page
Check whether the Oracle store frequently performs FGC
View FGC information in the connector.log file
View real-time GC information by using the jstat command
Adjust the JVM memory of Oracle stores
Adjust the JVM memory of the specified Oracle store
Adjust the JVM memory of all Oracle stores
Upgrade components related to the Oracle store
Performance tuning for Oracle store
Determine the tuning method based on the Oracle store metrics
Adjust Oracle store parameters


This topic describes the performance-related parameters of the Oracle store and how to troubleshoot performance issues.

Check whether the Oracle store frequently performs FGC

If the Oracle store checkpoint advances slowly, first check whether the Oracle store is frequently performing full garbage collection (FGC). Frequent FGC slows down the processing of the Oracle store.

View FGC information in the connector.log file

Search for gc.G1-Old-Generation.count in the connector.log file. If the value keeps increasing, the process is performing frequent FGC, and you must increase the JVM memory.
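
For example, you can check the most recent samples with grep (a minimal sketch; the connector.log path depends on the log directory of the target store):

grep 'gc.G1-Old-Generation.count' connector.log | tail -n 5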

View real-time GC information by using the jstat command

Log in to the container where OMS is located. Run the following command to view the real-time GC information of the corresponding Oracle store:

jstat -gcutil `ps -ef | grep storeXXXX | grep java | awk '{print $2}'` 3000

Note

Before you run the preceding command, replace storeXXXX with the name of the target Oracle store, for example, store7100.

After this command is executed, real-time GC information is printed every 3 seconds. Pay attention to the FGC column (the number of full garbage collections since the process started) and the FGCT column (the cumulative full garbage collection time). If the FGC count increases constantly, or the FGC time exceeds 10% of the process runtime, increase the JVM memory of the Oracle store.
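
The jstat -gcutil output has one row per sample; the values below are invented for illustration. FGC is the full GC count and FGCT is the cumulative full GC time in seconds:

  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0.00  95.42  61.37  88.96  96.10  93.23   1325   48.913   412  701.562  750.475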

Adjust the JVM memory of Oracle stores

Adjust the JVM memory of the specified Oracle store

Perform the following steps to adjust the JVM memory of the specified Oracle store:

  1. In the OMS console, choose O&M and Monitoring > Component > Store. Stop the target store.

  2. Go to the /home/ds/store/storeXXXX/kafka/bin directory on the OMS server and modify the JVM memory.

    Change the value of KAFKA_HEAP_OPTS in the connect-drcdeliver.sh file, for example, to -Xms32g -Xmx32g -Xmn8g. The ratio of the three values (Xms:Xmx:Xmn) is generally 4:4:1; adjust them based on the memory of the server. See the sketch after these steps.

  3. In the OMS console, choose O&M and Monitoring > Component > Store. Start the target store.
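
As a sketch, the edited line in connect-drcdeliver.sh would look like the following (the exact assignment syntax may differ between OMS versions):

export KAFKA_HEAP_OPTS="-Xms32g -Xmx32g -Xmn8g"   # Xms:Xmx:Xmn = 4:4:1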

Adjust the JVM memory of all Oracle stores

Perform the following steps to adjust the JVM memory of all Oracle stores:

  1. Go to the /home/ds/kafka/bin directory on the OMS server and modify the JVM memory.

    Change the value of KAFKA_HEAP_OPTS in the connect-drcdeliver.sh file, for example, to -Xms32g -Xmx32g -Xmn8g, as shown in the sketch above, keeping the Xms:Xmx:Xmn ratio of 4:4:1 and adjusting the values based on the memory of the server.

  2. Oracle stores started after this change apply the new JVM configuration.

Upgrade components related to the Oracle store

If the Oracle database to be migrated generates more than 500 GB of archived data per day in total (summed across all instances of a RAC), or the log generation speed on a single Oracle instance exceeds 6 MB/s in peak hours for a sustained period while low Oracle store latency is required, you must upgrade the components related to the Oracle store. For detailed version information, contact Technical Support.

Performance tuning for Oracle store

You can adjust the performance-related parameters of the Oracle store in advance based on the amount of archived data. Example:

alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

-- The log amount per hour in the last 3 days
select thread#, logtime, count(*),
       round(sum(blocks * block_size) / 1024 / 1024 / 1024) gbsize
  from (select a.thread#,
               trunc(a.first_time, 'hh') as logtime,
               a.blocks,
               a.block_size
          from v$archived_log a
         where a.dest_id = 1
           and a.first_time > trunc(sysdate - 2))
 group by thread#, logtime
 order by thread#, logtime;

-- The log amount per day in the last 3 days
select thread#, logtime, count(*),
       round(sum(blocks * block_size) / 1024 / 1024 / 1024) gbsize
  from (select a.thread#,
               trunc(a.first_time, 'dd') as logtime,
               a.blocks,
               a.block_size
          from v$archived_log a
         where a.dest_id = 1
           and a.first_time > trunc(sysdate - 2))
 group by thread#, logtime
 order by thread#, logtime;

You can execute the preceding SQL statements to query the amount of archived data generated per hour and per day in the Oracle database over the last 3 days. In a Real Application Clusters (RAC) database, THREAD# takes a different value for each instance, so identify the nodes separately. To change the query range, modify sysdate - 2 in the SQL statements.
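
As a usage sketch, you can save the queries to a file and run them with sqlplus from a host that can reach the Oracle database (the connection string and file name here are hypothetical examples):

sqlplus -s system/****@//oracle-host:1521/ORCL @archived_log_volume.sql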

  • If the Oracle database contains multiple RAC instances, instruct the customer to configure load balancing across the Oracle instances. This balances the log volume on the instances and speeds up the processing of the Oracle store.

  • If the size of a single archived file in the Oracle database is far greater than 2 GB, instruct the customer to reduce the size of each archived file to 1 to 2 GB. This speeds up the processing of the Oracle store and reduces its memory consumption.

  • If the amount of archived data on a single Oracle instance exceeds 500 GB per day, or the log generation speed on a single Oracle instance exceeds 6 MB/s in peak hours for a sustained period while low Oracle store latency is required, adjust parameters such as the degree of parallelism (DOP) for log pulling based on the amount of archived data. By default, concurrent log pulling is disabled.

Set the DOP for log pulling on the assumption that each concurrent operation can handle 500 GB of logs per instance per day. Also pay attention to the configuration and load of the Oracle database: each concurrent log-pulling operation consumes N logical cores in the Oracle database, where N is the number of Oracle instances. The higher the log-pulling concurrency, the higher the overhead on the Oracle database. If the concurrency exceeds the processing capability of the Oracle database, database performance decreases, data writes may be affected, and the Oracle service may even be interrupted by highly concurrent LogMiner tasks. If multiple links are created for one Oracle database and multiple stores are created for each link, plan the log-pulling concurrency and the resulting Oracle database load across all of these links and stores together. Adjust the following parameters:

  • Adjust the DOP for log pulling: set deliver2store.logminer.fetch_arch_logs_max_parallelism_per_instance to (daily log volume of the instance with the largest volume / 500 GB) + 1. The default value is 1.

  • Adjust the number of archived files saved for each Oracle instance after log pulling: set deliver2store.logminer.max_actively_staged_arch_logs_per_instance to twice the value of deliver2store.logminer.fetch_arch_logs_max_parallelism_per_instance.

  • Set deliver2store.logminer.staged_arch_logs_in_memory, which indicates whether pulled logs are kept in memory, to true. The default value is false, which means pulled logs are saved to disk. When this parameter is set to true, you must also adjust the JVM memory: JVM memory = size of a single archived file × log amplification factor (4) × number of Oracle instances × number of archived files saved for a single instance + 8 GB. If the log volumes on the Oracle instances are severely unbalanced, use the number of archived files saved for the instance with the largest log volume in this calculation. Two examples (see the parameter sketch after this list):

    An Oracle RAC contains two instances, the size of a single archived file is 1 GB, logs are balanced between the two instances, and each instance generates 1.25 TB of logs per day (2.5 TB per day for the two instances). You can set the DOP for log pulling to 3, the number of archived files saved for a single instance to 6, and the JVM memory to 1 GB × 4 × 2 × 6 + 8 GB = 56 GB.

    An Oracle RAC contains two instances and the size of a single archived file is 1 GB, but logs are severely unbalanced between the two instances and are mainly generated on one instance, which can generate 2.5 TB of logs per day. Calculate as if there were a single Oracle instance: with the DOP for log pulling set to 6 and the number of archived files saved for a single instance set to 12, set the JVM memory to 1 GB × 4 × 1 × 12 + 8 GB = 56 GB.

  • A larger log volume and a higher DOP result in higher overhead on the Oracle database; accordingly, the memory and CPU overhead of the Oracle store is also higher.
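
Putting the first (balanced) example together, the store parameters would look like the following sketch (values taken from the example above):

deliver2store.logminer.fetch_arch_logs_max_parallelism_per_instance=3
deliver2store.logminer.max_actively_staged_arch_logs_per_instance=6
deliver2store.logminer.staged_arch_logs_in_memory=true

The store's JVM memory would be raised to 56 GB accordingly, for example -Xms56g -Xmx56g -Xmn14g in KAFKA_HEAP_OPTS, following the 4:4:1 ratio described earlier.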

Determine the tuning method based on the Oracle store metrics

An Oracle store writes a metric line to the connector.log file every 10 seconds to record the data accumulation in each queue and help locate processing bottlenecks. You can use the grep command to search for a queue by name and track how its size changes over a given period. The default capacity of each queue is 20000.
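
For example, to review one queue over roughly the last hour (360 samples at one metric line every 10 seconds; the queue names are described below):

grep 'log_analyse_queue_size' connector.log | tail -n 360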

An Oracle store uses the pipeline model for processing, as shown in the following figure.

(Figure: the Oracle store processing pipeline.)

General troubleshooting method: analyze the size of each queue over a period, such as 1 hour. Check the queues from the top of the pipeline down; the first empty queue marks the bottleneck, because the stage that feeds it cannot keep up with the stage that drains it. A queue whose size stays close to 0 for a long time is considered empty.

If the Oracle database is a RAC with multiple instances, the name of each queue is prefixed with instance_${instance_num}_. In this case, analyze the first three queues separately for each instance. For example, run the grep 'instance_1_log_entry_queue_size' connector.log and grep 'instance_2_log_entry_queue_size' connector.log commands separately. After you have analyzed log_entry_queue for all instances and found no bottleneck in this phase, move on to the next phase, log_analyse_queue.

The queues are described below.

  • log_entry_queue_size: the output queue of Oracle logs pulled by using LogMiner.

  • log_analyse_queue_size: the output queue for log pre-parsing.

  • log_aggregator_queue_size: the output queue after transaction aggregation. Rolled-back transactions are excluded; only committed transactions are recorded in this queue.

  • log_prepared_converter_queue_size: the number of pre-converted records taken from log_aggregator_queue. This metric is meaningless for a single-node Oracle cluster. If the Oracle database is a RAC, the records in the log_aggregator_queues of the instances are merged and sorted by the commit sequence in each log stream, and merging proceeds only while the log_aggregator_queue of every log stream has an unprocessed commit record. For example, in a two-node RAC where the log_aggregator_queue_size of one node is full and that of the other node is empty, log_prepared_converter_queue_size will also be empty, because records are merged and sorted only once the other node's queue has commit records. In this case, analyze the bottleneck based on the first three queues of the instance whose log_aggregator_queue_size is empty.

  • log_converter_queue_size: the output queue for log parsing.

  • log_after_select_queue_size: the output queue for flashback queries.

  • connect_task_queue_size: the output queue for LogMiner Reader.

The following list describes, for each queue, the tuning method when the queue is empty and the metrics for observing the tuning effect.

  • log_entry_queue_size

    Tuning method: Possible causes are a heavy load on the Oracle server, which slows down LogMiner, and an unstable network between the OMS server and the Oracle server (or the two servers being in different regions), which slows down data transmission. Search for start loading log entries from log file in the connector.log file and check whether the log file being analyzed is an archived file. If it is, single-process LogMiner has reached its performance bottleneck, and you must configure concurrency for pulling archived files.

    Metrics to observe:
    log_entry_tps: the TPS (actually RPS) of LogMiner entry pulling. You can calculate the pulling throughput based on log_entry_redo_size_avg.
    log_entry_redo_size_avg: the average size of each log entry pulled by using LogMiner, in bytes.
    log entries from: shows the pulling progress of each archived file or redo file.

  • log_analyse_queue_size

    Tuning method: Increase the value of deliver2store.logminer.analyser_thread_num. Default value: 4.

    Metrics to observe:
    log_entry_analyse_tps: the TPS (actually RPS) in the log analysis phase.
    long_analyse_tps: the TPS (actually RPS) of records generated in the log analysis phase. This value is smaller than log_entry_analyse_tps because tables not in the allowlist are filtered out and multiple log entries sometimes form a single record.

  • log_aggregator_queue_size

    Tuning method:
    log_aggregator_local_store_size: the number of uncommitted transactions cached by the Oracle store. The more uncommitted transactions in the Oracle database, the larger this value. A very large value indicates large transactions in the Oracle database. The Oracle store processes a large transaction only after it is committed, which causes latency. In this case, check whether the large transactions can be split so that the Oracle store can process data promptly.
    log_aggregator_in_disk_transaction_num: the number of transactions cached to disk. When the number of records in a single transaction exceeds the value of deliver2store.logminer.max_num_in_memory_one_transaction (1000 by default), the transaction is cached to disk. When large transactions exist and the JVM memory can be increased, you can increase this parameter so that the transactions stay in memory, which improves processing efficiency. You can also check whether the large transactions can be split.

    Metrics to observe: log_aggregator_local_store_size and log_aggregator_in_disk_transaction_num, described above.

  • log_converter_queue_size

    Tuning method: Increase the value of deliver2store.logminer.converter_thread_num. Default value: 8.

    Metrics to observe:
    log_converter_tps: the TPS in the log conversion phase.
    log_converter_rps: the RPS in the log conversion phase.

  • log_after_select_queue_size

    Tuning method: Increase the value of deliver2store.logminer.selector_thread_num. Default value: 32.

    Metrics to observe: log_select_tps: the TPS (actually RPS) in the log flashback query phase.

  • connect_task_queue_size

    Tuning method: Increase the value of deliver2store.logminer.max_landing_message_num (default value: 40000), and adjust the value of deliver2store.logminer.flush_threshold_bytes (default value: 12288).

    Metrics to observe: connect_record_poll_tps: the output TPS (actually RPS) of LogMiner Reader.
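
For example, if log_analyse_queue_size stays near 0 while log_entry_queue_size stays full, the log pre-parsing stage is the bottleneck, and you might double the analyser thread count (an illustrative value; the default is 4):

deliver2store.logminer.analyser_thread_num=8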

Adjust Oracle store parameters

  1. Log in to the OMS console.

  2. In the left-side navigation pane, click Data Migration.

  3. On the Data Migration page, click More > View Component Monitoring for the target task.

  4. In the View Component Monitoring dialog box, click Update next to the target component.

  5. In the Update Configuration dialog box, hover over the target configuration and click the edit icon to update it.
    If the parameter is not listed, hover over the blank space next to the root parameter and click the add icon, and then enter the key name and the corresponding value.


  6. Click OK.
    After the update, the Oracle store automatically restarts for the new configuration to take effect.
