OceanBase Migration Service
V3.4.0 Enterprise Edition

How do I perform performance tuning for Oracle Store?

Last Updated: 2026-04-14 07:36:28

This topic describes the performance metrics of Oracle Store and explains when and how to tune the store based on them.

Metrics in connector.log

Oracle Store writes monitoring entries to connector.log every 10 seconds to record the data backlog in each of its internal queues, which helps you locate processing bottlenecks. You can grep for the following keywords and observe how the values change over a period of time. The default size of each queue is 20000.
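
For example, you can pull these counters out of the log with a quick grep. The following is a minimal sketch that assumes connector.log is in the current working directory; adjust the path to the log directory of your Oracle store.

# Sample the queue backlog counters recorded in connector.log (log path is an assumption).
grep -E "log_entry_queue_size|log_analyse_queue_size|log_aggregator_queue_size|log_converter_queue_size|log_select_queue_size|connect_task_queue_size" connector.log | tail -n 30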


  • log_entry_queue_size: The data obtained from LogMiner is directly pushed to the queue. If the value remains at 0, OMS is bottlenecked when it pulls incremental data from the Oracle database. This may be caused by the following issues:

    • The load on the Oracle server is too high, which slows down LogMiner.

    • The network between the OMS server and the Oracle server is unstable, or the two servers are not in the same region, resulting in slow data transmission.

    • If neither is the case, search for the keyword start loading log entries from log file in the connector.log file to check whether the log file being analyzed is an archive file. If it is, the single LogMiner thread may have reached its performance limit, and you need to configure concurrent LogMiner threads by using the following parameters: deliver2store.logminer.fetch_arch_logs_max_parallelism_per_instance and deliver2store.logminer.max_actively_staged_arch_logs_per_instance. Set deliver2store.logminer.max_actively_staged_arch_logs_per_instance to 2 * deliver2store.logminer.fetch_arch_logs_max_parallelism_per_instance.

    For example, you can set deliver2store.logminer.fetch_arch_logs_max_parallelism_per_instance=3 and deliver2store.logminer.max_actively_staged_arch_logs_per_instance=6. Concurrent pulling takes up more memory, so when you adjust these two parameters, you must also increase the JVM memory of the corresponding Oracle store.

  • log_analyse_queue_size: The length of the data output queue after Oracle Store parses the results returned by LogMiner. If the queue remains empty, OMS is bottlenecked in log parsing.

  • log_aggregator_queue_size: The number of records being assembled. This metric alone does not assist in troubleshooting. It must be used along with the log_converter_queue_size metric.

  • log_converter_queue_size: The length of the output queue of data conversion. If the value of this metric is 0, OMS is bottlenecked on the converter and aggregator. If the value of the log_aggregator_queue_size metric is close to 20000, you need to increase the number of converter threads, which is specified by the deliver2store.logminer.converter_thread_num parameter. The default value is 8.

  • log_select_queue_size: The length of the entry queue of flashback queries. If the queue size is close to 20000, OMS is bottlenecked in flashback queries. In this case, increase the number of flashback query threads, which is specified by the deliver2store.logminer.selector_thread_num parameter. The default value is 32.

  • connect_task_queue_size: The length of the output queue of the LogMiner reader. If the value is 0, no bottleneck exists in the downstream. If the value is not 0, a bottleneck occurs in the Store framework.

  • log_aggregator_local_store_size: The number of uncommitted transactions cached by Oracle Store. A larger value indicates more uncommitted transaction records in Oracle. You can collect values of this metric over a period of time (see the sketch after this list). If the metric stays at a large value and then suddenly drops, a large transaction exists in Oracle. Oracle Store starts processing only after the large transaction is committed, which causes a delay. Check whether the large transaction can be broken into smaller ones from the business perspective, so that Oracle Store can process and output data in time.

  • log_aggregator_in_disk_transaction_num: The number of transactions cached to the disk. When the number of records for a transaction exceeds the value of the deliver2store.logminer.max_num_in_memory_one_transaction metric (default value: 1000), the transaction is cached to the disk. When a large transaction exists and the JVM memory can be increased, you can increase the value of deliver2store.logminer.max_num_in_memory_one_transaction to cache the transaction in memory as much as possible to improve processing efficiency.
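
To spot this pattern in practice, you can follow these two counters as the monitoring entries are written. The following is a minimal sketch; the log path is an assumption, so adjust it to the log directory of your Oracle store.

# Follow the uncommitted-transaction cache size and the number of transactions spilled to disk.
tail -f connector.log | grep --line-buffered -E "log_aggregator_local_store_size|log_aggregator_in_disk_transaction_num"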

Check whether garbage collection occurs in Oracle Store

When processing efficiency is low, check whether garbage collection is occurring in the Oracle store. Frequent garbage collection can slow down data processing in the Oracle store until all processing queues are jammed and the process becomes stuck.

Check garbage collection in connector.log

Search the connector.log file for the gc.G1-Old-Generation.count and gc.G1-Young-Generation.count metrics and check whether their values, especially the value of gc.G1-Old-Generation.count, are continuously increasing. This tells you whether garbage collection is occurring in the process and whether the JVM memory needs to be increased.
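
A minimal sketch of such a check, again assuming connector.log is in the current working directory:

# Extract the G1 garbage collection counters and check whether they keep growing.
grep -E "gc\.G1-(Old|Young)-Generation\.count" connector.log | tail -n 20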

Use the jstat command to check garbage collection in real time

Log on to the OMS container and run the following command to view real-time garbage collection statistics for the corresponding Oracle store. Before you execute the command, replace storexxxx with the ID of the corresponding Oracle store. After the command is executed, garbage collection information is printed every 3 seconds. Focus on the FGC and FGCT metrics, which respectively indicate the number of full garbage collections and the total time they have consumed since the process started. When the FGC value increases continuously or the FGCT value accounts for over 10% of the total running time of the process, you need to increase the JVM memory of the Oracle store.

jstat -gcutil `ps -ef | grep storexxxx | grep java | awk '{print $2}'` 3000

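The output looks roughly like the following. The numbers here are illustrative and the exact column set can vary by JDK version; read the FGC and FGCT columns.

  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0.00  96.72  25.34  41.80  95.12  92.33    152    3.412     2    0.581    3.993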

Adjust the JVM memory of Oracle Store

Adjust the JVM memory for a specified Oracle store

  1. Stop the store.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, choose OPS & Monitoring > Component. The Store tab automatically appears.

    3. On the Store tab, click Pause in the Actions column of the target store, which must be in the Running state.

      • For single-region deployment, the region information is displayed on the right of Store.

      • For multi-region deployment, you can filter stores by region and pause the stores that are not running in the target region.

    4. In the dialog box that appears, click OK.

  2. Go to the /home/ds/store/storeXXX/kafka/bin directory on the OMS server, and change the value of KAFKA_HEAP_OPTS in the connect-drcdeliver.sh file to adjust the JVM memory (see the sketch after these steps). For example, change the value to -Xms32g -Xmx32g -Xmn8g. Generally, the three values are in a 4:4:1 ratio. Set them to reasonable values based on the total memory of the server.

  3. On the OMS console, choose OPS & Monitoring > Component > Store, and start the store.
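
A minimal sketch of the edit in Step 2, assuming the default installation path; the exact form of the KAFKA_HEAP_OPTS line in connect-drcdeliver.sh may differ in your version, so verify it before editing.

# Locate the current heap settings for the target store (storeXXX is a placeholder for the store ID).
cd /home/ds/store/storeXXX/kafka/bin
grep KAFKA_HEAP_OPTS connect-drcdeliver.sh
# Then edit the line so that it reads, for example (Xms : Xmx : Xmn roughly 4:4:1):
#   export KAFKA_HEAP_OPTS="-Xms32g -Xmx32g -Xmn8g"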

Adjust the JVM memory for all Oracle stores

On the OMS server, go to the /home/ds/kafka/bin directory and modify the JVM memory in the same way as described in Step 2 of the preceding procedure. After the modification is completed, the new JVM configuration takes effect for all newly started Oracle stores.

Configure store parameters

  1. Log on to the OMS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the Migration Projects list, click the name of the target project to go to its details page.

  4. Click View Component Monitoring in the upper-right corner.

  5. In the View Component Monitoring dialog box, click Update in the Actions column of the target component.

  6. In the Update Configurations dialog box, hover the pointer over the parameter to be modified and click the edit icon that appears.

  7. Modify the value in the text box and then click the confirm icon.

    You can also hover the pointer over the blank area after root and click the add icon to add a key-value pair.

  8. After you modify the YAML configuration as required, click Update.

    After the update is completed, Oracle Store automatically restarts to make the parameter changes take effect.
