

OceanBase Migration Service

V4.2.3 Enterprise Edition


Upgrade OMS in multi-node deployment mode

Last Updated: 2024-08-16 06:00:53

What is on this page
  • Background
  • Upgrade OMS from V4.0.2 or later to V4.2.3
  • Upgrade OMS to V4.2.3 from V3.2.1 or a version later than V3.2.1 and earlier than V4.0.2
  • Prerequisites
  • Procedure

You can directly upgrade OceanBase Migration Service (OMS) V3.2.1 and later versions to OMS V4.2.3. This topic describes how to upgrade OMS in multi-node deployment mode in different scenarios.

Background

An upgrade to OMS V4.2.3 falls into one of the following two scenarios, depending on your current version:

  • The current OMS version is V3.2.1 or a version later than V3.2.1 and earlier than V4.0.2.

    Notice

    To upgrade OMS from a version earlier than V3.2.1, you must first upgrade it to V3.2.1.

  • The current version is V4.0.2 or later.

To upgrade OMS to V4.2.3 from V3.2.1, or from a version later than V3.2.1 and earlier than V4.0.2, you must perform two additional steps compared with upgrading from V4.0.2 or later:

  • Check the prerequisites below.

  • Execute the upgrade package in the .jar format during the upgrade.
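
The scenario depends only on the current version number. As an illustrative sketch (upgrade_path is a hypothetical helper name, not an OMS command; it assumes GNU sort with version-sort support), the decision can be expressed as:

```shell
# Classify the current OMS version into one of the documented upgrade paths.
# Versions below V3.2.1 must first be upgraded to V3.2.1.
upgrade_path() {
  v="$1"
  # sort -V orders version strings; head -n1 yields the lower of the two.
  if [ "$(printf '%s\n' "$v" 3.2.1 | sort -V | head -n1)" != "3.2.1" ]; then
    echo "upgrade to V3.2.1 first"
  elif [ "$(printf '%s\n' "$v" 4.0.2 | sort -V | head -n1)" != "4.0.2" ]; then
    echo "V3.2.1 path: prerequisites + .jar upgrade package"
  else
    echo "V4.0.2-or-later path"
  fi
}
```

For example, `upgrade_path 3.3.1` selects the path that requires the prerequisite checks and the .jar upgrade package.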

Upgrade OMS from V4.0.2 or later to V4.2.3

  1. If high availability (HA) is enabled, disable it first.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, choose System Management > System Parameters.

    3. On the System Parameters page, find ha.config.

    4. Click the edit icon in the Value column of the parameter.

    5. In the Modify Value dialog box, set enable to false to disable HA.

  2. Back up the databases.

    1. Log on to the two hosts where the existing OMS containers are deployed by using their respective IP addresses, and stop the containers.

      sudo docker stop ${CONTAINER_NAME}
      

      Note

      CONTAINER_NAME specifies the name of the container.

    2. Log on to the cluster management (CM) heartbeat database specified in the configuration file and back up data.

      # Log on to the CM heartbeat database specified in the configuration file.
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -Dcm_hb_422
      
      # Create an intermediate table.
      CREATE TABLE IF NOT EXISTS `heatbeat_sequence_bak` (
      `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'PK',
      `gmt_created` datetime NOT NULL,
      `gmt_modified` datetime NOT NULL,
      PRIMARY KEY (`id`)
      ) DEFAULT CHARSET=utf8 COMMENT='Heartbeat sequence table';
      
      # Back up data to the intermediate table.
      INSERT INTO heatbeat_sequence_bak SELECT `id`,`gmt_created`,`gmt_modified` FROM heatbeat_sequence ORDER BY `id` DESC LIMIT 1;
      
      # Rename the heatbeat_sequence table and the intermediate table.
      # The heatbeat_sequence table provides auto-increment IDs and reports the heartbeat.
      ALTER TABLE `heatbeat_sequence` RENAME TO `heatbeat_sequence_bak2`;
      ALTER TABLE `heatbeat_sequence_bak` RENAME TO `heatbeat_sequence`;
      
      # Delete the original table.
      DROP TABLE heatbeat_sequence_bak2;
      
    3. Run the following commands to back up the rm, cm, and cm_hb databases as SQL files, and make sure that the size of each file is not 0.

      If you have deployed databases in multiple regions, you must back up the cm_hb database in all regions. For example, if you have deployed databases in two regions, you must back up the following four databases: rm, cm, cm_hb1, and cm_hb2.

      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false rm_422 > /home/admin/rm_422.sql
      
      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_422 > /home/admin/cm_422.sql
      
      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_hb_422 > /home/admin/cm_hb_422.sql
      
      Parameters:
      • -h: The IP address of the host from which the data is exported.
      • -P: The port number used to connect to the database.
      • -u: The username used to connect to the database.
      • -p: The password used to connect to the database.
      • --triggers: Specifies whether to export triggers. Set it to false to exclude triggers from the backup files.
      • rm_422, cm_422, cm_hb_422: The names of the rm, cm, and cm_hb databases to back up. Each command follows the format mysqldump <options> <database name> > <SQL file storage path>. Replace the values based on your actual environment.
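
To confirm that none of the backup files has a size of 0, a quick sketch (check_dumps is a hypothetical helper name, not part of OMS):

```shell
# Print a status line for each dump file; a file passes only if it
# exists and is non-empty ([ -s ... ]).
check_dumps() {
  for f in "$@"; do
    if [ -s "$f" ]; then
      echo "OK: $f"
    else
      echo "EMPTY OR MISSING: $f"
    fi
  done
}

# Example with the paths from the backup step:
# check_dumps /home/admin/rm_422.sql /home/admin/cm_422.sql /home/admin/cm_hb_422.sql
```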
  3. Load the downloaded OMS installation package to the local image repository of the Docker container.

    docker load -i <OMS installation package>
    
  4. Start the new container of OMS V4.2.3.

    You can access the OMS console by using an HTTP or HTTPS URL. To access the OMS console securely over HTTPS, install a self-signed Secure Sockets Layer (SSL) certificate and mount it to the specified directory in the container. No certificate is required for HTTP access.

    Notice

    • Before you start the container of OMS V4.2.3, make sure that the three disk mounting paths of OMS are the same as those before the upgrade.
      You can run the sudo docker inspect ${CONTAINER_NAME} | grep -A5 'Binds' command to view the paths of disks mounted to the old OMS container.

    • The -e IS_UPGRADE=true parameter is available in OMS V3.3.1 and later. It exists only to support OMS upgrades and must be specified when you upgrade OMS.

    OMS_HOST_IP=xxx
    CONTAINER_NAME=oms_xxx
    IMAGE_TAG=feature_x.x.x
    
    # If you mount the SSL certificate in the OMS container, also add the
    # following two mounts to the docker run command:
    #   -v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
    #   -v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
    docker run -dit --net host \
    -v /data/config.yaml:/home/admin/conf/config.yaml \
    -v /data/oms/oms_logs:/home/admin/logs \
    -v /data/oms/oms_store:/home/ds/store \
    -v /data/oms/oms_run:/home/ds/run \
    -e OMS_HOST_IP=${OMS_HOST_IP} \
    -e IS_UPGRADE=true \
    --privileged=true \
    --pids-limit -1 \
    --ulimit nproc=65535:65535 \
    --name ${CONTAINER_NAME} \
    work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
    
    Parameters:
    • OMS_HOST_IP: The IP address of the host.
      Notice: The value of OMS_HOST_IP is different for each node.
    • CONTAINER_NAME: The name of the container, in the oms_xxx format. Specify xxx based on the actual OMS version. For example, if you use OMS V4.2.3, the value is oms_423.
    • IMAGE_TAG: The unique identifier of the loaded image. After you load the OMS installation package by using Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] of the loaded image.
    • /data/oms/oms_logs, /data/oms/oms_store, /data/oms/oms_run: You can replace these paths with the mount directories created on the server where OMS is deployed. They persistently store the logs generated during the operation of OMS and the files generated by the store and synchronization components.
      Notice: The mount directories must remain unchanged during subsequent redeployments or upgrades.
    • /home/admin/logs, /home/ds/store, /home/ds/run: Default directories in the container. They cannot be modified.
    • /data/oms/https_crt, /data/oms/https_key (optional): The mount paths of the SSL certificate in the OMS container. If you mount an SSL certificate, the NGINX service in the OMS container runs in HTTPS mode, and you can access the OMS console only through the HTTPS URL.
    • IS_UPGRADE: Must be set to true when you upgrade OMS.
    • privileged: Specifies whether to grant extended privileges to the container.
    • pids-limit: Limits the number of container processes. The value -1 indicates that the number is unlimited.
    • ulimit nproc: The maximum number of user processes.
  5. Go to the new container.

    docker exec -it ${CONTAINER_NAME} bash  
    
  6. Perform metadata initialization in the root directory.

    • If the cm_nodes settings are the same in all regions, you need to run the docker_init.sh command only on one of the nodes and perform the following operations on the other nodes:

      1. Run sed -i 's/autostart = false/autostart = true/g' /etc/supervisor/conf.d/oms_console.ini.

      2. Run the supervisorctl status command to check the component status.

      3. If the oms_console component is not running, run the supervisorctl start oms_console command.

    • If the cm_nodes settings are inconsistent across the regions, run the docker_init.sh command on each node.

    Note

    • After you run the docker_init.sh script, it automatically applies schema changes to the three OMS databases.

    • If metadata initialization is performed before the upgrade package in the .jar format has been executed on all nodes, the metadata of some regions is not updated. In this case, some services may fail to start.
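The per-node operations above, for nodes where docker_init.sh is not run, can be consolidated into a short shell sequence. This is a sketch only: it assumes that supervisorctl reports a RUNNING state for healthy components, which may vary by Supervisor version.

```shell
# On each node where docker_init.sh is NOT executed:
# 1. Enable autostart for the OMS console in the Supervisor config.
sed -i 's/autostart = false/autostart = true/g' /etc/supervisor/conf.d/oms_console.ini
# 2. Check the component status.
supervisorctl status
# 3. Start oms_console only if it is not already running.
supervisorctl status oms_console | grep -q RUNNING || supervisorctl start oms_console
```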

  7. After the docker_init.sh script is executed, verify that the server list is normal and all servers are in the Online state.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, choose OPS & Monitoring > Server.

    3. On the Servers page, check whether the server list is as expected and all servers are in the Online state.

  8. After you upgrade OMS on two nodes, enable HA on the System Parameters page, and configure the parameters.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, choose System Management > System Parameters.

    3. On the System Parameters page, find ha.config.

    4. Click the edit icon in the Value column of the parameter.

    5. In the Modify Value dialog box, set enable to true to enable HA, and record the time T2.

    6. We recommend that you set the perceiveStoreClientCheckpoint parameter to true. If you do, you do not need to record T1 and T2.

      When the perceiveStoreClientCheckpoint parameter is set to true, you can keep the default value of the refetchStoreIntervalMin parameter, which is 30 minutes. With HA enabled, the system starts the Store component from the earliest request time of the downstream components minus the value of the refetchStoreIntervalMin parameter. For example, if the earliest request time of the downstream Connector or JDBC-Connector component is 12:00:00 and the refetchStoreIntervalMin parameter is set to 30 minutes, the system starts the Store component at 11:30:00.

      If you set the perceiveStoreClientCheckpoint parameter to false, modify the value of the refetchStoreIntervalMin parameter based on your business needs. This parameter specifies the interval, in minutes, at which data is pulled from the Store component. The value must be greater than T2 minus T1.
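The subtraction in the example above can be sketched with GNU date; this is illustrative only (it assumes GNU coreutils, and the times are the example values from the text):

```shell
# Store start time = earliest downstream request time - refetchStoreIntervalMin.
earliest="12:00:00"   # earliest request time of the downstream component (example)
interval_min=30       # refetchStoreIntervalMin, in minutes (default)
start=$(date -u -d "1970-01-01 ${earliest} UTC - ${interval_min} minutes" +%H:%M:%S)
echo "$start"   # prints 11:30:00
```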

  9. (Optional) To roll back, perform the following steps:

    1. Disable the HA feature based on Step 1.

    2. Stop the new container and record the time T3.

      sudo docker stop ${CONTAINER_NAME}
      
    3. Connect to the MetaDB and run the following commands:

      drop database rm_422;
      drop database cm_422;
      drop database cm_hb_422;
      
      create database rm_422;
      create database cm_422;
      create database cm_hb_422;
      
    4. Restore the original databases based on the SQL files created in Step 2.

      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/rm_422.sql" -Drm_422
      
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_422.sql" -Dcm_422
      
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_hb_422.sql" -Dcm_hb_422
      
    5. Restart the container of OMS V4.2.3.

      sudo docker restart ${CONTAINER_NAME}
      
    6. On the System Parameters page, enable HA.

      Note

      • We recommend that you set the perceiveStoreClientCheckpoint parameter to true.

      • The HA feature automatically starts disaster recovery and the Incr-Sync component. However, you must manually recover the Full-Import component.

  10. After the upgrade is complete, clear the browser cache before you log on to OMS.

Upgrade OMS to V4.2.3 from V3.2.1 or a version later than V3.2.1 and earlier than V4.0.2

Prerequisites

  • Before the upgrade, check for data migration and synchronization projects with duplicate names. If data migration and synchronization projects with duplicate names exist, rename the projects to ensure that all project names are unique.

    Run the following command to check whether projects with duplicate names exist:

    • Data migration project

      SELECT project_name, count(*) AS count, group_concat(id) AS ids FROM oms_project WHERE project_status != 'DELETED' GROUP BY project_name HAVING count(*) > 1;
      
    • Data synchronization project

      SELECT project_name, count(*) AS count, group_concat(id) AS ids FROM oms_sync_project WHERE project_status != 'DELETED' GROUP BY project_name HAVING count(*) > 1;
      

    If projects with duplicate names exist, rename the projects in sequence. The syntax is as follows:

    • Data migration project

      UPDATE oms_project SET project_name='<New name of the data migration project>' WHERE id=<ID of the data migration project>;
      
    • Data synchronization project

      UPDATE oms_sync_project SET project_name='<New name of the data synchronization project>' WHERE id=<ID of the data synchronization project>;
      
  • If you use an OceanBase Database data source as both the destination of one data synchronization project and the source of another project, and you have updated the blackRegionNo parameter of JDBCWriter, perform the following steps:

    1. In the OMS container, run the following command to obtain the value of cm_location:

      cat /home/admin/conf/config.yaml  | grep 'cm_location'
      
    2. Log on to the drc_cm database of OMS and run the following command:

      SELECT * FROM config_job WHERE `key`='sourceFile.blackRegionNo' AND `value` != xxx;
      

      In this query, replace xxx with the value of cm_location obtained in the previous step. If the query result is not empty and a data source is still used as both the destination of one data synchronization project and the source of another project, contact OMS Technical Support. If the query result is empty, proceed with the upgrade operations.

Procedure

The following procedure takes the upgrade of OMS from V3.4.0 to V4.2.3 as an example.

  1. If HA is enabled, disable it first.

  2. Back up the databases.

    1. Log on to the two hosts where the container of OMS V3.4.0 is deployed by using their respective IP addresses, and stop the container.

      sudo docker stop ${CONTAINER_NAME}
      

      Note

      CONTAINER_NAME specifies the name of the container.

    2. Log on to the CM heartbeat database specified in the configuration file and back up data.

      # Log on to the CM heartbeat database specified in the configuration file.
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -Dcm_hb_340
      
      # Create an intermediate table.
      CREATE TABLE IF NOT EXISTS `heatbeat_sequence_bak` (
      `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'PK',
      `gmt_created` datetime NOT NULL,
      `gmt_modified` datetime NOT NULL,
      PRIMARY KEY (`id`)
      ) DEFAULT CHARSET=utf8 COMMENT='Heartbeat sequence table';
      
      # Back up data to the intermediate table.
      INSERT INTO heatbeat_sequence_bak SELECT `id`,`gmt_created`,`gmt_modified` FROM heatbeat_sequence ORDER BY `id` DESC LIMIT 1;
      
      # Rename the heatbeat_sequence table and the intermediate table.
      # The heatbeat_sequence table provides auto-increment IDs and reports the heartbeat.
      ALTER TABLE `heatbeat_sequence` RENAME TO `heatbeat_sequence_bak2`;
      ALTER TABLE `heatbeat_sequence_bak` RENAME TO `heatbeat_sequence`;
      
      # Delete the original table.
      DROP TABLE heatbeat_sequence_bak2;
      
    3. Run the following commands to back up the rm, cm, and cm_hb databases as SQL files and make sure that the sizes of the files are not 0.

      If you have deployed databases in multiple regions, you must back up the cm_hb database in all regions. For example, if you have deployed databases in two regions, you must back up the following four databases: rm, cm, cm_hb1, and cm_hb2.

      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false rm_340 > /home/admin/rm_340.sql
      
      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_340 > /home/admin/cm_340.sql
      
      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_hb_340 > /home/admin/cm_hb_340.sql
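The step above requires that none of the dump files have a size of 0. A small shell sketch to check this (the helper name check_backups and the file list are illustrative; the paths mirror the mysqldump commands above):

```shell
# Fail if any backup SQL file is missing or has a size of 0 bytes.
check_backups() {
  for f in "$@"; do
    if [ ! -s "$f" ]; then
      echo "ERROR: $f is missing or empty" >&2
      return 1
    fi
  done
  echo "All backup files are non-empty."
}

# Usage, matching the dumps above:
# check_backups /home/admin/rm_340.sql /home/admin/cm_340.sql /home/admin/cm_hb_340.sql
```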
      
  3. Load the downloaded OMS installation package to the local image repository of the Docker container.

    docker load -i <OMS installation package>
    
  4. Start the new container of OMS V4.2.3.

    You can access the OMS console by using an HTTP or HTTPS URL. To securely access the OMS console, install a self-signed SSL certificate and mount it to the specified directory in the container. The certificate is not required for HTTP access.

    Notice

    • Before you start the container of OMS V4.2.3, make sure that the three disk mounting paths of OMS are the same as those before the upgrade.
      You can run the sudo docker inspect ${CONTAINER_NAME} | grep -A5 'Binds' command to view the paths of disks mounted to the old OMS container.

    • The -e IS_UPGRADE=true parameter is available in OMS V3.3.1 and later. It is provided only to support OMS upgrades and must be specified when you upgrade OMS.

    OMS_HOST_IP=xxx
    CONTAINER_NAME=oms_xxx
    IMAGE_TAG=feature_x.x.x
    
    docker run -dit --net host \
    -v /data/config.yaml:/home/admin/conf/config.yaml \
    -v /data/oms/oms_logs:/home/admin/logs \
    -v /data/oms/oms_store:/home/ds/store \
    -v /data/oms/oms_run:/home/ds/run \
    -v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
    -v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
    -e OMS_HOST_IP=${OMS_HOST_IP} \
    -e IS_UPGRADE=true \
    --privileged=true \
    --pids-limit -1 \
    --ulimit nproc=65535:65535 \
    --name ${CONTAINER_NAME} \
    work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
    
  5. Go to the new container.

    docker exec -it ${CONTAINER_NAME} bash  
    
  6. Run the following command to make the OMS console enter the STOPPED state:

    supervisorctl stop oms_console
    
  7. After the CM/Supervisor component is started, run the following command to execute the upgrade package in the .jar format.

    Notice

    Replace the parameter values based on the actual situation.

    /opt/alibaba/java/bin/java -jar correction-1.0-SNAPSHOT-jar-with-dependencies.jar -mupgrade -y/home/admin/conf/config.yaml -ltrue
    
    Parameter descriptions:

    • -m: the running mode. The valid value is upgrade.
    • -y: the absolute path of the OMS configuration file.
    • -l: specifies whether this upgrade node is the last one. In single-region scenarios, set this parameter to true. In multi-region scenarios, set it to false for all regions except the last one, and to true for the last region only.

    Note: In multi-region, multi-node scenarios, you need to execute the upgrade package in the .jar format only on the first node in each region. When you upgrade the last region, set the -l parameter to true.
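For example, in a hypothetical two-region deployment, the jar would be run once per region, on the first node of each (the region count and ordering are illustrative; the command and paths are copied from the step above):

```shell
# Region 1, first node -- not the last region, so -l is false:
/opt/alibaba/java/bin/java -jar correction-1.0-SNAPSHOT-jar-with-dependencies.jar \
  -mupgrade -y/home/admin/conf/config.yaml -lfalse

# Region 2, first node -- the last region, so -l is true:
/opt/alibaba/java/bin/java -jar correction-1.0-SNAPSHOT-jar-with-dependencies.jar \
  -mupgrade -y/home/admin/conf/config.yaml -ltrue
```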
  8. Perform metadata initialization in the root directory.

    • If the cm_nodes settings are the same in all regions, you need to run the docker_init.sh command only on one of the nodes and perform the following operations on the other nodes:

      1. Run sed -i 's/autostart = false/autostart = true/g' /etc/supervisor/conf.d/oms_console.ini.

      2. Run the supervisorctl status command to check the component status.

      3. If the oms_console component is not running, run the supervisorctl start oms_console command.

    • If the cm_nodes settings are inconsistent across the regions, run the docker_init.sh command on each node.

    Note

    • After you run the docker_init.sh script, it automatically applies schema changes to the three OMS databases.

    • If metadata initialization is performed before the upgrade package in the .jar format has been executed on all nodes, the metadata of some regions is not updated. In this case, some services may fail to start.

  9. After the docker_init.sh script is executed, verify that the server list is normal and all servers are in the Online state.

  10. After you upgrade OMS on two nodes, enable HA on the System Parameters page, and configure the parameters.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, choose System Management > System Parameters.

    3. On the System Parameters page, find ha.config.

    4. Click the edit icon in the Value column of the parameter.

    5. In the Modify Value dialog box, set enable to true to enable HA, and record the time T2.

    6. We recommend that you set the perceiveStoreClientCheckpoint parameter to true. If you do, you do not need to record T1 and T2.

      When the perceiveStoreClientCheckpoint parameter is set to true, you can keep the default value of the refetchStoreIntervalMin parameter, which is 30 minutes. With HA enabled, the system starts the Store component from the earliest request time of the downstream components minus the value of the refetchStoreIntervalMin parameter. For example, if the earliest request time of the downstream Connector or JDBC-Connector component is 12:00:00 and the refetchStoreIntervalMin parameter is set to 30 minutes, the system starts the Store component at 11:30:00.

      If you set the perceiveStoreClientCheckpoint parameter to false, modify the value of the refetchStoreIntervalMin parameter based on your business needs. This parameter specifies the interval, in minutes, at which data is pulled from the Store component. The value must be greater than T2 minus T1.

  11. (Optional) To roll back, perform the following steps:

    1. Disable the HA feature based on Step 1.

    2. Stop the new container and record the time T3.

      sudo docker stop ${CONTAINER_NAME}
      
    3. Connect to the MetaDB and run the following commands:

      drop database rm_340;
      drop database cm_340;
      drop database cm_hb_340;
      
      create database rm_340;
      create database cm_340;
      create database cm_hb_340;
      
    4. Restore the original databases based on the SQL files created in Step 2.

      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/rm_340.sql" -Drm_340
      
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_340.sql" -Dcm_340
      
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_hb_340.sql" -Dcm_hb_340
      
    5. Restart the container of OMS V3.4.0.

      sudo docker restart ${CONTAINER_NAME}
      
    6. On the System Parameters page, enable HA.

      Note

      • We recommend that you set the perceiveStoreClientCheckpoint parameter to true.

      • The HA feature automatically starts disaster recovery and the Incr-Sync component. However, you must manually recover the Full-Import component.

  12. After the upgrade is complete, clear the browser cache before you log on to OMS.
