OceanBase logo

OceanBase

A unified distributed database ready for your transactional, analytical, and AI workloads.

DEPLOY YOUR WAY

OceanBase Cloud

The best way to deploy and scale OceanBase

OceanBase Enterprise

Run and manage OceanBase on your infra

TRY OPEN SOURCE

OceanBase Community Edition

The free, open-source distributed database

OceanBase seekdb

Open source AI native search database

Customer Stories

Real-world success stories from enterprises across diverse industries.

View All
BY USE CASES

Mission-Critical Transactions

Global & Multicloud Application

Elastic Scaling for Peak Traffic

Real-time Analytics

Active Geo-redundancy

Database Consolidation

Resources

Comprehensive knowledge hub for OceanBase.

Blog

Live Demos

Training & Certification

Documentation

Official technical guides, tutorials, API references, and manuals for all OceanBase products.

View All
PRODUCTS

OceanBase Cloud

OceanBase Database

Tools

Connectors and Middleware

QUICK START

OceanBase Cloud

OceanBase Database

BEST PRACTICES

Practical guides for utilizing OceanBase more effectively and conveniently

Company

Learn more about OceanBase – our company, partnerships, and trust and security initiatives.

About OceanBase

Partner

Trust Center

Contact Us



OceanBase Migration Service

V3.4.0 Enterprise Edition

  • OMS Documentation
  • What's new
  • OMS Introduction
    • What is OMS?
    • Terms
    • Architecture
      • Overview
      • Hierarchical functional system
      • Basic components
    • Limits
  • Quick Start
    • Data migration process
    • Data synchronization process
  • Deployment Guide
    • Deployment type
    • System and network requirements
    • Memory and disk requirements
    • Prepare the environment
    • Deploy OMS on a single node
    • Deploy OMS on multiple nodes in a single region
    • Deploy OMS on multiple nodes in multiple regions
    • Scale-out and deployment
    • Check the deployment
    • Deploy a time-series database (Optional)
  • OMS console
    • Log on to the OMS console
    • Overview
    • User center
      • Configure user information
      • Change your logon password
      • Log off
  • Data migration
    • Data migration overview
    • Create a project to migrate data from a MySQL database to a MySQL tenant of OceanBase Database
    • Create a project to migrate data from a MySQL tenant of OceanBase Database to a MySQL database
    • Create a project to migrate data from an Oracle database to a MySQL tenant of OceanBase Database
    • Create a project to migrate data from an Oracle tenant of OceanBase Database to an Oracle database
    • Create a project to migrate data from an Oracle database to an Oracle tenant of OceanBase Database
    • Create a project to migrate data from a DB2 LUW database to an Oracle tenant of OceanBase Database
    • Create a project to migrate data from an Oracle tenant of OceanBase Database to a DB2 LUW database
    • Create a project to migrate data from a DB2 LUW database to an OceanBase database in MySQL tenant mode
    • Create a project to migrate data from a MySQL tenant of OceanBase Database to a DB2 LUW database
    • Migrate data within OceanBase Database
    • Create an active-active disaster recovery project in OceanBase Database
    • Create a project to migrate data from a TiDB database to an OceanBase database in MySQL tenant mode
    • Create a project to migrate data from a PostgreSQL database to a MySQL tenant of OceanBase Database
    • Manage data migration projects
      • View details of a data migration project
      • View and modify migration objects
      • Use tags to manage data migration projects
      • Download and import the settings of migration objects
      • Start, pause, and resume a data migration project
      • Release and delete a data migration project
    • Features
      • DML filtering
      • Synchronize DDL operations
      • Configure matching rules for migration objects
      • Wildcard rules
      • Rename a database table
      • Use SQL conditions to filter data
      • Create and update a heartbeat table
      • Schema migration mechanisms
      • Schema migration operations
    • Supported DDL operations in incremental migration and limits
      • Supported DDL operations in incremental migration from a MySQL database to a MySQL tenant of OceanBase Database and limits
      • Supported DDL operations in incremental migration from a MySQL tenant of OceanBase Database to a MySQL database and limits
      • Supported DDL operations in incremental migration from an Oracle database to an Oracle tenant of OceanBase Database
      • Supported DDL operations in incremental migration from an Oracle tenant of OceanBase Database to an Oracle database
      • Dynamic DDL operations during data migration between an Oracle tenant of OceanBase Database and a DB2 LUW database
      • Supported DDL operations in incremental migration from a DB2 LUW database to a MySQL tenant of OceanBase Database and limits
      • Supported DDL operations in incremental migration from a MySQL tenant of OceanBase Database to a DB2 LUW database and limits
      • Supported DDL operations in incremental migration between MySQL tenants of OceanBase Database
      • Supported DDL operations in incremental migration between Oracle tenants of OceanBase Database
  • Data synchronization
    • Data synchronization overview
    • Create a project to synchronize data from an OceanBase database to a Kafka instance
    • Create a project to synchronize data from an OceanBase database to a RocketMQ instance
    • Create a project to synchronize data from an OceanBase database to a DataHub instance
    • Create a project to synchronize data from a DBP logical table to a physical table in the MySQL tenant of OceanBase Database
    • Create a project to synchronize data from a DBP logical table to a DataHub instance
    • Create a project to synchronize data from an IDB logical table to the MySQL tenant of OceanBase Database
    • Create a project to synchronize data from an IDB logical table to a DataHub instance
    • Create a project to synchronize data from a MySQL database to a DataHub instance
    • Create a project to synchronize data from an Oracle database to a DataHub instance
    • Manage data synchronization projects
      • View details of a data synchronization project
      • View and modify synchronization objects
      • Use tags to manage data synchronization projects
      • Download and import the settings of synchronization objects
      • Start, pause, and resume a data synchronization project
      • Release and delete a data synchronization project
    • Features
      • DML filtering
      • Synchronize DDL operations
      • Rename databases and tables
      • Rename a topic
      • Use SQL conditions to filter data
      • Column filtering
      • Data formats
  • Create and manage data sources
    • Create data sources
      • Create an OceanBase data source
        • Create a physical OceanBase data source
        • Create a DBP data source
        • Create an IDB data source
      • Create a MySQL data source
      • Create an Oracle data source
      • Create a TiDB data source
      • Create a Kafka data source
      • Create a RocketMQ data source
      • Create a DataHub data source
      • Create a DB2 LUW data source
      • Create a PostgreSQL data source
    • Manage data sources
      • View data source information
      • Copy a data source
      • Edit a data source
      • Delete a data source
    • Create a database user
    • User privileges
    • Enable binlogs for the MySQL database
    • Minimum privileges required when an Oracle database serves as the source
  • OPS & Monitoring
    • O&M overview
    • Go to the overview page
    • Server
      • View server information
      • Update quotas
      • View server logs
    • Components
      • Store
        • Create a store
        • View details of a store
        • Update the configurations of a store
        • Start and pause a store
        • Destroy a store
      • Connector
        • View details of a connector
        • Start and pause a connector
        • Migrate a connector
        • Update the configurations of a connector
        • Batch O&M
        • Delete a connector
      • JDBCWriter
        • View details of a JDBCWriter
        • Start and pause a JDBCWriter
        • Migrate a JDBCWriter
        • Update the configurations of a JDBCWriter
        • Batch O&M
        • Delete a JDBCWriter
      • Checker
        • View the information about a checker
        • Start and pause a checker
        • Rerun and reverify a checker
        • Update the configurations of a checker
        • Delete a checker
    • O&M tickets
      • View details of an O&M ticket
      • Skip a ticket or sub-ticket
      • Retry a ticket or sub-ticket
  • System management
    • User management
    • Alert center
      • View project alerts
      • View system alerts
      • Manage alert settings
    • Associate with OCP
    • System parameters
      • Modify system parameters
      • Modify HA configurations
    • Operation audit
  • O&M Guide
    • Manage OMS services
    • OMS logs
    • O&M operations for the Store component
    • Store parameters
      • Parameters of an Oracle store
      • Parameters of a DB2 store
      • Parameters of a MySQL store
      • Parameters of an OceanBase store
    • O&M operations for the Supervisor component
    • Parameters of the Supervisor component
    • O&M operations for the Connector component
    • Connector parameters
      • Parameters of a destination RocketMQ instance
      • Parameters of a DataflowSink instance
      • Parameters in the destination Kafka instance
      • Parameters of the source database in full migration
      • Parameters of the source database in incremental data synchronization
      • Parameters of a destination DataHub instance
      • Parameters of the source Sybase database
      • Parameters for intermediate-layer synchronization
    • Checker parameters
    • JDBCWriter parameters
    • Parameters of the CM component
  • Reference Guide
    • API Reference
      • Obtain the status of a migration project
      • Obtain the status of a synchronization project
    • OMS error codes
    • Alert Reference
      • oms_host_down
      • oms_host_down_migrate_resource
      • oms_host_threshold
      • oms_migration_failed
      • oms_migration_delay
      • oms_sync_failed
      • oms_sync_status_inconsistent
      • oms_sync_delay
  • Upgrade Guide
    • Overview
    • Upgrade OMS in single-node deployment mode
    • Upgrade OMS in multi-node deployment mode
    • FAQ
  • FAQ
    • General O&M
      • How do I modify the resource quotas of an OMS container?
      • How do I troubleshoot the OMS server down issue?
    • Project diagnostics
      • How do I troubleshoot common problems with Oracle Store?
      • How do I perform performance tuning for Oracle Store?
      • What do I do when Oracle Store reports an error at the isUpdatePK stack?
      • What do I do when a store does not have data of the timestamp requested by the downstream?
      • What do I do when OceanBase Store fails to access an OceanBase cluster through RPC?
      • How do I use LogMiner to pull data from an Oracle database?
    • OPS & monitoring
      • What are the alert rules?
    • Data synchronization
      • FAQ about synchronization to a message queue
        • What are the strategies for ensuring the message order in incremental data synchronization to Kafka
    • Data migration
      • User privileges
        • What privileges do I need to grant to a user during data migration to or from an Oracle database?
      • Full migration
        • FAQ about full migration
          • How do I query the ID of a checker?
          • How do I query log files of the Checker component of OMS?
          • How do I query the verification result files of the Checker component of OMS?
          • What do I do if the destination table does not exist?
      • Incremental synchronization
        • How do I skip DDL statements?
        • How do I update the configurations of a JDBCWriter?
        • How do I start or stop a JDBCWriter?
        • How do I update whitelists and blacklists?
        • What are the application scope and limits of ETL?
    • Installation and deployment
      • How do I upgrade Store?
  • Release Note
    • V3.4
      • OMS V3.4.0
    • V3.3
      • OMS V3.3.1
      • OMS V3.3.0
    • V3.2
      • OMS V3.2.2
      • OMS V3.2.1
    • V3.1
      • OMS V3.1.0
    • V2.1
      • OMS V2.1.2
      • OMS V2.1.0

© OceanBase 2026. All rights reserved


Deploy OMS on multiple nodes in multiple regions

Last Updated: 2026-04-14 07:36:28
What is on this page
Background
Prerequisites
Deploy OMS without the configuration file
Deploy OMS with the configuration file available
Template and example of the configuration file
Configuration file template
Example


This topic describes how to deploy OceanBase Migration Service (OMS) on multiple nodes in multiple regions by using deployment tools.

Background

As more users adopt OMS for data migration, it must support increasingly diverse scenarios. In addition to single-region data migration and data synchronization, OMS supports cross-region data synchronization, data migration between IDCs in different regions, and active-active data synchronization.

You can deploy OMS on one or more nodes in each region. Deploying OMS on multiple nodes in a region builds a highly available environment, and OMS then starts components on the appropriate nodes based on the tasks.

For example, if you want to synchronize data from Region Hangzhou to Region Heilongjiang, OMS starts stores on a node in Region Hangzhou to collect the incremental logs and starts writers on a node in Region Heilongjiang to synchronize the data.

Notes:

  • You can deploy OMS on a single node first and scale out to multiple nodes. For more information, see Scale out OMS.

  • To deploy OMS on multiple nodes, you must apply for a virtual IP address (VIP) and use it as the mount point for the OMS console. In addition, you must configure mapping rules for ports 8088 and 8089 in the VIP network policy.

    You can use the VIP to access the OMS console even if an OMS node fails.
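As a quick sanity check once the VIP and its port mappings are in place, you could probe both console ports through the VIP. The address below is a hypothetical placeholder; this sketch only prints the probe commands rather than running them:

```shell
# Placeholder VIP; replace with the VIP you applied for.
VIP=10.0.0.100

# Print (rather than run) an HTTP probe for each mapped console port.
for port in 8088 8089; do
  echo "curl -sS -o /dev/null -w '%{http_code}' http://${VIP}:${port}/"
done
```

Running the printed curl commands after deployment should return an HTTP status code from each port even when a single OMS node is down, since the VIP fronts all nodes.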

Prerequisites

  • The installation environment meets the system requirements. For more information, see System requirements.

  • The MetaDB cluster is prepared as the OMS MetaDB.

  • The OMS installation package is obtained. Generally, the package is a tar.gz file whose name starts with oms.

  • Make sure that the server on which OMS is to be deployed can connect to all other servers.

  • The downloaded OMS image file has been loaded to the local image repository of the Docker container on each server node.

    docker load -i <Storage path of the OMS image>

    In this topic, the loaded image file is referred to as OMS_IMAGE. Replace it with the actual name of your image file.

  • Make sure that all servers involved in the multi-node deployment can connect to each other and that you can obtain root permissions on a node by using its username and password.
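Because the image must be loaded on every server before deployment, it can help to generate the load command for all nodes in one place. The node IPs and tarball path below are assumptions for illustration; the sketch prints the commands instead of executing them:

```shell
# Hypothetical node list and image path; adjust to your environment.
NODES="192.168.0.11 192.168.0.12 192.168.0.13"
IMAGE_TAR=/root/oms-3.4.0.tar.gz

# Print one docker-load command per node; a real run would execute each
# printed command over ssh on the corresponding server.
for node in $NODES; do
  echo "ssh root@${node} \"docker load -i ${IMAGE_TAR}\""
done
```

Reviewing the printed list before running it is a cheap way to catch a missed node, which would otherwise surface only when OMS tries to start a component there.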

Deploy OMS without the configuration file

  1. Log on to the server where OMS is to be deployed.

  2. Optional. Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script from the loaded image:

    sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy.sh -o <deploy_tool_workdir> -i <IP address of the server> -d <OMS_IMAGE>
    
  5. Follow the prompts to complete the deployment. After you set each parameter, press Enter to move on to the next parameter.

    1. Select the deployment mode.

      Select Multiple Regions.

    2. Select the task.

      Select No Configuration File. Deploy OMS Starting from Configuration File Generation.

    3. Specify the following MetaDB information:

      1. IP address of the MetaDB

      2. Port number of the MetaDB

      3. Username used to connect to the MetaDB

      4. Password used to connect to the MetaDB

      5. Prefix for names of databases in the MetaDB

        For example, if you set the prefix to oms, the final database names are oms_rm, oms_cm, and oms_cm_hb.

    4. Confirm your settings.

      If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    5. If the system displays "The specified database names already exist in the metadatabase. Are you sure that you want to continue?", it indicates that the database names you specified already exist in the MetaDB. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to re-specify the settings.

    6. Perform the following operations to configure the OMS cluster settings:

      1. Enter the region ID, for example: cn-hangzhou.

      2. Specify the URL of the Cluster Manager (CM) service, which is the virtual IP address (VIP) or domain name to which all CM servers in the region are mounted. This corresponds to the cm-url parameter.

        You can separately specify the IP address and port number in the URL, or use a colon (:) to join the IP address and port number in the <IP address>:<port number> format.

        Note:

        The http:// prefix in the URL is optional.

      3. Enter the IP addresses of all servers in the region. Separate them with commas (,).

      4. Specify whether to preferentially access the current region.

        In the multi-region deployment mode, you must set this parameter to true for at least one region. If yes, enter y and press Enter. If not, enter n and press Enter.

      5. Confirm the displayed settings of the OMS cluster.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    7. Confirm whether to enable OMS historical data monitoring.

      • If you have deployed a time-series database, enter y and press Enter to go to the next step to configure the time-series database and enable the monitoring of OMS historical data.

      • If you did not deploy a time-series database, enter n and press Enter to go to the step of "determining whether to enable the audit log feature and setting Simple Log Service (SLS) parameters". In this case, OMS does not monitor the historical data after deployment.

    8. Configure the time-series database.

      Perform the following operations:

      1. Confirm whether you have deployed a time-series database.

        If yes, enter y and press Enter. If not, enter n and press Enter to go to the step of "determining whether to enable the audit log feature and setting SLS parameters".

      2. Set the type of the time-series database to INFLUXDB.

        Notice:

        At present, only INFLUXDB is supported.

      3. Enter the URL of the time-series database.

        You can separately enter the IP address and port number in the URL, or use a colon (:) to join the IP address and port number in the <IP address>:<port number> format.

      4. Enter the username used to connect to the time-series database.

      5. Enter the password used to connect to the time-series database.

      6. Confirm whether the displayed settings are correct.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    9. Determine whether to enable the audit log feature and set SLS parameters.

      To enable the audit log feature, enter y and press Enter to go to the next step to specify the SLS parameters.

      If you do not want to enable the audit log feature, enter n and press Enter to go directly to the step of "starting the deployment on each node one after another". In this case, OMS does not audit logs after deployment.

    10. Specify the following SLS parameters:

      1. URL of SLS

      2. access-key used to access SLS

      3. secret-key used to access SLS

      4. user-site-topic of SLS

      5. ops-site-topic of SLS

    11. Confirm whether the displayed settings are correct.

      If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    12. Start the deployment on each node one after another.

    13. Perform the following operations to specify additional information that is required for the deployment on a node:

      1. Enter the username used to connect to the server.

      2. Enter the password used to connect to the server.

      3. Specify the path of the config.yaml file, which must end with a slash (/).

      4. Specify the root directory to which the OMS container is mounted in the host.

        Use a directory with a large capacity.

      5. Confirm whether the OMS image file is named OMS_IMAGE.

        If yes, enter y and press Enter. If not, enter n and press Enter.

      6. Confirm whether to install a Secure Sockets Layer (SSL) certificate for the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. If not, enter n and press Enter.

    14. Go back to the step of "starting the deployment on each node one after another", until the deployment is completed on all nodes in the current region.

    15. Confirm whether to deploy OMS in a new region.

      After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"

      If yes, enter y and press Enter to proceed. If not, enter n and press Enter to end the deployment process.

      1. A message is displayed, showing the names and IDs of existing regions, to help you avoid using an existing name or ID for a new region.

      2. Repeat the steps from "configuring the OMS cluster settings" through "going back to the step of 'starting the deployment on each node one after another'". Then, end the process.
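The CM service URL prompt in the cluster-settings step accepts either <IP address>:<port> or the same value with an optional http:// prefix. A minimal sketch of normalizing that input (normalize_cm_url is a hypothetical helper, not part of the OMS tooling, and it assumes plain IPv4 addresses):

```shell
# Hypothetical helper: accept "<ip>:<port>" with or without an "http://"
# prefix and emit a canonical URL.
normalize_cm_url() {
  url=${1#http://}   # drop the optional scheme
  ip=${url%%:*}      # text before the first colon
  port=${url##*:}    # text after the last colon
  echo "http://${ip}:${port}"
}

normalize_cm_url 10.0.0.100:8088
normalize_cm_url http://10.0.0.100:8088
```

Both calls print http://10.0.0.100:8088, matching the two input forms the prompt accepts.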

Deploy OMS with the configuration file available

  1. Log on to the server where OMS is to be deployed.

  2. Optional. Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script from the loaded image:

    sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy.sh -o <deploy_tool_workdir> -c <directory of the config.yaml file> -i <IP address of the server> -d <OMS_IMAGE>
    

    Note:

    For more information about settings of the config.yaml file, see the "Template and example of the configuration file" section.

  5. Follow the prompts to complete the deployment. After you set each parameter, press Enter to move on to the next parameter.

    1. Select the deployment mode.

      Select Multiple Regions.

    2. Select the task.

      Select Use Configuration File Uploaded with Script Option [-c].

    3. If the system displays "The specified database names already exist in the metadatabase. Are you sure that you want to continue?", it indicates that the database names you specified already exist in the MetaDB. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to re-specify the settings.

    4. If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      If the configuration file fails the check, modify the configuration information as prompted.

    5. Start the deployment on each node one after another.

    6. Perform the following operations to specify additional information that is required for the deployment on a node:

      1. Enter the username used to connect to the server.

      2. Enter the password used to connect to the server.

      3. Specify the path of the config.yaml file, which must end with a slash (/).

      4. Specify the root directory to which the OMS container is mounted in the host.

        Use a directory with a large capacity.

      5. Confirm whether the OMS image file can be named OMS_IMAGE.

        If yes, enter y and press Enter. Otherwise, enter n and press Enter to modify it.

      6. Confirm whether to install an SSL certificate for the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. If not, enter n and press Enter.

    7. Repeat from the step of "starting the deployment on each node one after another" until the deployment is completed on all nodes in the current region.

    8. Confirm whether to deploy OMS in a new region.

      After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"

      If yes, enter y and press Enter to proceed. If not, enter n and press Enter to end the deployment process.

    9. A message is displayed, showing the names and IDs of existing regions, to help you avoid using an existing name or ID for a new region.

    10. Perform the following operations to configure the OMS cluster settings:

      1. Enter the region ID, for example, cn-hangzhou.

      2. Confirm whether the displayed settings are correct.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    11. Repeat the steps from "starting the deployment on each node one after another" through "confirming whether to deploy OMS in a new region", until you have deployed OMS in all required regions. Then, end the process.
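Before running the deployment tool (step 4 above), it can help to sanity-check the values you plan to pass. The following sketch uses hypothetical paths, image tag, and IP address; it only prints the final command rather than running it.

```shell
#!/bin/sh
# All values below are hypothetical -- replace them with your environment's.
OMS_IMAGE="oms:3.2.1"                 # tag of the loaded OMS image (assumed)
WORKDIR="/home/admin/oms_deploy"      # working directory for the deploy tool
CONFIG_DIR="/home/admin/oms_config/"  # directory of config.yaml; must end with "/"
HOST_IP="10.0.0.1"                    # IP address of this server

# The config directory must end with a slash, as required in step 6.3 above.
case "$CONFIG_DIR" in
  */) ;;
  *)  echo "ERROR: config directory must end with a slash (/)" >&2; exit 1 ;;
esac

# Print the command to run (step 4 of the procedure above):
CMD="sh docker_remote_deploy.sh -o $WORKDIR -c $CONFIG_DIR -i $HOST_IP -d $OMS_IMAGE"
echo "$CMD"
```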

Template and example of the configuration file

Configuration file template

Notice:

  • When multiple regions exist, you must set the cm_is_default parameter to true for only one region, and set it to false for all other regions. In addition, you must sequentially run commands in each region.

  • To deploy multiple nodes in the Hangzhou region, specify the IP addresses of all nodes for the cm_nodes parameter.

  • You must replace the sample values of required parameters based on your actual deployment environment. Optional parameters are commented in this example. You can modify the optional parameters or uncomment the parameters as needed.

  • In the config.yaml file, you must specify the parameters in the key: value format, with a space after the colon (:).
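A quick way to catch formatting slips (for example, a missing space after the colon) is to list the lines that do not match the key: value shape. This is only a rough sketch: it skips comments, blank lines, and list items, and the sample file below is made up.

```shell
#!/bin/sh
# Print config.yaml lines that are not in the "key: value" form
# (OMS requires a space after the colon).
check_kv_format() {
  grep -Ev '^[[:space:]]*(#|-|$)' "$1" | grep -Ev '^[A-Za-z_]+:( .+)?$' || true
}

# Demo on a made-up sample with one bad line (no space after the colon):
cat > /tmp/sample_config.yaml <<'EOF'
oms_meta_host: 10.0.0.1
oms_meta_port:2883
cm_nodes:
 - 10.0.0.2
EOF
BAD=$(check_kv_format /tmp/sample_config.yaml)
echo "$BAD"   # prints the offending line, if any
```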

In the following examples of the config.yaml file for the multi-node multi-region deployment mode, OMS is deployed on two nodes in the Hangzhou and Heilongjiang regions.

  • The following example describes a template of the config.yaml file for you to deploy OMS in the Hangzhou region:

    # Information about the OMS MetaDB
    oms_meta_host: ${oms_meta_host}
    oms_meta_port: ${oms_meta_port}
    oms_meta_user: ${oms_meta_user}
    oms_meta_password: ${oms_meta_password}
    
    # You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
    drc_rm_db: ${drc_rm_db}
    drc_cm_db: ${drc_cm_db}
    drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
    
    # The user that consumes the incremental data of OceanBase Database.
    # To read incremental logs of OceanBase Database, create the user in the sys tenant.
    # You must create the drc_user in the sys tenant of the OceanBase cluster to be migrated and specify the drc_user in the config.yaml file.
    drc_user: ${drc_user}
    drc_password: '${drc_password}'
    
    # Configure the OMS cluster in the Hangzhou region.
    # To deploy OMS on multiple nodes in multiple regions, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
    cm_url: ${cm_url}
    cm_location: ${cm_location}
    cm_region: ${cm_region}
    cm_is_default: true
    cm_nodes:
     - ${host_ip1}
     - ${host_ip2}
    # Configurations of the time-series database
    # Default value: false. To enable metric reporting, set the parameter to `true`.
    # tsdb_enabled: false
    # If the `tsdb_enabled` parameter is set to `true`, uncomment the following parameters and specify the values based on your actual configuration.
    # tsdb_service: 'INFLUXDB'
    # tsdb_url: '${tsdb_url}'
    # tsdb_username: ${tsdb_user}
    # tsdb_password: ${tsdb_password}
    
    The following table describes the parameters:

    | Parameter | Description | Required |
    |---|---|---|
    | oms_meta_host | The IP address of the MetaDB, which can be the IP address of a MySQL database or a MySQL tenant of OceanBase Database. Notice: This parameter is valid only in OceanBase Database V2.0 and later. | Yes |
    | oms_meta_port | The port number of the MetaDB. | Yes |
    | oms_meta_user | The username of the MetaDB. | Yes |
    | oms_meta_password | The password of the MetaDB user. | Yes |
    | drc_rm_db | The name of the database for the OMS console. | Yes |
    | drc_cm_db | The name of the database for the CM service. | Yes |
    | drc_cm_heartbeat_db | The name of the heartbeat database for the CM service. | Yes |
    | drc_user | The user that reads the incremental logs of OceanBase Database. You must create the user in the sys tenant. | No |
    | drc_password | The password of the drc_user account. | No |
    | cm_url | The URL of the OMS CM service, for example, http://VIP:8088. Note: To deploy OMS on multiple nodes in multiple regions, you must set this parameter to a VIP or domain name to which all CM servers in the region are mounted; we do not recommend http://127.0.0.1:8088. The access URL of the OMS console is in the format of <IP address of the host on which OMS is deployed>:8089, for example, http://xxx.xxx.xxx.1:8089 or https://xxx.xxx.xxx.1:8089. Port 8088 is used for program calls, and port 8089 is used for web page access. Specify port 8088 here. | Yes |
    | cm_location | The code of the region. Value range: [0,127]. Select a distinct number for each region. Notice: If you upgrade to OMS V3.2.1 from an earlier version, you must set this parameter to 0. | Yes |
    | cm_region | The name of the region, for example, cn-hangzhou. Notice: If you use OMS with the Alibaba Cloud Multi-Site High Availability (MSHA) service in an active-active disaster recovery scenario, use the region configured for the Alibaba Cloud service. | Yes |
    | cm_nodes | The IP addresses of the servers on which the OMS CM service is deployed. In multi-node deployment mode, you must specify multiple IP addresses. | Yes |
    | cm_is_default | Indicates whether this is the default OMS CM service. | No. Default value: true |
    | tsdb_service | The type of the time-series database. Valid values: INFLUXDB and CERESDB. | No. Default value: CERESDB |
    | tsdb_enabled | Indicates whether metric reporting is enabled for monitoring. Valid values: true and false. | No. Default value: false |
    | tsdb_url | The address of the server where InfluxDB is deployed. You must modify this parameter based on your actual environment if you set tsdb_enabled to true. The time-series database serves the whole OMS cluster: even when OMS is deployed in multiple regions, all regions map to the same time-series database. | No |
    | tsdb_username | The username used to connect to the time-series database. You must modify this parameter based on your actual environment if you set tsdb_enabled to true. After you deploy the time-series database, manually create a user and specify the username and password. | No |
    | tsdb_password | The password used to connect to the time-series database. You must modify this parameter based on your actual environment if you set tsdb_enabled to true. | No |
  • The following example describes a template of the config.yaml file for you to deploy OMS in the Heilongjiang region:

    The operations are the same as those for deploying OMS in the Hangzhou region, except that you must modify the following parameters in the config.yaml file: drc_cm_heartbeat_db, cm_url, cm_location, cm_region, cm_is_default, and cm_nodes.

    Notice:

    • When multiple regions exist, you must set the cm_is_default parameter to true for only one region, and set it to false for all other regions.

    • To deploy multiple nodes in the Heilongjiang region, specify the IP addresses of all nodes for the cm_nodes parameter.

    • You must execute the docker_init.sh script on at least one node in each region.

    # Information about the OMS MetaDB
    oms_meta_host: ${meta_ip}
    oms_meta_port: ${meta_port}
    oms_meta_user: ${meta_user}
    oms_meta_password: ${meta_password}
    
    # You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
    drc_rm_db: ${drc_rm_db}
    drc_cm_db: ${drc_cm_db}
    drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
    
    # The user that consumes the incremental data of OceanBase Database.
    # To read incremental logs of OceanBase Database, create the user in the sys tenant.
    # You must create the drc_user in the sys tenant of the OceanBase cluster to be migrated and specify the drc_user in the config.yaml file.
    drc_user: ${drc_user}
    drc_password: '${drc_password}'
    
    # Configure the OMS cluster in the Heilongjiang region.
    # To deploy OMS on multiple nodes in multiple regions, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
    cm_url: ${cm_url}
    cm_location: ${cm_location}
    cm_region: ${cm_region}
    cm_is_default: false
    cm_nodes:
     - ${host_ip1}
     - ${host_ip2}
    # Configurations of the time-series database
    # tsdb_service: 'INFLUXDB'
    # Default value: false. Set the value based on your actual configuration.
    # tsdb_enabled: false
    
    # The IP address of the server where InfluxDB is deployed.
    # You need to modify the following parameters based on the actual environment if you set the tsdb_enabled parameter to true.
    # tsdb_url: ${tsdb_url}
    # tsdb_username: ${tsdb_user}
    # tsdb_password: ${tsdb_password}
    
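As the notice above says, the Heilongjiang template differs from the Hangzhou one only in the region-specific parameters. A small sketch like the following can confirm that two region files do not accidentally share those values (the file contents and paths here are made-up samples, and only three of the region-specific keys are checked):

```shell
#!/bin/sh
# Compare region-specific keys across two config.yaml files.
get() { awk -F': *' -v k="$2" '$1 == k { print $2 }' "$1"; }

# Hypothetical per-region config fragments:
cat > /tmp/cn-hangzhou.yaml <<'EOF'
cm_region: cn-hangzhou
cm_location: 1
cm_is_default: true
EOF
cat > /tmp/cn-heilongjiang.yaml <<'EOF'
cm_region: cn-heilongjiang
cm_location: 2
cm_is_default: false
EOF

CONFLICTS=0
for key in cm_region cm_location cm_is_default; do
  a=$(get /tmp/cn-hangzhou.yaml "$key")
  b=$(get /tmp/cn-heilongjiang.yaml "$key")
  if [ "$a" = "$b" ]; then
    echo "CONFLICT: $key is '$a' in both regions"
    CONFLICTS=$((CONFLICTS + 1))
  fi
done
echo "conflicts found: $CONFLICTS"
```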

Example

  • The following example of the config.yaml file shows sample settings when OMS is deployed in the Hangzhou region:

    oms_meta_host: xxx.xxx.xxx.1
    oms_meta_port: 2883
    oms_meta_user: root@oms#obcluster
    oms_meta_password: oms
    drc_rm_db: oms_rm
    drc_cm_db: oms_cm
    drc_cm_heartbeat_db: oms_cm_heartbeat
    drc_user: drc_user_name
    drc_password: 'OceanBase#oms'
    cm_url: http://VIP:8088
    cm_location: 1
    cm_region: cn-hangzhou
    cm_is_default: true
    cm_nodes:
     - xxx.xxx.xxx.2
     - xxx.xxx.xxx.3
    tsdb_service: 'INFLUXDB'
    tsdb_enabled: true
    tsdb_url: 'xxx.xxx.xxx.5:8086'
    tsdb_username: username
    tsdb_password: 123456
    
  • The following example of the config.yaml file shows sample settings when OMS is deployed in the Heilongjiang region:

    oms_meta_host: xxx.xxx.xxx.1
    oms_meta_port: 2883
    oms_meta_user: root@oms#obcluster
    oms_meta_password: oms
    drc_rm_db: oms_rm
    drc_cm_db: oms_cm
    drc_cm_heartbeat_db: oms_cm_heartbeat_1
    drc_user: drc_user_name
    drc_password: 'OceanBase#oms'
    cm_url: http://xxx.xxx.xxx.6:8088
    cm_location: 2
    cm_region: cn-heilongjiang
    cm_is_default: false
    cm_nodes:
     - xxx.xxx.xxx.6
     - xxx.xxx.xxx.7
    tsdb_service: 'INFLUXDB'
    tsdb_enabled: true
    tsdb_url: 'xxx.xxx.xxx.5:8086'
    tsdb_username: username
    tsdb_password: 123456
    
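One constraint that is easy to violate once many region files exist is that exactly one region may set cm_is_default to true. A sketch of a cross-region check follows; the file names and contents are hypothetical stand-ins for real per-region config.yaml files.

```shell
#!/bin/sh
# Count how many region config files declare themselves the default CM service.
cat > /tmp/region-a.yaml <<'EOF'
cm_is_default: true
EOF
cat > /tmp/region-b.yaml <<'EOF'
cm_is_default: false
EOF

DEFAULTS=$(grep -h '^cm_is_default: true$' /tmp/region-a.yaml /tmp/region-b.yaml | wc -l)
if [ "$DEFAULTS" -eq 1 ]; then
  echo "OK: exactly one region is the default"
else
  echo "ERROR: $DEFAULTS regions set cm_is_default to true" >&2
fi
```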
