OceanBase Migration Service

V3.4.0 Enterprise Edition


Deploy OMS on multiple nodes in a single region

Last Updated: 2026-04-14 07:36:28
What is on this page
Background
Prerequisites
Multi-node deployment architecture
Deploy OMS without the configuration file
Deploy OMS with the configuration file available
Template and example of the configuration file
Configuration file template
Example


This topic describes how to deploy OceanBase Migration Service (OMS) on multiple nodes in a single region by using deployment tools.

Background

  • You can deploy OMS on a single node first and scale out to multiple nodes.

  • If you choose to deploy OMS with the config.yaml configuration file, note that the settings are slightly different from those for the single-node deployment mode. For more information, see the "Template and example of the configuration file" section.

  • To deploy OMS on multiple nodes, you must apply for a virtual IP address (VIP) and use it as the mount point for the OMS console. In addition, you must configure mapping rules for ports 8088 and 8089 in the VIP network policy.

    You can use the VIP to access the OMS console even if an OMS node fails.

Prerequisites

  • The installation environment meets the system requirements. For more information, see System requirements.

  • A cluster is prepared to serve as the OMS MetaDB.

  • The OMS installation package is obtained. Generally, the package is a tar.gz file whose name starts with oms.

  • Make sure that the server on which OMS is to be deployed can connect to all other servers.

  • The downloaded OMS image file has been loaded to the local image repository of the Docker container on each server node.

    docker load -i <Storage path of the OMS image>

    In this topic, the loaded image is referred to as OMS_IMAGE. Replace it with the actual name of your image file.

  • Make sure that all servers involved in the multi-node deployment can connect to each other and that you can obtain root permissions on a node by using its username and password.
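A quick way to sanity-check the connectivity prerequisites above is a short TCP probe run from each node. This is an illustrative sketch, not part of OMS; the console ports 8088 and 8089 come from this topic, and any host you probe is a placeholder for your own node IPs.

```python
import socket

# Illustrative pre-deployment check (not part of OMS): returns True if a TCP
# connection to host:port succeeds within the timeout. Run it from each node
# against the console ports (8088/8089) of every other node.
def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder IP): can_connect("10.0.0.11", 8088)
```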

Multi-node deployment architecture

The following figure shows the multi-node deployment architecture. Store is the log-pulling component; JDBCWriter and Connector are real-time synchronization components. When node OMS A fails, the Store, JDBCWriter, and Connector processes running on it are protected by the HA service and switched over to OMS B or OMS C.

Notice:

By default, the high availability feature is disabled. To ensure the high availability of stores, JDBCWriters, and connectors, enable HA in the OMS console. For more information, see Modify HA configurations.

(Figure: multi-node deployment architecture)

Deploy OMS without the configuration file

  1. Log on to the server where OMS is to be deployed.

  2. Optional. Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script from the loaded image:

    sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy.sh -o <deploy_tool_workdir> -i <IP address of the server> -d <OMS_IMAGE>
    
  5. Follow the prompts to complete the deployment. After you set each parameter, press Enter to move on to the next parameter.

    1. Select the deployment mode.

      Select Multiple Nodes in Single Region.

    2. Select the task.

      Select No Configuration File. Deploy OMS Starting from Configuration File Generation.

    3. Specify the following MetaDB information:

      1. IP address of the MetaDB

      2. Port number of the MetaDB

      3. Username used to connect to the MetaDB

      4. Password used to connect to the MetaDB

      5. Prefix for names of databases in the MetaDB

        For example, if you set the prefix to oms, the final database names are oms_rm, oms_cm, and oms_cm_hb.

    4. Confirm your settings.

      If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    5. If the system displays "The specified database names already exist in the metadatabase. Are you sure that you want to continue?", it indicates that the database names you specified already exist in the MetaDB. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to re-specify the settings.

    6. Perform the following operations to configure the Cluster Manager (CM) service of OMS:

      1. Specify the URL of the CM service, which is the virtual IP address (VIP) or domain name to which all CM servers in the region are mounted. The corresponding parameter is cm-url.

        You can separately specify the IP address and port number in the URL, or use a colon (:) to join them in the <IP address>:<port> format.

        Note:

        The http:// prefix in the URL is optional.

      2. Enter the IP addresses of all servers in the region. Separate them with commas (,).

      3. Confirm the displayed CM settings.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    7. Confirm whether to enable OMS historical data monitoring.

      • If you have deployed a time-series database, enter y and press Enter to go to the next step to configure the time-series database and enable the monitoring of OMS historical data.

      • If you did not deploy a time-series database, enter n and press Enter to go to the step of "determining whether to enable the audit log feature and setting Simple Log Service (SLS) parameters". In this case, OMS does not monitor the historical data after deployment.

    8. Configure the time-series database.

      Perform the following operations:

      1. Confirm whether you have deployed a time-series database.

        If yes, enter y and press Enter. If not, enter n and press Enter to go to the step of "determining whether to enable the audit log feature and setting SLS parameters".

      2. Set the type of the time-series database to INFLUXDB.

        Notice:

        At present, only INFLUXDB is supported.

      3. Enter the URL of the time-series database.

        You can separately enter the IP address and port number in the URL, or use a colon (:) to join them in the <IP address>:<port> format.

      4. Enter the username used to connect to the time-series database.

      5. Enter the password used to connect to the time-series database.

      6. Confirm whether the displayed settings are correct.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    9. Determine whether to enable the audit log feature and set SLS parameters.

      To enable the audit log feature, enter y and press Enter to go to the next step to specify the SLS parameters.

      To start deployment on each node, enter n and press Enter to go to the step of "starting the deployment on each node one after another". In this case, OMS does not audit the logs after deployment.

    10. Specify the following SLS parameters:

      1. URL of SLS

      2. access-key used to access SLS

      3. secret-key used to access SLS

      4. user-site-topic of SLS

      5. ops-site-topic of SLS

    11. Confirm whether the displayed settings are correct.

      If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    12. Start the deployment on each node one after another.

    13. Perform the following operations to specify additional information that is required for the deployment on a node:

      1. Enter the username used to connect to the server.

      2. Enter the password used to connect to the server.

      3. Specify the path of the config.yaml file, which must end with a slash (/).

      4. Specify the root directory to which the OMS container is mounted in the host.

        Use a directory with a large capacity.

      5. Confirm whether the OMS image file is named OMS_IMAGE.

        If yes, enter y and press Enter. If not, enter n and press Enter.

      6. Confirm whether to install a Secure Sockets Layer (SSL) certificate for the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. If not, enter n and press Enter.

    14. Repeat the operations in the previous step on each node until the deployment is completed on all nodes.
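The MetaDB prefix rule described earlier (for example, the prefix oms yields the database names oms_rm, oms_cm, and oms_cm_hb) can be expressed as a one-line rule. The sketch below is for illustration only; metadb_names is a hypothetical helper, not part of the deployment tool.

```python
# Illustrative sketch of the MetaDB naming rule; metadb_names is a hypothetical helper.
def metadb_names(prefix: str) -> list[str]:
    # The deployment tool creates three databases derived from the prefix you enter.
    return [f"{prefix}_{suffix}" for suffix in ("rm", "cm", "cm_hb")]

print(metadb_names("oms"))  # ['oms_rm', 'oms_cm', 'oms_cm_hb']
```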

Deploy OMS with the configuration file available

  1. Log on to the server where OMS is to be deployed.

  2. Optional. Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script from the loaded image:

    sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy.sh -o <deploy_tool_workdir> -c <directory of the config.yaml file> -i <IP address of the server> -d <OMS_IMAGE>
    

    Note:

    For more information about settings of the config.yaml file, see the "Template and example of the configuration file" section.

  5. Follow the prompts to complete the deployment. After you set each parameter, press Enter to move on to the next parameter.

    1. Select the deployment mode.

      Select Multiple Nodes in Single Region.

    2. Select the task.

      Select Use Configuration File Uploaded with Script Option [-c].

    3. If the system displays "The specified database names already exist in the metadatabase. Are you sure that you want to continue?", it indicates that the database names you specified already exist in the MetaDB. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to re-specify the settings.

    4. If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      If the configuration file fails the check, modify the configuration information as prompted.

    5. Start the deployment on each node one after another.

    6. Perform the following operations to specify additional information that is required for the deployment on a node:

      1. Enter the username used to connect to the server.

      2. Enter the password used to connect to the server.

      3. Specify the path of the config.yaml file, which must end with a slash (/).

      4. Specify the root directory to which the OMS container is mounted in the host.

        Use a directory with a large capacity.

      5. Confirm whether the OMS image file is named OMS_IMAGE.

        If yes, enter y and press Enter. Otherwise, enter n and press Enter to modify it.

      6. Confirm whether to install an SSL certificate for the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. If not, enter n and press Enter.

    7. Repeat the operations in the previous step on each node until the deployment is completed on all nodes.

Template and example of the configuration file

Configuration file template

Notice:

  • The same configuration file applies to all nodes in the multi-node deployment architecture. In the configuration file, you must specify the IP addresses of multiple nodes for the cm_nodes parameter and set the cm_url parameter to the VIP corresponding to Port 8088.

  • You must replace the sample values of required parameters based on your actual deployment environment. Optional parameters are commented in this example. You can modify the optional parameters or uncomment the parameters as needed.

  • In the config.yaml file, you must specify the parameters in the key: value format, with a space after the colon (:).

# Information about the OMS MetaDB
oms_meta_host: ${oms_meta_host}
oms_meta_port: ${oms_meta_port}
oms_meta_user: ${oms_meta_user}
oms_meta_password: ${oms_meta_password}

# You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}

# The user that consumes the incremental data of OceanBase Database.
# To read incremental logs of OceanBase Database, create the user in the sys tenant.
# You must create the drc_user in the sys tenant of the OceanBase cluster to be migrated and specify the drc_user in the config.yaml file.
drc_user: ${drc_user}
drc_password: '${drc_password}'

# Configurations of the OMS cluster
# To deploy OMS on multiple nodes in a single region, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
cm_url: ${cm_url}
cm_location: ${cm_location}
# The cm_region parameter is not required for single-region deployment.
# cm_region: ${cm_region}
cm_is_default: true
cm_nodes:
 - ${host_ip1}
 - ${host_ip2}

# Configurations of the time-series database
# Default value: false. To enable metric reporting, uncomment this parameter and set it to `true`.
# tsdb_enabled: false
# If the `tsdb_enabled` parameter is set to `true`, uncomment the following parameters and set them based on your actual configurations.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}
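The `key: value` requirement noted above (a space must follow the colon) can be checked mechanically before deployment. The following Python sketch is illustrative only and not part of OMS; it flags config.yaml lines whose colon is not followed by a space, while ignoring comments, list items, and bare keys that introduce a list:

```python
import re

def check_kv_format(lines):
    """Return (line_number, line) pairs that violate the 'key: value' format.

    A line passes if it is blank, a comment, a list item ('- value'),
    a bare 'key:' introducing a list, or 'key: value' with a space
    after the colon.
    """
    bad = []
    for n, line in enumerate(lines, start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#") or stripped.startswith("- "):
            continue
        # A key, a colon, then either nothing or a space and a value.
        if not re.match(r"^[A-Za-z0-9_]+:( \S.*)?$", stripped):
            bad.append((n, stripped))
    return bad

sample = [
    "oms_meta_host: xxx.xxx.xxx.1",
    "oms_meta_port:2883",       # missing space after the colon -> flagged
    "# cm_region: cn-jiangsu",  # comment, ignored
    "cm_nodes:",                # bare key introducing a list, accepted
    " - xxx.xxx.xxx.2",         # list item, ignored
]
```

Running `check_kv_format(sample)` reports only the second line, which is missing the space after the colon.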
| Parameter | Description | Required |
| --- | --- | --- |
| oms_meta_host | The IP address of the MetaDB, which can be the IP address of a MySQL database or a MySQL tenant of OceanBase Database. Notice: This parameter is valid only in OceanBase Database V2.0 and later. | Yes |
| oms_meta_port | The port number of the MetaDB. | Yes |
| oms_meta_user | The username of the MetaDB. | Yes |
| oms_meta_password | The user password of the MetaDB. | Yes |
| drc_rm_db | The name of the database for the OMS console. | Yes |
| drc_cm_db | The name of the MetaDB for the CM service. | Yes |
| drc_cm_heartbeat_db | The name of the heartbeat database for the CM service. | Yes |
| drc_user | The user that reads the incremental logs of OceanBase Database. You need to create the user in the sys tenant. For more information, see the "User privileges" topic in OMS User Guide. | No |
| drc_password | The password of the drc_user account. | No |
| cm_url | The URL of the OMS CM service. Example: http://VIP:8088. Note: To deploy OMS on multiple nodes in a single region, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted. We do not recommend http://127.0.0.1:8088, because this address does not support the multi-node multi-region deployment mode. Port 8088 is used for program calls and must be specified here; Port 8089 is used for web page access, so the OMS console URL is in the format `IP address of the host on which OMS is deployed:8089`, for example http://xxx.xxx.xxx.1:8089 or https://xxx.xxx.xxx.1:8089. | Yes |
| cm_location | The code of the region. Value range: [0,127]. You can select one number for each region. Notice: If you upgrade to OMS V3.2.1 from an earlier version, you must set the cm_location parameter to 0. | Yes |
| cm_region | The name of the region. Example: cn-jiangsu. Notice: If you use OMS with the Alibaba Cloud Multi-Site High Availability (MSHA) service in an active-active disaster recovery scenario, use the region configured for the Alibaba Cloud service. | No |
| cm_nodes | The IP addresses of servers on which the OMS CM service is deployed. In multi-node deployment mode, you must specify multiple IP addresses for the parameter. | Yes |
| cm_is_default | Indicates whether the OMS CM service is enabled by default. | No. Default value: true |
| tsdb_service | The type of the time-series database. Valid values: INFLUXDB and CERESDB. | No. Default value: CERESDB |
| tsdb_enabled | Indicates whether metric reporting is enabled for monitoring. Valid values: true and false. | No. Default value: false |
| tsdb_url | The IP address of the server where InfluxDB is deployed. Modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true. | No |
| tsdb_username | The username used to connect to the time-series database. Modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true. After you deploy the time-series database, manually create a user and specify the username and password. | No |
| tsdb_password | The password used to connect to the time-series database. Modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true. | No |
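A config.yaml can also be checked against the required parameters listed above before you start the deployment. The Python sketch below is illustrative and not part of OMS: the required-key list is transcribed from the table, and a naive line parser stands in for a real YAML library.

```python
# Required keys, transcribed from the parameter table above.
REQUIRED_KEYS = {
    "oms_meta_host", "oms_meta_port", "oms_meta_user", "oms_meta_password",
    "drc_rm_db", "drc_cm_db", "drc_cm_heartbeat_db",
    "cm_url", "cm_location", "cm_nodes",
}

def missing_required(config_text):
    """Return a sorted list of required keys absent from the config text.

    Only top-level 'key: value' and bare 'key:' lines are considered;
    comments and list items are ignored.
    """
    present = set()
    for line in config_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#") or stripped.startswith("-"):
            continue
        present.add(stripped.split(":", 1)[0].strip())
    return sorted(REQUIRED_KEYS - present)

# Abbreviated version of the example configuration from this page.
example = """\
oms_meta_host: xxx.xxx.xxx.1
oms_meta_port: 2883
oms_meta_user: root@oms#obcluster
oms_meta_password: oms
drc_rm_db: oms_rm
drc_cm_db: oms_cm
drc_cm_heartbeat_db: oms_cm_heartbeat
cm_url: http://xxx.xxx.xxx.2:8088
cm_location: 100
cm_nodes:
 - xxx.xxx.xxx.2
"""
```

With all required keys present, `missing_required(example)` returns an empty list; dropping a required parameter such as cm_url makes it appear in the result.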

Example

oms_meta_host: xxx.xxx.xxx.1
oms_meta_port: 2883
oms_meta_user: root@oms#obcluster
oms_meta_password: oms
drc_rm_db: oms_rm
drc_cm_db: oms_cm
drc_cm_heartbeat_db: oms_cm_heartbeat
drc_user: drc_user_name
drc_password: 'OceanBase#oms'
cm_url: http://xxx.xxx.xxx.2:8088
cm_location: 100
cm_region: cn-anhui
cm_is_default: true
cm_nodes:
 - xxx.xxx.xxx.2
 - xxx.xxx.xxx.3
tsdb_service: 'INFLUXDB'
tsdb_enabled: true
tsdb_url: 'xxx.xxx.xxx.5:8086'
tsdb_username: username
tsdb_password: 123456
