OceanBase Migration Service

V4.3.1, Enterprise Edition


Deploy OMS on multiple nodes in multiple regions

Last Updated: 2025-10-09 03:34:24

This topic describes how to deploy OceanBase Migration Service (OMS) on multiple nodes in multiple regions by using the deployment tool.

Background information

As OMS is adopted for more data migration scenarios, it must adapt to an increasingly diverse range of deployments. In addition to single-region data migration and data synchronization, OMS supports cross-region data synchronization, data migration between IDCs in different regions, and active-active data synchronization.

[Figure: multi-region OMS deployment architecture]

You can deploy OMS on one or more nodes in each region. Deploying OMS on multiple nodes in a region provides a high-availability environment, and OMS starts components on the appropriate nodes based on the tasks at hand.

For example, if you want to synchronize data from the Hangzhou region to the Heilongjiang region, OMS starts the Store component on a node in the Hangzhou region to collect incremental logs, and starts the Incr-Sync component on a node in the Heilongjiang region to synchronize the incremental data.

Note the following about multi-node deployment:

  • You can deploy OMS on a single node first and then scale out to multiple nodes. For more information, see Scale out OMS.

  • To deploy OMS on multiple nodes across multiple regions, you must apply for a virtual IP address (VIP) for each region and use it as the access point for the OMS console. In addition, you must configure mapping rules for ports 8088 and 8089 in the VIP network policy.

    You can use the VIP to access the OMS console even if an OMS node fails.

OMS V4.3.1 and later allow you to specify a primary region when you deploy OMS across multiple regions. After you specify the primary region, only the primary region starts the management process; the other regions do not.

Notice

You must deploy the primary region before you deploy the other regions.

Prerequisites

  • The installation environment meets the system and network requirements. For more information, see System and network requirements.

  • You have created a resource manager (RM) database, as well as a cluster manager (CM) database and a heartbeat database for each region for the MetaDB of OMS.

  • The server on which you deploy OMS can connect to all the other servers.

  • All servers involved in the multi-node deployment can connect to each other, and you can obtain root privileges on each node by using its username and password.

  • You have obtained the installation package of OMS, which is generally a tar.gz file whose name starts with oms.

  • You have downloaded the OMS installation package and loaded it into the local Docker image repository on each server node:

    docker load -i <OMS installation package>
    
  • You have prepared a directory for mounting the OMS container. In the mount directory, OMS creates the /home/admin/logs, /home/ds/store, and /home/ds/run directories to store the component information and logs generated while OMS is running.

  • (Optional) You have prepared a time-series database for storing performance monitoring data and DDL/DML statistics of OMS.
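The locally checkable prerequisites above can be verified with a short pre-flight script. The sketch below is illustrative rather than official OMS tooling: the mount directory path is a hypothetical stand-in, and the port numbers are the console ports mentioned later in this topic.

```shell
# Hypothetical pre-flight check for one OMS node (a sketch, not part of
# the official OMS tooling). MOUNT_DIR is an illustrative stand-in for the
# container mount directory you prepared; adjust it to your environment.
MOUNT_DIR=${MOUNT_DIR:-/tmp/oms-mount}

# The mount directory must exist and be writable by the deployment user.
mkdir -p "$MOUNT_DIR"
[ -w "$MOUNT_DIR" ] && echo "mount dir OK: $MOUNT_DIR"

# The OMS console ports (8088 and 8089) should not already be in use.
for port in 8088 8089; do
  if command -v ss >/dev/null 2>&1 && ss -ltn 2>/dev/null | grep -q ":$port "; then
    echo "port $port already in use"
  else
    echo "port $port appears free"
  fi
done
```

Running a check like this on every node before deployment surfaces a missing prerequisite early instead of mid-deployment.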

Terms

You need to replace the variable names in some commands and prompts. Variable names are enclosed in angle brackets (<>).

  • OMS container mount directory: See the description of the mount directory in the "Prerequisites" section of this topic.

  • IP address of the server: the IP address of the host that executes the script.

  • OMS_IMAGE: the unique identifier of the loaded image. After you load the OMS installation package by using Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] of the loaded image. The obtained value is the unique identifier of the loaded image. Here is an example:

    $ sudo docker images
    REPOSITORY                                      TAG             IMAGE ID
    work.oceanbase-dev.com/obartifact-store/oms     feature_3.4.0   2a6a77257d35
    

    In this example, <OMS_IMAGE> can be work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0 or 2a6a77257d35. Replace the value of <OMS_IMAGE> in related commands with the preceding value.

  • Directory of the config.yaml file: If you want to deploy OMS based on an existing config.yaml configuration file, this directory is the one where the configuration file is located.
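For illustration, the `<OMS_IMAGE>` value in `REPOSITORY:TAG` form can also be derived from the `docker images` output programmatically. In this sketch the output is hard-coded into a variable; on a real host you would capture it with `images=$(sudo docker images)` instead:

```shell
# Derive <OMS_IMAGE> in REPOSITORY:TAG form from `docker images` output.
# The output below is hard-coded for illustration.
images='REPOSITORY                                      TAG             IMAGE ID
work.oceanbase-dev.com/obartifact-store/oms     feature_3.4.0   2a6a77257d35'
OMS_IMAGE=$(echo "$images" | awk 'NR==2 {print $1 ":" $2}')
echo "$OMS_IMAGE"   # work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0
```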

Deployment procedure without a configuration file

To modify the configuration after deployment, perform the following steps:

Notice

If you deploy OMS on multiple nodes in multiple regions, you must manually modify the configuration of each node.

  1. Log in to the OMS container.

  2. Modify the config.yaml file in the /home/admin/conf/ directory as needed.

  3. Initialize the metadata.

    sh /root/docker_init.sh
    

Integrated deployment mode

  1. Log in to the server where OMS is to be deployed.

  2. (Optional) Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script docker_remote_deploy.sh from the loaded image:

    sudo docker run -d --net host --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    

    Here is an example:

    sudo docker run -d --net host --name oms-config-tool work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0 bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy.sh -o <Mount directory of the OMS container> -i <IP address of the server> -d <OMS_IMAGE>
    
  5. Complete the deployment for the first region as prompted. After you set each parameter, press Enter to move on to the next parameter.

    1. Select a deployment mode.

      Select Multiple Regions.

    2. Select a task.

      Select No configuration file. Deploy OMS for the first time. Start from generating the configuration file.

    3. Determine whether to deploy the RM database and CM database separately. By default, they do not need to be deployed separately.

      If yes, enter y.

    4. Configure the MetaDB for storing the metadata generated during the running of OMS.

      1. Enter the IP address, port, username, and password of the MetaDB.

        If you need to deploy the RM database and CM database separately, enter the IP address, port, username, and password of the RM database and CM database.

      2. Set a prefix for names of databases in the MetaDB.

        For example, when the prefix is set to oms, the databases in the MetaDB are named oms_rm, oms_cm, and oms_cm_hb.

      3. Confirm your settings.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      4. If the system displays "Database name already exists in the MetaDB. Continue?", the database names you specified already exist in the MetaDB. This may be caused by a repeated deployment or an upgrade of OMS. Enter y and press Enter to proceed, or enter n and press Enter to modify the settings.

    5. Perform the following operations to configure the OMS cluster settings:

      1. Enter the region ID, for example, cn-hangzhou.

      2. Enter the region ID for the parameter cm_region_cn. It is the same as cm_region, for example, cn-hangzhou.

      3. Specify the URL of the CM service, which is the VIP or domain name to which all CM servers in the region are mounted. The original parameter is cm-url.

        Enter the VIP or domain name as the URL of the CM service. You can separately specify the IP address and port number in the URL, or use a colon (:) to join the IP address and port number in the <IP address>:<port number> format.

        Note

        The http:// prefix in the URL is optional.

      4. Enter the IP addresses of all servers in the region. Separate them with commas (,).

      5. Set a region ID for the current region. Valid values: 0 to 127.

        The ID uniquely identifies a region.

      6. Confirm whether the displayed OMS cluster settings are correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    6. Determine whether to monitor historical data of OMS.

      • If you have deployed a time-series database in Step 2, enter y and press Enter to go to the step of configuring the time-series database and enable monitoring for OMS historical data.

      • If you chose not to deploy a time-series database in Step 2, enter n and press Enter to go to the step of determining whether to enable the audit log feature and configure Simple Log Service (SLS) parameters. In this case, OMS does not monitor the historical data after deployment.

    7. Configure the time-series database.

      Perform the following operations:

      1. Confirm whether you have deployed a time-series database.

        Enter the value based on the actual situation. If yes, enter y and press Enter. If no, enter n and press Enter to go to the step of determining whether to enable the audit log feature and set SLS parameters.

      2. Set the type of the time-series database to INFLUXDB.

        Notice

        At present, only INFLUXDB is supported.

      3. Enter the URL, username, and password of the time-series database. For more information, see Deploy a time-series database.

      4. Confirm whether the displayed settings are correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    8. Determine whether to enable the audit log feature and write the audit logs to SLS.

      To enable the audit log feature, enter y and press Enter to go to the next step to specify the SLS parameters.

      Otherwise, enter n and press Enter to start the deployment. In this case, OMS does not audit the logs after deployment.

    9. Specify the SLS parameters.

      1. Set the SLS parameters as prompted.

      2. Confirm whether the displayed settings are correct.

      If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    10. If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.

      If the configuration file fails the check, modify the settings as prompted.

    11. Start the deployment on each node one after another.

      1. Specify the directory to which the OMS container is mounted on the node.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the OMS image file can be named <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

  6. Start deployment for a new region as prompted after you complete the deployment for the first region.

    1. Determine whether to deploy OMS in a new region.

      After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"

      If yes, enter y and press Enter to proceed. If no, enter n and press Enter to end the deployment process.

    2. Enter the OMS cluster configuration information as prompted and confirm the settings.

    3. Determine whether to separately deploy the RM database and the CM database for the new region. By default, they do not need to be deployed separately.

      If you need to deploy the RM database and CM database separately, enter the IP address, port, username, and password of the CM database for the new region as prompted.

    4. Deploy the nodes in this region one by one as prompted.

      If the deployment fails, you can log in to the OMS container and view logs in the .log files prefixed with docker_init in the /home/admin/logs directory. If the OMS container fails to be started, you cannot obtain logs.

  7. Determine whether to deploy OMS in a new region.

    After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"

    If yes, enter y and press Enter to proceed. If no, enter n and press Enter to end the deployment process.
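The database-name prefix entered in the MetaDB configuration step above expands as in this small sketch, using the prefix oms from the example in the procedure:

```shell
# Names OMS derives in the MetaDB from the prefix you enter at the prompt.
PREFIX=oms
echo "${PREFIX}_rm ${PREFIX}_cm ${PREFIX}_cm_hb"   # oms_rm oms_cm oms_cm_hb
```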

Separate deployment mode

  1. Log in to the server where OMS is to be deployed.

  2. (Optional) Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script docker_remote_deploy_v2.sh from the loaded image:

    sudo docker run -d --net host --name oms-config-tool <management image or component image> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy_v2.sh . && sudo docker rm -f oms-config-tool
    

    Here is an example:

    sudo docker run -d --net host --name oms-config-tool 0719**** bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy_v2.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy_v2.sh -o <OMS container mount directory> -i <IP address of the server> -v <management image> -s <component image>
    

    Here is an example:

    sh docker_remote_deploy_v2.sh -o /home/l****.***/l****_oms_run_022102 -i xxx.xxx.xxx.1 -v 0719**** -s 188a****
    
  5. Complete the deployment for the first region as prompted. After you set each parameter, press Enter to move on to the next parameter.

    1. Select a deployment mode.

      Select Multiple Regions.

    2. Select a task.

      Select No configuration file. Deploy OMS for the first time. Start from generating the configuration file.

    3. Determine whether to deploy the RM database and CM database separately. By default, they do not need to be deployed separately.

      If yes, enter y.

    4. Configure the MetaDB for storing the metadata generated during the running of OMS.

      1. Enter the IP address, port, username, and password of the MetaDB.

        If you need to deploy the RM database and CM database separately, enter the IP address, port, username, and password of the RM database and CM database.

      2. Set a prefix for names of databases in the MetaDB.

        For example, when the prefix is set to oms, the databases in the MetaDB are named oms_rm, oms_cm, and oms_cm_hb.

      3. Confirm your settings.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      4. If the system displays "Database name already exists in the MetaDB. Continue?", the database names you specified already exist in the MetaDB. This may be caused by a repeated deployment or an upgrade of OMS. Enter y and press Enter to proceed, or enter n and press Enter to modify the settings.

    5. Perform the following operations to configure the OMS cluster settings:

      1. Enter the region ID, for example, cn-hangzhou.

      2. Enter the region ID for the parameter cm_region_cn. It is the same as cm_region, for example, cn-hangzhou.

      3. Specify the URL of the CM service, which is the VIP or domain name to which all CM servers in the region are mounted. The original parameter is cm-url.

        Enter the VIP or domain name as the URL of the CM service. You can separately specify the IP address and port number in the URL, or use a colon (:) to join the IP address and port number in the <IP address>:<port number> format. For example, enter xxx.xxx.xxx.1:8088.

        Note

        The http:// prefix in the URL is optional.

      4. Enter the IP addresses of all management nodes in the region. Separate them with commas (,).

        For example, enter the IP address of one management node: xxx.xxx.xxx.1.

      5. Enter the IP addresses of all component nodes in the region. Separate them with commas (,).

        For example, enter the IP address of two component nodes: xxx.xxx.xxx.1,xxx.xxx.xxx.2.

      6. Set a region ID for the current region. Valid values: 0 to 127.

        The ID uniquely identifies a region.

      7. Confirm whether the current region is the primary region.

        If yes, enter y and press Enter to proceed. In the OMS cluster settings of the next step, primary_region_ip will be empty.

        If no, enter n and press Enter. Then, you need to enter the IP address of the primary region. In the OMS cluster settings of the next step, primary_region_ip will display the IP address of the primary region.

      8. Confirm whether the displayed OMS cluster settings are correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    6. Determine whether to monitor historical data of OMS.

      • If you have deployed a time-series database in Step 2, enter y and press Enter to go to the step of configuring the time-series database and enable monitoring for OMS historical data.

      • If you chose not to deploy a time-series database in Step 2, enter n and press Enter to go to the step of determining whether to enable the audit log feature and configure Simple Log Service (SLS) parameters. In this case, OMS does not monitor the historical data after deployment.

    7. Configure the time-series database.

      Perform the following operations:

      1. Confirm whether you have deployed a time-series database.

        Enter the value based on the actual situation. If yes, enter y and press Enter. If no, enter n and press Enter to go to the step of determining whether to enable the audit log feature and set SLS parameters.

      2. Set the type of the time-series database to INFLUXDB.

        Notice

        At present, only INFLUXDB is supported.

      3. Enter the URL, username, and password of the time-series database. For more information, see Deploy a time-series database.

      4. Confirm whether the displayed settings are correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    8. Determine whether to enable the audit log feature and write the audit logs to SLS.

      To enable the audit log feature, enter y and press Enter to go to the next step to specify the SLS parameters.

      Otherwise, enter n and press Enter to start the deployment. In this case, OMS does not audit the logs after deployment.

    9. Specify the SLS parameters.

      1. Set the SLS parameters as prompted.

      2. Confirm whether the displayed settings are correct.

      If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    10. If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.

    11. Deploy the management nodes one by one as prompted.

      1. Enter the mount directory on the management node for deploying the OMS container.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the OMS image file can be named <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

      4. Confirm whether the path to which the config.yaml configuration file will be written is correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      5. Start to deploy the management nodes.

        To deploy multiple management nodes, complete the deployment on one server and then another until all management nodes are deployed.

    12. Deploy the component nodes one by one as prompted.

      1. Specify the directory to which the OMS container is mounted on the component node.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the OMS image file can be named <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

      4. Confirm whether the path to which the config.yaml configuration file will be written is correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      5. Start to deploy the component nodes.

        To deploy multiple component nodes, complete the deployment on one server and then another until all component nodes are deployed.

  6. Start deployment for a new region as prompted after you complete the deployment for the first region.

    1. Determine whether to deploy OMS in a new region.

      After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"

      If yes, enter y and press Enter to proceed. If no, enter n and press Enter to end the deployment process.

    2. Enter the OMS cluster configuration information as prompted and confirm the settings.

    3. Determine whether to separately deploy the RM database and the CM database for the new region. By default, they do not need to be deployed separately.

      If you need to deploy the RM database and CM database separately, enter the IP address, port, username, and password of the CM database for the new region as prompted.

    4. Deploy the management nodes one by one as prompted.

      To deploy multiple management nodes, complete the deployment on one server and then another until all management nodes are deployed.

    5. Deploy the component nodes one by one as prompted.

      To deploy multiple component nodes, complete the deployment on one server and then another until all component nodes are deployed.

  7. Determine whether to deploy OMS in a new region.

    After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"

    If yes, enter y and press Enter to proceed. If no, enter n and press Enter to end the deployment process.
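The CM service URL entered in the procedures above accepts either a plain `<IP address>:<port number>` value or one with the optional http:// prefix. A sketch of how such a URL splits into its address and port (the values are placeholders):

```shell
# Split the CM service URL into IP and port; the http:// prefix is optional.
cm_url="http://xxx.xxx.xxx.1:8088"   # placeholder value
addr="${cm_url#http://}"             # strip the optional scheme
ip="${addr%:*}"                      # everything before the last colon
port="${addr##*:}"                   # everything after the last colon
echo "$ip $port"                     # xxx.xxx.xxx.1 8088
```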

Deployment procedure with a configuration file

Integrated deployment mode

  1. Log in to the server where OMS is to be deployed.

  2. (Optional) Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script docker_remote_deploy.sh from the loaded image:

    sudo docker run -d --net host --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy.sh -o <Mount directory of the OMS container> -c <Directory of the existing config.yaml file> -i <IP address of the host> -d <OMS_IMAGE>
    

    Note

    For more information about settings of the config.yaml file, see the "Template and example of a configuration file" section.

  5. Complete the deployment as prompted. After you set each parameter, press Enter to move on to the next parameter.

    1. Select a deployment mode.

      Select Multiple Regions.

    2. Select a task.

      Select Reference configuration file has been passed in through the [-c] option of the script. Start to configure based on the file.

    3. If the system displays "Database name already exists in the MetaDB. Continue?", the database names you specified already exist in the RM database and CM database of the MetaDB in the original configuration file. This may be caused by a repeated deployment or an upgrade of OMS. Enter y and press Enter to proceed, or enter n and press Enter to modify the settings.

    4. If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.

      If the configuration file fails the check, modify the settings as prompted.

    5. Start the deployment on each node one after another.

      1. Specify the directory to which the OMS container is mounted on the node.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the OMS image file can be named <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

    6. Determine whether to deploy OMS in a new region.

      After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"

      If yes, enter y and press Enter to proceed. If no, enter n and press Enter to end the deployment process.

    7. Perform the following operations to configure the OMS cluster settings:

      1. Enter the region ID, for example, cn-hangzhou.

      2. Enter the region ID for the parameter cm_region_cn. It is the same as cm_region, for example, cn-hangzhou.

      3. Set a region ID for the current region. Valid values: 0 to 127.

        The ID uniquely identifies a region.

    8. A message is displayed, showing the names and IDs of existing regions, to help you avoid using an existing name or ID for a new region.

    9. Repeat the deployment steps on each node in the region.

      If the deployment fails, you can log in to the OMS container and view logs in the .log files prefixed with docker_init in the /home/admin/logs directory. If the OMS container fails to be started, you cannot obtain logs.
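The log-inspection hint above can be sketched as follows; the directory is simulated here with a temporary path, while on a real node the files live in /home/admin/logs inside the OMS container:

```shell
# Find deployment logs whose names start with docker_init.
LOGDIR=$(mktemp -d)    # stands in for /home/admin/logs in the container
touch "$LOGDIR/docker_init_deploy.log" "$LOGDIR/other.log"
ls "$LOGDIR" | grep '^docker_init'   # prints docker_init_deploy.log
```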

Separate deployment mode

  1. Log in to the server where OMS is to be deployed.

  2. (Optional) Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script docker_remote_deploy_v2.sh from the loaded image:

    sudo docker run -d --net host --name oms-config-tool <management image or component image> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy_v2.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy_v2.sh -o <Mount directory of the OMS container> -c <Directory of the existing config.yaml file> -i <IP address of the host> -v <management image> -s <component image>
    

    Note

    For more information about settings of the config.yaml file, see the "Template and example of a configuration file" section.

  5. Complete the deployment as prompted. After you set each parameter, press Enter to move on to the next parameter.

    1. Select a deployment mode.

      Select Multiple Regions.

    2. Select a task.

      Select Reference configuration file has been passed in through the [-c] option of the script. Start to configure based on the file.

    3. If the system displays "Database name already exists in the MetaDB. Continue?", the database names you specified already exist in the RM database and CM database of the MetaDB in the original configuration file. This may be caused by a repeated deployment or an upgrade of OMS. Enter y and press Enter to proceed, or enter n and press Enter to modify the settings.

    4. If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.

      If the configuration file fails the check, modify the settings as prompted.

    5. Deploy the management nodes one by one as prompted.

      1. Enter the mount directory on the management node for deploying the OMS container.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the OMS image file can be named <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

      4. Confirm whether the path to which the config.yaml configuration file will be written is correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      5. Start to deploy the management nodes.

        To deploy multiple management nodes, complete the deployment on one server and then another until all management nodes are deployed.

    6. Deploy the component nodes one by one as prompted.

      1. Specify the directory to which the OMS container is mounted on the component node.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the OMS image file can be named <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

      4. Confirm whether the path to which the config.yaml configuration file will be written is correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      5. Start to deploy the component nodes.

        To deploy multiple component nodes, complete the deployment on one server and then another until all component nodes are deployed.

    7. Determine whether to deploy OMS in a new region.

      After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"

      If yes, enter y and press Enter to proceed. If no, enter n and press Enter to end the deployment process.

    8. Perform the following operations to configure the OMS cluster settings:

      1. Enter the region ID, for example, cn-hangzhou.

      2. Enter the region ID for the parameter cm_region_cn. It is the same as cm_region, for example, cn-hangzhou.

      3. Set a region ID for the current region. Valid values: 0 to 127.

        The ID uniquely identifies a region.

    9. A message is displayed, showing the names and IDs of existing regions, to help you avoid using an existing name or ID for a new region.

    10. Repeat the deployment steps on each management node and component node in the region until all nodes are deployed.
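The region ID prompted for in the procedures above must fall in [0,127]; a hypothetical pre-check before typing the value at the prompt (the ID below is an example):

```shell
# Validate a region ID against the allowed range [0,127] (example value).
region_id=42
if [ "$region_id" -ge 0 ] && [ "$region_id" -le 127 ]; then
  echo "region id $region_id is valid"
else
  echo "region id $region_id is out of range" >&2
fi
```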

Template and example of the configuration file

Configuration file template

The configuration file template in this topic is used for the regular password-based login method. If you log in to the OMS console by using single sign-on (SSO), you must integrate the OpenID Connect (OIDC) protocol and add parameters in the config.yaml file template. For more information, see Integrate the OIDC protocol to OMS to implement SSO.

Notice

  • To deploy multiple nodes in the Hangzhou region, specify the IP addresses of all nodes for the cm_nodes parameter.

  • You must replace the sample values of required parameters based on your actual deployment environment. Both the required and optional parameters are described in the following table. You can specify the optional parameters as needed.

  • In the config.yaml file, you must specify the parameters in the key: value format, with a space after the colon (:).
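The key: value rule in the last bullet can be checked mechanically. This sketch writes a two-line sample file (one valid entry, one missing the required space after the colon) and flags the offender; the file path is an example:

```shell
# Flag config entries where the colon is not followed by a space.
cat > /tmp/oms_config_check.yaml <<'EOF'
cm_region: cn-hangzhou
cm_region_cn:cn-hangzhou
EOF
grep -nE '^[A-Za-z_]+:[^ ]' /tmp/oms_config_check.yaml   # prints line 2 only
```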

In the following example of the config.yaml file for the multi-node multi-region deployment mode, OMS is deployed on two nodes separately in the Hangzhou and Heilongjiang regions. OMS V4.3.1 and later support the primary region feature. In this example, Hangzhou is the primary region.

  • Here is a template of the config.yaml file for you to deploy OMS in the primary region Hangzhou:

    # Information about the RM database and CM database
    oms_cm_meta_host: ${oms_cm_meta_host}
    oms_cm_meta_password: ${oms_cm_meta_password}
    oms_cm_meta_port: ${oms_cm_meta_port}
    oms_cm_meta_user: ${oms_cm_meta_user}
    oms_rm_meta_host: ${oms_rm_meta_host}
    oms_rm_meta_password: ${oms_rm_meta_password}
    oms_rm_meta_port: ${oms_rm_meta_port}
    oms_rm_meta_user: ${oms_rm_meta_user}
    
    # You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
    drc_rm_db: ${drc_rm_db}
    drc_cm_db: ${drc_cm_db}
    drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
    
    # Configure the OMS cluster in the Hangzhou region.
    # To deploy OMS on multiple nodes in multiple regions, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
    cm_url: ${cm_url}
    cm_location: ${cm_location}
    cm_region: ${cm_region}
    cm_region_cn: ${cm_region_cn}
    cm_nodes:
     - ${host_ip1}
 - ${host_ip2}
    
    console_nodes:
     - ${host_ip3}
     - ${host_ip4}
    
    # primary_region_ip is empty to indicate that the current region is the primary region.
    "primary_region_ip": ""
    
    # Configurations of the time-series database
    # The default value of `tsdb_enabled`, which specifies whether to configure a time-series database, is `false`. To enable metric reporting, set the parameter to `true`.
    # tsdb_enabled: false 
    # If you set the `tsdb_enabled` parameter to `true`, uncomment the following parameters and set their values based on your actual configuration.
    # tsdb_service: 'INFLUXDB'
    # tsdb_url: '${tsdb_url}'
    # tsdb_username: ${tsdb_user}
    # tsdb_password: ${tsdb_password}
    
    | Parameter | Description | Required? |
    | --- | --- | --- |
    | oms_cm_meta_host | The IP address of the CM database. It can only be a MySQL-compatible tenant of OceanBase Database V2.0 or later. | Yes |
    | oms_cm_meta_password | The password for connecting to the CM database. | Yes |
    | oms_cm_meta_port | The port number for connecting to the CM database. | Yes |
    | oms_cm_meta_user | The username for connecting to the CM database. | Yes |
    | oms_rm_meta_host | The IP address of the RM database. It can only be a MySQL-compatible tenant of OceanBase Database V2.0 or later. | Yes |
    | oms_rm_meta_password | The password for connecting to the RM database. | Yes |
    | oms_rm_meta_port | The port number for connecting to the RM database. | Yes |
    | oms_rm_meta_user | The username for connecting to the RM database. | Yes |
    | drc_rm_db | The name of the database for the OMS console. | Yes |
    | drc_cm_db | The name of the database for the CM service. | Yes |
    | drc_cm_heartbeat_db | The name of the heartbeat database for the CM service. | Yes |
    | cm_url | The URL of the OMS CM service, for example, http://VIP:8088. Note: To deploy OMS on multiple nodes in multiple regions, you must set this parameter to a VIP or domain name to which all CM servers in the current region are mounted; we recommend that you do not set it to http://127.0.0.1:8088. The OMS console is accessed at the IP address of the host where OMS is deployed followed by port 8089, for example, http://xxx.xxx.xxx.xxx:8089 or https://xxx.xxx.xxx.xxx:8089. Port 8088 is used for program calls and port 8089 for web page access; you must specify port 8088 here. | Yes |
    | cm_location | The code of the region. Value range: [0,127]. Assign a different number to each region. Notice: If you upgrade to OMS V3.2.1 from an earlier version, you must set this parameter to 0. | Yes |
    | cm_region | The name of the region, for example, cn-hangzhou. Notice: If you use OMS with the Alibaba Cloud Multi-Site High Availability (MSHA) service in an active-active disaster recovery scenario, use the region configured for the Alibaba Cloud service. The active-active disaster recovery feature is deprecated in OMS V4.3.1. | Yes |
    | cm_region_cn | Set this parameter to the same value as cm_region. | Yes |
    | cm_nodes | In multi-node deployment mode, specify multiple IP addresses. In integrated deployment mode, cm_nodes indicates the IP addresses of the servers on which the OMS CM service is deployed. In separate deployment mode, cm_nodes indicates the IP addresses of the servers on which the components are deployed. | Yes |
    | console_nodes | In integrated deployment mode, the value of console_nodes is the same as that of cm_nodes. In separate deployment mode, console_nodes indicates the IP addresses of the servers on which the management nodes are deployed. | Optional in integrated deployment mode; required in separate deployment mode. |
    | primary_region_ip | If this parameter is not specified or is empty, the current region is the primary region. If it is not empty, the current server is in a secondary region, the console is not started on it, and initialization requests are sent to the specified primary_region_ip, for example, "primary_region_ip": "xxx.xxx.xxx.1". | No |
    | tsdb_service | The type of the time-series database. Valid values: INFLUXDB and CERESDB. | No. Default value: INFLUXDB. |
    | tsdb_enabled | Specifies whether metric reporting is enabled for monitoring. Valid values: true and false. | No. Default value: false. |
    | tsdb_url | The address of the server where InfluxDB is deployed. Modify this parameter based on your actual environment if you set tsdb_enabled to true. The whole OMS cluster maps to a single time-series database: even when OMS is deployed in multiple regions, all regions report to the same time-series database. | No |
    | tsdb_username | The username for connecting to the time-series database. Modify this parameter based on your actual environment if you set tsdb_enabled to true. After you deploy the time-series database, manually create a user and specify the username and password. | No |
    | tsdb_password | The password for connecting to the time-series database. Modify this parameter based on your actual environment if you set tsdb_enabled to true. | No |
  • Here is a template of the config.yaml file for you to deploy OMS in the Heilongjiang region:

    The operations are the same as those for deploying OMS in the Hangzhou region, except that you must modify the following parameters in the config.yaml file: drc_cm_heartbeat_db, cm_url, cm_location, cm_region, cm_region_cn, and cm_nodes.

    Notice

    • To deploy multiple nodes in the Heilongjiang region, specify the IP addresses of all nodes for the cm_nodes parameter.

    • You must execute the docker_init.sh script on at least one node in each region.

    # Information about the RM database and CM database
    oms_cm_meta_host: ${oms_cm_meta_host}
    oms_cm_meta_password: ${oms_cm_meta_password}
    oms_cm_meta_port: ${oms_cm_meta_port}
    oms_cm_meta_user: ${oms_cm_meta_user}
    oms_rm_meta_host: ${oms_rm_meta_host}
    oms_rm_meta_password: ${oms_rm_meta_password}
    oms_rm_meta_port: ${oms_rm_meta_port}
    oms_rm_meta_user: ${oms_rm_meta_user}
    
    # You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
    drc_rm_db: ${drc_rm_db}
    drc_cm_db: ${drc_cm_db}
    drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
    
    # Configure the OMS cluster in the Heilongjiang region.
    # To deploy OMS on multiple nodes in multiple regions, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
    cm_url: ${cm_url}
    cm_location: ${cm_location}
    cm_region: ${cm_region}
    cm_region_cn: ${cm_region_cn}
    cm_nodes:
     - ${host_ip1}
     - ${host_ip2}
    
    console_nodes:
     - ${host_ip3}
     - ${host_ip4}
    
    # Configurations of the time-series database
    # tsdb_service: 'INFLUXDB'
    # Default value: false. Set the value based on your actual configuration.
    # tsdb_enabled: false
    
    # The IP address of the server where InfluxDB is deployed.
    # You need to modify the following parameters based on the actual environment if you set the tsdb_enabled parameter to true.
    # tsdb_url: ${tsdb_url}
    # tsdb_username: ${tsdb_user}
    # tsdb_password: ${tsdb_password}
    
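The `${...}` placeholders in the templates above happen to match the syntax of Python's `string.Template`, so one way to fill a template programmatically is a short script like the following. This is an illustrative sketch, not part of OMS; the placeholder names come from the template above, and the concrete values are examples only:

```python
from string import Template

# A fragment of the config.yaml template; placeholder names match the template above.
template = Template("""\
cm_url: ${cm_url}
cm_location: ${cm_location}
cm_region: ${cm_region}
cm_region_cn: ${cm_region_cn}
""")

# Illustrative values only; replace them with those of your deployment.
values = {
    "cm_url": "http://VIP:8088",
    "cm_location": "1",
    "cm_region": "cn-hangzhou",
    "cm_region_cn": "cn-hangzhou",
}

# substitute() raises KeyError if any placeholder is left unfilled,
# which helps catch a missing required parameter early.
print(template.substitute(values))
```

Because `substitute()` fails loudly on unfilled placeholders, a half-edited template cannot silently produce a config.yaml that still contains `${...}` markers.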

Configuration file sample

Replace related parameters with the actual values in the target deployment environment.

  • Here is a sample config.yaml file for you to deploy OMS in the primary region Hangzhou:

    oms_cm_meta_host: xxx.xxx.xxx.xxx
    oms_cm_meta_password: **********
    oms_cm_meta_port: 2883
    oms_cm_meta_user: oms_cm_meta_user
    oms_rm_meta_host: xxx.xxx.xxx.xxx
    oms_rm_meta_password: **********
    oms_rm_meta_port: 2883
    oms_rm_meta_user: oms_rm_meta_user
    drc_rm_db: oms_rm
    drc_cm_db: oms_cm
    drc_cm_heartbeat_db: oms_cm_heartbeat
    cm_url: http://VIP:8088
    cm_location: 1
    cm_region: cn-hangzhou
    cm_region_cn: cn-hangzhou
    cm_nodes:
     - xxx.xxx.xxx.xx1
     - xxx.xxx.xxx.xx2
    console_nodes:
     - xxx.xxx.xxx.xx3
     - xxx.xxx.xxx.xx4
    "primary_region_ip": ""
    tsdb_service: 'INFLUXDB'
    tsdb_enabled: true
    tsdb_url: 'xxx.xxx.xxx.xxx:8086'
    tsdb_username: username
    tsdb_password: *************
    
  • Here is a sample config.yaml file for you to deploy OMS in the Heilongjiang region:

    oms_cm_meta_host: xxx.xxx.xxx.xxx
    oms_cm_meta_password: **********
    oms_cm_meta_port: 2883
    oms_cm_meta_user: oms_cm_meta_user
    oms_rm_meta_host: xxx.xxx.xxx.xxx
    oms_rm_meta_password: **********
    oms_rm_meta_port: 2883
    oms_rm_meta_user: oms_rm_meta_user
    drc_rm_db: oms_rm
    drc_cm_db: oms_cm
    drc_cm_heartbeat_db: oms_cm_heartbeat_1
    cm_url: http://VIP:8088
    cm_location: 2
    cm_region: cn-heilongjiang
    cm_region_cn: cn-heilongjiang
    cm_nodes:
     - xxx.xxx.xxx.xx1
     - xxx.xxx.xxx.xx2
    tsdb_service: 'INFLUXDB'
    tsdb_enabled: true
    tsdb_url: 'xxx.xxx.xxx.xxx:8086'
    tsdb_username: username
    tsdb_password: *************
    
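Before running the deployment script, it can be worth checking that the file honors the `key: value` rule from the Notice above (a space after the colon) and that the required parameters are present. The following is a minimal, dependency-free sketch, not an OMS tool; the required-key list is taken from the parameter table above:

```python
# Required parameters, per the parameter table above.
REQUIRED_KEYS = {
    "oms_cm_meta_host", "oms_cm_meta_password", "oms_cm_meta_port", "oms_cm_meta_user",
    "oms_rm_meta_host", "oms_rm_meta_password", "oms_rm_meta_port", "oms_rm_meta_user",
    "drc_rm_db", "drc_cm_db", "drc_cm_heartbeat_db",
    "cm_url", "cm_location", "cm_region", "cm_region_cn", "cm_nodes",
}

def check_config(text: str) -> list[str]:
    """Return a list of problems found in config.yaml-style text."""
    problems = []
    seen = set()
    for lineno, line in enumerate(text.splitlines(), 1):
        stripped = line.strip()
        # Skip blank lines, comments, and list items such as "- ${host_ip1}".
        if not stripped or stripped.startswith("#") or stripped.startswith("-"):
            continue
        if ":" not in stripped:
            problems.append(f"line {lineno}: no 'key: value' separator")
            continue
        key, _, value = stripped.partition(":")
        # The Notice above requires a space after the colon when a value follows.
        if value and not value.startswith(" "):
            problems.append(f"line {lineno}: missing space after ':' for {key}")
        seen.add(key.strip().strip('"'))
    for key in sorted(REQUIRED_KEYS - seen):
        problems.append(f"required parameter missing: {key}")
    return problems
```

For example, `check_config(open("config.yaml").read())` returns an empty list when every required key is present and each `key: value` pair has a space after the colon.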
