This topic describes how to deploy OceanBase Migration Service (OMS) on multiple nodes in multiple regions by using the deployment tool.
Background information
As more users apply OMS in data migration, OMS must adapt to increasingly diverse scenarios. In addition to single-region data migration and data synchronization, OMS supports data synchronization across regions, data migration between IDCs in different regions, and active-active data synchronization.

You can deploy OMS on one or more nodes in each region. Deploying OMS on multiple nodes in a region provides high availability, and OMS can start components on appropriate nodes based on the tasks.
For example, if you want to synchronize data from the Hangzhou region to the Heilongjiang region, OMS starts the Store component on a node in the Hangzhou region to collect incremental logs and starts the Incr-Sync component on a node in the Heilongjiang region to synchronize incremental data.
Observe the following notes on multi-node deployment:
You can deploy OMS on a single node first and then scale out to multiple nodes. For more information, see Scale out OMS.
To deploy OMS on multiple nodes across multiple regions, you must apply for a virtual IP address (VIP) for each region and use it as the mount point for the OMS console. In addition, you must configure the mapping rules for Ports 8088 and 8089 in the VIP-based network strategy.
You can use the VIP to access the OMS console even if an OMS node fails.
Prerequisites
The installation environment meets the system and network requirements. For more information, see System and network requirements.
You have created a MetaDB cluster for OMS.
The server on which OMS is to be deployed can connect to all other servers.
All servers involved in the multi-node deployment can connect to each other, and you can obtain root permissions on each node by using its username and password.
You have obtained the installation package of OMS, which is generally a tar.gz file whose name starts with oms.
You have downloaded the OMS installation package and loaded it to the local Docker image repository on each server node, as shown in the example after this list:
docker load -i <OMS installation package>
You have prepared a directory for mounting the OMS container. In the mount directory, OMS will create the /home/admin/logs, /home/ds/store, and /home/ds/run directories for storing the component information and logs generated during the running of OMS.
(Optional) You have prepared a time-series database for storing performance monitoring data and DDL/DML statistics of OMS.
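For reference, here is a minimal sketch of loading the image and verifying that it is available locally. The package file name oms-xxx.tar.gz is a placeholder; use the actual name of your installation package.
# Load the OMS image into the local Docker image repository (placeholder package name).
sudo docker load -i oms-xxx.tar.gz
# Verify that the image is listed; note its REPOSITORY:TAG or IMAGE ID for later use as <OMS_IMAGE>.
sudo docker images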
Terms
You need to replace the variable names in some commands and prompts. A variable name is enclosed in angle brackets (<>).
Mount directory of the OMS container: See the description of the mount directory in the "Prerequisites" section of this topic.
IP address of the server: the IP address of the host that executes the script. In a single-node deployment scenario, by default, it is the IP address in the cluster manager (CM) configuration information.
OMS_IMAGE: the unique identifier of the loaded image. After you load the OMS installation package by using Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] of the loaded image. The obtained value is the unique identifier of the loaded image. Here is an example:
$ sudo docker images
REPOSITORY                                        TAG             IMAGE ID
work.oceanbase-dev.com/obartifact-store/oms       feature_3.4.0   2a6a77257d35
In this example, <OMS_IMAGE> can be work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0 or 2a6a77257d35. Replace the value of <OMS_IMAGE> in related commands with the preceding value.
Directory of the config.yaml file: If you want to deploy OMS based on an existing config.yaml configuration file, this directory is the one where the configuration file is located.
Deployment procedure without a configuration file
Log in to the server where OMS is to be deployed.
(Optional) Deploy a time-series database.
If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.
Run the following command to obtain the deployment script docker_remote_deploy.sh from the loaded image:
sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
Here is an example:
sudo docker run -d --net host --name oms-config-tool work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0 bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
Use the deployment script to start the deployment tool. A sample invocation is shown after this step.
sh docker_remote_deploy.sh -o <Mount directory of the OMS container> -i <IP address of the host> -d <OMS_IMAGE>
Complete the deployment as prompted. After you set each parameter, press Enter to move on to the next parameter.
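For illustration, here is a hypothetical invocation of the deployment script. It assumes the mount directory /data/oms and uses the image identifier from the example in the "Terms" section; replace all values with those of your environment.
sh docker_remote_deploy.sh -o /data/oms -i xxx.xxx.xxx.xx1 -d work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0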
Select a deployment mode.
Select Multiple Regions.
Select a task.
Select No Configuration File. Deploy OMS Starting from Configuration File Generation.
Enter the configuration information of the MetaDB as follows:
Enter the URL, port, username, and password of the MetaDB.
Set a prefix for names of databases in the MetaDB.
For example, when the prefix is set to oms, the databases in the MetaDB are named oms_rm, oms_cm, and oms_cm_hb.
Confirm your settings.
If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.
If the system displays "The specified database names already exist in the MetaDB. Are you sure that you want to continue?", the database names that you specified already exist in the MetaDB. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to modify the settings.
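If the MetaDB cannot be reached, the deployment fails at this step. As an optional check, you can verify connectivity from the deployment server beforehand. The following sketch assumes that a MySQL client is installed and uses placeholder values similar to those in the sample configuration file later in this topic.
# Verify that the MetaDB (a MySQL-compatible OceanBase tenant) is reachable; host, port, and user are placeholders.
mysql -h xxx.xxx.xxx.xxx -P 2883 -u oms_meta_user -p -e "SELECT 1;"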
Perform the following operations to configure the OMS cluster settings:
Enter the region ID, for example, cn-hangzhou.
Enter the region ID for the parameter cm_region_cn. It is the same as cm_region, for example, cn-hangzhou.
Specify the URL of the CM service, which is the VIP or domain name to which all CM servers in the region are mounted. This value corresponds to the cm_url parameter.
Enter the VIP or domain name as the URL of the CM service. You can separately specify the IP address and port number, or use a colon (:) to join them in the <IP address>:<port number> format.
Note
The http:// prefix in the URL is optional.
Enter the IP addresses of all servers in the region. Separate them with commas (,).
Set an ID for the current region. Value range: [0,127].
The ID uniquely identifies a region and corresponds to the cm_location parameter.
Specify whether to preferentially access the current region.
In multi-region deployment mode, this parameter can be set to true for only one region. If yes, enter y and press Enter. If no, enter n and press Enter.
Confirm whether the displayed OMS cluster settings are correct.
If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.
Determine whether to monitor historical data of OMS.
If you have deployed a time-series database in Step 2, enter y and press Enter to go to the next step of configuring the time-series database and enabling monitoring of OMS historical data.
If you chose not to deploy a time-series database in Step 2, enter n and press Enter to go to the step of determining whether to enable the audit log feature and configure Simple Log Service (SLS) parameters. In this case, OMS does not monitor historical data after deployment.
Configure the time-series database.
Perform the following operations:
Confirm whether you have deployed a time-series database.
Enter the value based on the actual situation. If yes, enter y and press Enter. If no, enter n and press Enter to go to the step of determining whether to enable the audit log feature and set SLS parameters.
Set the type of the time-series database to INFLUXDB.
Notice
At present, only INFLUXDB is supported.
Enter the URL, username, and password of the time-series database. For more information, see Deploy a time-series database.
Confirm whether the displayed settings are correct.
If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.
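Optionally, you can confirm that the time-series database is reachable from the deployment server. The following is a minimal sketch that assumes InfluxDB, which exposes an HTTP /ping endpoint; the address is a placeholder taken from the sample configuration file later in this topic.
# An HTTP 204 response indicates that InfluxDB is reachable at the configured address (placeholder).
curl -i http://xxx.xxx.xxx.xxx:8086/ping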
Determine whether to enable the audit log feature and write the audit logs to SLS.
To enable the audit log feature, enter y and press Enter to go to the next step to specify the SLS parameters.
Otherwise, enter n and press Enter to start the deployment. In this case, OMS does not audit logs after deployment.
Specify the SLS parameters.
Set the SLS parameters as prompted.
Confirm whether the displayed settings are correct.
If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.
If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.
If the configuration file fails the check, modify the settings as prompted.
Start the deployment on each node one after another.
Specify the directory to which the OMS container is mounted on the node.
Specify a directory with a large capacity.
For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.
Confirm whether the OMS image is named <OMS_IMAGE>.
If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.
Determine whether to mount an SSL certificate to the OMS container.
If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.
Determine whether to deploy OMS in a new region.
After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"
If yes, enter y and press Enter to proceed. If no, enter n and press Enter to end the deployment process.
Perform the following operations to configure the OMS cluster settings:
Enter the region ID, for example, cn-hangzhou.
Enter the region ID for the parameter cm_region_cn. It is the same as cm_region, for example, cn-hangzhou.
Set an ID for the current region. Value range: [0,127].
The ID uniquely identifies a region and corresponds to the cm_location parameter.
A message is displayed, showing the names and IDs of existing regions, to help you avoid using an existing name or ID for a new region.
Repeat the deployment steps on each node in the region.
If the deployment fails, you can log in to the OMS container and view the logs in the .log files prefixed with docker_init in the /home/admin/logs directory. If the OMS container fails to be started, you cannot obtain the logs.
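The following is a minimal troubleshooting sketch. The container name oms is an assumption; run docker ps to find the actual name of the OMS container in your environment.
# List running containers to find the OMS container name (assumed to be "oms" below).
sudo docker ps
# Log in to the OMS container and inspect the deployment logs.
sudo docker exec -it oms bash
ls /home/admin/logs/docker_init*.log
tail -n 100 /home/admin/logs/docker_init*.log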
Deployment procedure with a configuration file
Log in to the server where OMS is to be deployed.
(Optional) Deploy a time-series database.
If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.
Run the following command to obtain the deployment script docker_remote_deploy.sh from the loaded image:
sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
Use the deployment script to start the deployment tool. A sample invocation is shown after this step.
sh docker_remote_deploy.sh -o <Mount directory of the OMS container> -c <Directory of the existing config.yaml file> -i <IP address of the host> -d <OMS_IMAGE>
Note
For more information about settings of the config.yaml file, see the "Template and example of a configuration file" section.
Complete the deployment as prompted. After you set each parameter, press Enter to move on to the next parameter.
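As in the procedure without a configuration file, here is a hypothetical invocation. It assumes the mount directory /data/oms and the configuration file directory /data/oms-config; replace all values with those of your environment.
sh docker_remote_deploy.sh -o /data/oms -c /data/oms-config -i xxx.xxx.xxx.xx1 -d work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0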
Select a deployment mode.
Select Multiple Regions.
Select a task.
Select Use Configuration File Uploaded with Script Option [-c].
If the system displays "The specified database names already exist in the MetaDB. Are you sure that you want to continue?", the database names specified in the configuration file already exist in the MetaDB. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to modify the settings.
If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.
If the configuration file fails the check, modify the settings as prompted.
Start the deployment on each node one after another.
Specify the directory to which the OMS container is mounted on the node.
Specify a directory with a large capacity.
For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.
Confirm whether the OMS image is named <OMS_IMAGE>.
If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.
Determine whether to mount an SSL certificate to the OMS container.
If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.
Determine whether to deploy OMS in a new region.
After the deployment is completed, the system displays "OMS has been deployed in Regions [<Region ID 1>,<Region ID 2>…]. Do you want to deploy OMS in a new region?"
If yes, enter y and press Enter to proceed. If no, enter n and press Enter to end the deployment process.
Perform the following operations to configure the OMS cluster settings:
Enter the region ID, for example, cn-hangzhou.
Enter the region ID for the parameter cm_region_cn. It is the same as cm_region, for example, cn-hangzhou.
Set an ID for the current region. Value range: [0,127].
The ID uniquely identifies a region and corresponds to the cm_location parameter.
A message is displayed, showing the names and IDs of existing regions, to help you avoid using an existing name or ID for a new region.
Repeat the deployment steps on each node in the region.
If the deployment fails, you can log in to the OMS container and view the logs in the .log files prefixed with docker_init in the /home/admin/logs directory. If the OMS container fails to be started, you cannot obtain the logs.
To modify the configuration after deployment, perform the following steps:
Notice
If you deploy OMS on multiple nodes in multiple regions, you must manually modify the configuration of each node.
Log in to the OMS container.
Modify the config.yaml file in the /home/admin/conf/ directory based on business needs.
Run the python -m omsflow.scripts.units.oms_init_manager --init-config-file command.
Run the supervisorctl restart oms_console oms_drc_supervisor command.
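Put together, the update procedure looks roughly like the following sketch. The container name oms and the use of vi are assumptions; adjust them to your environment.
# Enter the OMS container (container name is an assumption).
sudo docker exec -it oms bash
# Edit the configuration file based on business needs.
vi /home/admin/conf/config.yaml
# Re-initialize the configuration and restart the console and supervisor processes.
python -m omsflow.scripts.units.oms_init_manager --init-config-file
supervisorctl restart oms_console oms_drc_supervisor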
Template and example of a configuration file
Configuration file template
The configuration file template in this topic is used for the regular password-based login method. If you log in to the OMS console by using single sign-on (SSO), you must integrate the OpenID Connect (OIDC) protocol and add parameters in the config.yaml file template. For more information, see Integrate the OIDC protocol to OMS to implement SSO.
Notice
When multiple regions exist, you must set the cm_is_default parameter to true for only one region, and set it to false for all other regions. In addition, you must run the commands in each region in sequence.
To deploy multiple nodes in the Hangzhou region, specify the IP addresses of all nodes for the cm_nodes parameter.
You must replace the sample values of the required parameters based on your actual deployment environment. Both the required and optional parameters are described in the list that follows the template. You can specify the optional parameters as needed.
In the config.yaml file, you must specify the parameters in the key: value format, with a space after the colon (:).
In the following examples of the config.yaml file for the multi-node, multi-region deployment mode, OMS is deployed on two nodes in each of the Hangzhou and Heilongjiang regions.
Here is a template of the config.yaml file for you to deploy OMS in the Hangzhou region:

# Information about the OMS MetaDB
oms_meta_host: ${oms_meta_host}
oms_meta_port: ${oms_meta_port}
oms_meta_user: ${oms_meta_user}
oms_meta_password: ${oms_meta_password}
# You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
# Configure the OMS cluster in the Hangzhou region.
# To deploy OMS on multiple nodes in multiple regions, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
cm_url: ${cm_url}
cm_location: ${cm_location}
cm_region: ${cm_region}
cm_region_cn: ${cm_region_cn}
cm_is_default: true
cm_nodes:
  - ${host_ip1}
  - ${host_ip2}
# Configurations of the time-series database
# The default value of `tsdb_enabled`, which specifies whether to configure a time-series database, is `false`. To enable metric reporting, set the parameter to `true`.
# tsdb_enabled: false
# If the `tsdb_enabled` parameter is set to `true`, delete comments for the following parameters and specify the values based on your actual configurations.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}

The following list describes the parameters:
oms_meta_host (required): The IP address of the MetaDB, which can only be the IP address of a MySQL-compatible tenant of OceanBase Database V2.0 or later.
oms_meta_port (required): The port number of the MetaDB.
oms_meta_user (required): The username of the MetaDB.
oms_meta_password (required): The user password of the MetaDB.
drc_rm_db (required): The name of the database for the OMS console.
drc_cm_db (required): The name of the database for the CM service.
drc_cm_heartbeat_db (required): The name of the heartbeat database for the CM service.
cm_url (required): The URL of the OMS CM service, for example, http://VIP:8088.
Note
To deploy OMS on multiple nodes in multiple regions, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the current region are mounted. We recommend that you do not set it to http://127.0.0.1:8088.
The access URL of the OMS console is in the format of <IP address of the host where OMS is deployed>:8089, for example, http://xxx.xxx.xxx.xxx:8089 or https://xxx.xxx.xxx.xxx:8089. Port 8088 is used for program calls, and port 8089 is used for web page access. You must specify port 8088 here.
cm_location (required): The code of the region. Value range: [0,127]. You can select one number for each region.
Notice
If you upgrade to OMS V3.2.1 from an earlier version, you must set the cm_location parameter to 0.
cm_region (required): The name of the region, for example, cn-hangzhou.
Notice
If you use OMS with the Alibaba Cloud Multi-Site High Availability (MSHA) service in an active-active disaster recovery scenario, use the region configured for the Alibaba Cloud service.
cm_region_cn (required): The value is the same as the value of cm_region.
cm_nodes (required): The IP addresses of the servers on which the OMS CM service is deployed. In multi-node deployment mode, you must specify multiple IP addresses for the parameter.
cm_is_default (optional; default value: true): Specifies whether the CM service is enabled for OMS by default.
tsdb_service (optional; default value: INFLUXDB): The type of the time-series database. Valid values: INFLUXDB and CERESDB.
tsdb_enabled (optional; default value: false): Specifies whether metric reporting is enabled for monitoring. Valid values: true and false.
tsdb_url (optional): The IP address of the server where InfluxDB is deployed. You need to modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true. After the time-series database is deployed, it serves the entire OMS deployment. This means that although OMS is deployed in multiple regions, all regions map to the same time-series database.
tsdb_username (optional): The username used to connect to the time-series database. You need to modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true. After you deploy the time-series database, manually create a user and specify the username and password.
tsdb_password (optional): The password used to connect to the time-series database. You need to modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true.
Here is a template of the config.yaml file for you to deploy OMS in the Heilongjiang region. The operations are the same as those for deploying OMS in the Hangzhou region, except that you must modify the following parameters in the config.yaml file: drc_cm_heartbeat_db, cm_url, cm_location, cm_region, cm_region_cn, cm_is_default, and cm_nodes.
Notice
When multiple regions exist, you must set the cm_is_default parameter to true for only one region, and set it to false for all other regions.
To deploy multiple nodes in the Heilongjiang region, specify the IP addresses of all nodes for the cm_nodes parameter.
You must execute the docker_init.sh script on at least one node in each region.
# Information about the OMS MetaDB
oms_meta_host: ${meta_ip}
oms_meta_port: ${meta_port}
oms_meta_user: ${meta_user}
oms_meta_password: ${meta_password}
# You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
# Configure the OMS cluster in the Heilongjiang region.
# To deploy OMS on multiple nodes in multiple regions, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
cm_url: ${cm_url}
cm_location: ${cm_location}
cm_region: ${cm_region}
cm_region_cn: ${cm_region_cn}
cm_is_default: false
cm_nodes:
  - ${host_ip1}
  - ${host_ip2}
# Configurations of the time-series database
# tsdb_service: 'INFLUXDB'
# Default value: false. Set the value based on your actual configuration.
# tsdb_enabled: false
# The IP address of the server where InfluxDB is deployed.
# You need to modify the following parameters based on the actual environment if you set the tsdb_enabled parameter to true.
# tsdb_url: ${tsdb_url}
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}
Sample configuration file
Replace related parameters with the actual values in the target deployment environment.
Here is a sample config.yaml file for you to deploy OMS in the Hangzhou region:

oms_meta_host: xxx.xxx.xxx.xxx
oms_meta_port: 2883
oms_meta_user: oms_meta_user
oms_meta_password: *********
drc_rm_db: oms_rm
drc_cm_db: oms_cm
drc_cm_heartbeat_db: oms_cm_heartbeat
cm_url: http://VIP:8088
cm_location: 1
cm_region: cn-hangzhou
cm_region_cn: cn-hangzhou
cm_is_default: true
cm_nodes:
  - xxx.xxx.xxx.xx1
  - xxx.xxx.xxx.xx2
tsdb_service: 'INFLUXDB'
tsdb_enabled: true
tsdb_url: 'xxx.xxx.xxx.xxx:8086'
tsdb_username: username
tsdb_password: *************

Here is a sample config.yaml file for you to deploy OMS in the Heilongjiang region:

oms_meta_host: xxx.xxx.xxx.xxx
oms_meta_port: 2883
oms_meta_user: oms_meta_user
oms_meta_password: ***********
drc_rm_db: oms_rm
drc_cm_db: oms_cm
drc_cm_heartbeat_db: oms_cm_heartbeat_1
cm_url: http://VIP:8088
cm_location: 2
cm_region: cn-heilongjiang
cm_region_cn: cn-heilongjiang
cm_is_default: false
cm_nodes:
  - xxx.xxx.xxx.xx1
  - xxx.xxx.xxx.xx2
tsdb_service: 'INFLUXDB'
tsdb_enabled: true
tsdb_url: 'xxx.xxx.xxx.xxx:8086'
tsdb_username: username
tsdb_password: *************