This topic describes how to deploy OceanBase Migration Service (OMS) on multiple nodes in a single region by using deployment tools.
Background
You can deploy OMS on a single node first and scale out to multiple nodes.
If you choose to deploy OMS with the `config.yaml` configuration file, note that the settings are slightly different from those for the single-node deployment mode. For more information, see the "Template and example of a configuration file" section.
To deploy OMS on multiple nodes, you must apply for a virtual IP address (VIP) and use it as the mount point for the OMS console. In addition, you must configure the mapping rules of Ports 8088 and 8089 in the VIP network policy.
You can use the VIP to access the OMS console even if an OMS node fails.
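After the VIP mapping rules are in place, you can probe the two mapped ports from any node as a quick sanity check. The sketch below only tests HTTP reachability, not OMS health, and the VIP in the usage comments is a placeholder.

```shell
# Sketch: probe a VIP port with curl. Succeeds if an HTTP connection can
# be opened within 5 seconds; any HTTP status code counts as reachable.
check_vip_port() {
    curl --silent --connect-timeout 5 --output /dev/null "http://$1:$2"
}

# Usage (replace 10.0.0.100 with your actual VIP):
# check_vip_port 10.0.0.100 8088 && echo "CM port 8088 reachable"
# check_vip_port 10.0.0.100 8089 && echo "console port 8089 reachable"
```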
Prerequisites
The installation environment meets the system and network requirements. For more information, see System and network requirements.
A MetaDB cluster is prepared as the OMS MetaDB.
Make sure that the server on which OMS is to be deployed can connect to all the other servers.
Make sure that all servers involved in the multi-node deployment can connect to each other and that you can obtain root permissions on a node by using its username and password.
The OMS installation package is obtained. Generally, the package is a `tar.gz` file whose name starts with `oms`.
The downloaded OMS installation package has been loaded into the local image repository of the Docker container on each server node.
docker load -i <OMS installation package>
The directory to which the OMS container is mounted has been prepared. OMS creates three mapping directories in this directory: `/home/admin/logs`, `/home/ds/store`, and `/home/ds/run`. These directories store the runtime components and logs of OMS.
(Optional) A time-series database is installed to store OMS performance monitoring data and DDL and DML statistics.
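The mount-directory prerequisite above can be prepared with a couple of commands. A minimal sketch; the path is a demo placeholder, and in production you would choose a large-capacity location such as `/data/oms`:

```shell
# Sketch: prepare the host directory that the OMS container mounts.
# OMS_MOUNT is a demo placeholder; in production choose a path on a
# disk with ample free space.
OMS_MOUNT=${OMS_MOUNT:-/tmp/oms-mount-demo}

mkdir -p "$OMS_MOUNT"

# Show free space on the chosen filesystem to confirm capacity.
df -h "$OMS_MOUNT"

# After deployment, OMS maps /home/admin/logs, /home/ds/store, and
# /home/ds/run in the container into this directory; they hold the
# runtime components and logs of OMS.
```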
Terms
You need to replace variable names in some commands and prompts. A variable name is enclosed in angle brackets (<>).
Directory to which the OMS container is mounted: See the description of the mount directory in Prerequisites.
IP address of the server: the IP address of the host that executes the script. In a single-node deployment scenario, by default, it refers to the IP address in the cluster management (CM) configuration information.
OMS_IMAGE: the unique identifier of the loaded image. After you load the OMS installation package by using Docker, run the `docker images` command to obtain the [IMAGE ID] or [REPOSITORY:TAG] of the loaded image. The obtained value is the unique identifier of the loaded image. Example:
$ sudo docker images
REPOSITORY                                    TAG             IMAGE ID
work.oceanbase-dev.com/obartifact-store/oms   feature_3.4.0   2a6a77257d35
In this example, `<OMS_IMAGE>` can be `work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0` or `2a6a77257d35`. Replace the value of `<OMS_IMAGE>` in related commands with the obtained value.
Directory of the `config.yaml` file: If you want to deploy OMS based on the current `config.yaml` configuration file, this directory is the one where the current configuration file is located.
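The [REPOSITORY:TAG] form of `<OMS_IMAGE>` is simply the first two `docker images` columns joined by a colon. The sketch below parses the sample line from the example above instead of calling Docker, so it stays self-contained; in practice you would pipe `sudo docker images` through the same `awk` filter.

```shell
# Sketch: join the REPOSITORY and TAG columns of `docker images` output
# into the REPOSITORY:TAG identifier. A captured sample line from the
# example above stands in for the live command.
sample_line='work.oceanbase-dev.com/obartifact-store/oms feature_3.4.0 2a6a77257d35'

OMS_IMAGE=$(printf '%s\n' "$sample_line" | awk '{print $1 ":" $2}')
echo "$OMS_IMAGE"
# Prints: work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0
```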
Multi-node deployment architecture
The following figure shows the multi-node deployment architecture. Store is the incremental pulling component, Full-Import is the full import component, and Incr-Sync is the incremental synchronization component. When OMS A fails, the Store, Full-Import, and Incr-Sync processes running on the node are protected by the high availability (HA) service and switched over to OMS B or OMS C.
Notice
By default, the HA feature is disabled. To ensure HA for the Store, Full-Import, and Incr-Sync components, manually enable this feature in the OMS console. For more information, see Modify high availability configurations.

Deploy OMS without a configuration file
Log on to the server where OMS is to be deployed.
Optional. Deploy a time-series database.
If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.
Run the following command to obtain the deployment script from the loaded image:
sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
Example:
sudo docker run -d --net host --name oms-config-tool work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0 bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
Use the deployment script to start the deployment tool.
sh docker_remote_deploy.sh -o <deploy_tool_workdir> -i <IP address of the server> -d <OMS_IMAGE>
Follow the prompts to complete the deployment. After you set each parameter, press Enter to move on to the next parameter.
Select the deployment mode.
Select Multiple Nodes in Single Region.
Select the task.
Select No Configuration File. Deploy OMS Starting from Configuration File Generation.
Specify the following MetaDB information:
Enter the URL, port, username, and password of the MetaDB.
Set the prefix for the names of databases in the MetaDB.
For example, when the prefix is set to `oms`, the databases in the MetaDB are named `oms_rm`, `oms_cm`, and `oms_cm_hb`.
Confirm your settings.
If the settings are correct, enter `y` and press Enter to proceed. Otherwise, enter `n` and press Enter to modify the settings.
If the system displays "The specified database names already exist in the MetaDB. Are you sure that you want to continue?", the database names that you specified already exist in the MetaDB. This may be caused by a repeated deployment or upgrade of OMS. You can enter `y` and press Enter to proceed, or enter `n` and press Enter to re-specify the settings.
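The naming rule for the prefix can be expressed as a one-liner; a sketch using the `oms` prefix from the example above:

```shell
# Sketch: the three MetaDB database names derived from the prefix you
# enter at the prompt.
prefix=oms
echo "${prefix}_rm ${prefix}_cm ${prefix}_cm_hb"
# Prints: oms_rm oms_cm oms_cm_hb
```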
Perform the following operations to configure the CM service of OMS:
Specify the URL of the CM service, which is the VIP or domain name to which all CM servers in the region are mounted. The original parameter is `cm_url`.
Enter the VIP or domain name as the URL of the CM service. You can separately specify the IP address and port number in the URL, or use a colon (:) to join the IP address and port number in the `<IP address>:<port>` format.
Note
The `http://` prefix in the URL is optional.
Enter the IP addresses of all servers in the region. Separate them with commas (,).
Confirm the displayed CM settings.
If the settings are correct, enter `y` and press Enter to proceed. Otherwise, enter `n` and press Enter to modify the settings.
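The accepted URL forms (with or without the optional `http://` prefix) can be normalized with plain parameter expansion. A sketch, assuming the URL carries no path after the port:

```shell
# Sketch: split a CM service URL into host and port. The http:// prefix
# is optional, and host and port are joined by a colon, as described in
# the prompt above.
normalize_cm_url() {
    host_port=${1#http://}      # strip the optional http:// prefix
    echo "${host_port%:*} ${host_port##*:}"
}

normalize_cm_url "http://10.0.0.100:8088"
# Prints: 10.0.0.100 8088
normalize_cm_url "10.0.0.100:8088"
# Prints: 10.0.0.100 8088
```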
Confirm whether to enable OMS historical data monitoring.
If you have deployed a time-series database, enter `y` and press Enter to go to the next step to configure the time-series database and enable the monitoring of OMS historical data.
If you did not deploy a time-series database, enter `n` and press Enter to go to the step of determining whether to enable the audit log feature and setting Simple Log Service (SLS) parameters. In this case, OMS does not monitor historical data after deployment.
Configure the time-series database.
Perform the following operations:
Confirm whether you have deployed a time-series database.
Enter the value based on the actual situation. If yes, enter `y` and press Enter. If not, enter `n` and press Enter to go to the step of determining whether to enable the audit log feature and setting SLS parameters.
Set the type of the time-series database to `INFLUXDB`.
Notice
At present, only INFLUXDB is supported.
Enter the URL, username, and password of the time-series database.
Confirm whether the displayed settings are correct.
If the settings are correct, enter `y` and press Enter to proceed. Otherwise, enter `n` and press Enter to modify the settings.
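Before entering the URL, you can verify that the time-series database is reachable. InfluxDB 1.x answers HTTP 204 on its `/ping` endpoint; the host in the usage comment is a placeholder. A minimal sketch:

```shell
# Sketch: check InfluxDB reachability via its /ping endpoint, which
# returns HTTP 204 when the service is up (InfluxDB 1.x behavior).
influxdb_reachable() {
    code=$(curl --silent --output /dev/null --write-out '%{http_code}' \
        --connect-timeout 5 "http://$1/ping")
    [ "$code" = "204" ]
}

# Usage (replace with your actual InfluxDB host:port):
# influxdb_reachable 10.0.0.101:8086 && echo "InfluxDB is up"
```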
Determine whether to enable the audit log feature and write the audit logs to Simple Log Service (SLS).
To enable the audit log feature, enter `y` and press Enter to go to the next step to specify the SLS parameters.
Otherwise, enter `n` and press Enter to start the deployment. In this case, OMS does not audit logs after deployment.
Specify the following SLS parameters:
Set the SLS parameters as prompted.
Confirm whether the displayed settings are correct.
If the settings are correct, enter `y` and press Enter to proceed. Otherwise, enter `n` and press Enter to modify the settings.
Start the deployment on each node one after another.
Specify the directory to which the OMS container is mounted in the host.
Use a directory with a large capacity.
For a remote node, the username and password for logging on to the remote node are required. The corresponding user account must have the sudo privileges on the remote node.
Confirm whether the OMS image file can be named `<OMS_IMAGE>`.
If the settings are correct, enter `y` and press Enter to proceed. Otherwise, enter `n` and press Enter to modify the settings.
Confirm whether to install a Secure Sockets Layer (SSL) certificate for the OMS container.
If yes, enter `y`, press Enter, and specify the `https_key` and `https_crt` directories as prompted. If not, enter `n` and press Enter.
Start the deployment on the node.
Deploy OMS with a configuration file
Log on to the server where OMS is to be deployed.
Optional. Deploy a time-series database.
If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.
Run the following command to obtain the deployment script from the loaded image:
sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
Use the deployment script to start the deployment tool.
sh docker_remote_deploy.sh -o <deploy_tool_workdir> -c <directory of the config.yaml file> -i <IP address of the server> -d <OMS_IMAGE>
Note
For more information about the settings of the `config.yaml` file, see the "Template and example of a configuration file" section.
Follow the prompts to complete the deployment. After you set each parameter, press Enter to move on to the next parameter.
Select the deployment mode.
Select Multiple Nodes in Single Region.
Select the task.
Select Use Configuration File Uploaded with Script Option [-c].
If the system displays "The specified database names already exist in the MetaDB. Are you sure that you want to continue?", the database names that you specified already exist in the MetaDB. This may be caused by a repeated deployment or upgrade of OMS. You can enter `y` and press Enter to proceed, or enter `n` and press Enter to re-specify the settings.
If the configuration file passes the check, all the settings are displayed. If the settings are correct, enter `y` and press Enter to proceed. Otherwise, enter `n` and press Enter to modify the settings.
If the configuration file fails the check, modify the configuration information as prompted.
Start the deployment on each node one after another.
Specify the directory to which the OMS container is mounted in the host.
Use a directory with a large capacity.
For a remote node, the username and password for logging on to the remote node are required. The corresponding user account must have the sudo privileges on the remote node.
Confirm whether the OMS image file can be named `<OMS_IMAGE>`.
If the settings are correct, enter `y` and press Enter to proceed. Otherwise, enter `n` and press Enter to modify the settings.
Confirm whether to install an SSL certificate for the OMS container.
If yes, enter `y`, press Enter, and specify the `https_key` and `https_crt` directories as prompted. If not, enter `n` and press Enter.
Start the deployment.
To modify the configuration after deployment, log on to the OMS container and perform the following steps:
Notice
If you deploy OMS on multiple nodes in a single region, you must manually modify the configuration of each node.
Modify the `config.yaml` file based on business needs.
Run the `python -m omsflow.scripts.units.oms_init_manager --init-config-file` command.
Run the `supervisorctl restart oms_console oms_drc_supervisor` command.
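The per-node steps above can be wrapped in a small helper that runs the two in-container commands from the host. This is a sketch, not part of the product: the container name `oms` and the use of `docker exec` are assumptions; adapt them to your deployment, and remember to repeat this on every node.

```shell
# Sketch: re-initialize the OMS configuration and restart the console
# services inside the OMS container after editing config.yaml.
# The container name "oms" is an assumption - substitute your own.
refresh_oms_config() {
    container=${1:-oms}
    sudo docker exec "$container" \
        python -m omsflow.scripts.units.oms_init_manager --init-config-file
    sudo docker exec "$container" \
        supervisorctl restart oms_console oms_drc_supervisor
}

# Usage: refresh_oms_config oms
```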
Template and example of a configuration file
Configuration file template
The configuration file template in this topic is used for the regular password-based logon method. If you log on to the OMS console by using single sign-on (SSO), you need to connect OMS to the OpenID Connect (OIDC) protocol and add parameters in the config.yaml file. For more information, see Integrate the OIDC protocol to OMS to implement SSO.
Notice
The same configuration file applies to all nodes in the multi-node deployment architecture. In the configuration file, you must specify the IP addresses of multiple nodes for the `cm_nodes` parameter and set the `cm_url` parameter to the VIP corresponding to Port 8088.
You must replace the sample values of the required parameters based on your actual deployment environment. Both the required and optional parameters are described in the following table. You can specify the optional parameters as needed.
In the `config.yaml` file, you must specify the parameters in the `key: value` format, with a space after the colon (:).
# Information about the OMS MetaDB
oms_meta_host: ${oms_meta_host}
oms_meta_port: ${oms_meta_port}
oms_meta_user: ${oms_meta_user}
oms_meta_password: ${oms_meta_password}
# You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
# Configurations of the OMS cluster
# To deploy OMS on multiple nodes in a single region, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
cm_url: ${cm_url}
cm_location: ${cm_location}
# The cm_region parameter is not required for single-region deployment.
# cm_region: ${cm_region}
# The cm_region_cn parameter is not required for single-region deployment.
# cm_region_cn: ${cm_region_cn}
cm_is_default: true
cm_nodes:
- ${host_ip1}
- ${host_ip2}
# Configurations of the time-series database
# Default value: false. To enable metric reporting, set the parameter to `true` and delete the comments for the parameter.
# tsdb_enabled: false
# If the `tsdb_enabled` parameter is set to `true`, delete comments for the following parameters and specify the values based on your actual configurations.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}
| Parameter | Description | Required |
|---|---|---|
| oms_meta_host | The IP address of the MetaDB, which can be the IP address of a MySQL database or a MySQL tenant of OceanBase Database. Notice: This parameter is valid only in OceanBase Database V2.0 and later. | Yes |
| oms_meta_port | The port number of the MetaDB. | Yes |
| oms_meta_user | The username of the MetaDB. | Yes |
| oms_meta_password | The user password of the MetaDB. | Yes |
| drc_rm_db | The name of the database for the OMS console. | Yes |
| drc_cm_db | The name of the MetaDB for the CM service. | Yes |
| drc_cm_heartbeat_db | The name of the heartbeat database for the CM service. | Yes |
| cm_url | The URL of the OMS CM service. Example: http://VIP:8088. Note: To deploy OMS on multiple nodes in a single region, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted. We do not recommend http://127.0.0.1:8088, because this IP address does not support the multi-node multi-region deployment mode. The access URL of the OMS console is in the format of "IP address of the host on which OMS is deployed:8089", for example, http(s)://xxx.xxx.xxx.xxx:8089. Port 8088 is used for program calls, and Port 8089 is used for web page access. You must specify Port 8088. | Yes |
| cm_location | The code of the region. Value range: [0,127]. You can select one number for each region. Notice: If you upgrade to OMS V3.2.1 from an earlier version, you must set the cm_location parameter to 0. | Yes |
| cm_region | The name of the region. Example: cn-jiangsu. Notice: If you use OMS with the Alibaba Cloud Multi-Site High Availability (MSHA) service in an active-active disaster recovery scenario, use the region configured for the Alibaba Cloud service. | No |
| cm_region_cn | The value is the same as the value of cm_region. | No |
| cm_nodes | The IP addresses of the servers on which the OMS CM service is deployed. In multi-node deployment mode, you must specify multiple IP addresses for this parameter. | Yes |
| cm_is_default | Indicates whether the OMS CM service is enabled by default. | No. Default value: true. |
| tsdb_service | The type of the time-series database. Valid values: INFLUXDB and CERESDB. | No. Default value: CERESDB. |
| tsdb_enabled | Indicates whether metric reporting is enabled for monitoring. Valid values: true and false. | No. Default value: false. |
| tsdb_url | The IP address of the server where InfluxDB is deployed. You must modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true. | No |
| tsdb_username | The username used to connect to the time-series database. You must modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true. After you deploy the time-series database, manually create a user and specify the username and password. | No |
| tsdb_password | The password used to connect to the time-series database. You must modify this parameter based on the actual environment if you set the tsdb_enabled parameter to true. | No |
Example
Replace related parameters with the actual values in the target deployment environment.
oms_meta_host: xxx.xxx.xxx.xxx
oms_meta_port: 2883
oms_meta_user: oms_meta_user
oms_meta_password: *********
drc_rm_db: oms_rm
drc_cm_db: oms_cm
drc_cm_heartbeat_db: oms_cm_heartbeat
cm_url: http://VIP:8088
cm_location: 100
cm_region: cn-anhui
cm_region_cn: cn-anhui
cm_is_default: true
cm_nodes:
- xxx.xxx.xxx.xx1
- xxx.xxx.xxx.xx2
tsdb_service: 'INFLUXDB'
tsdb_enabled: true
tsdb_url: 'xxx.xxx.xxx.xxx:8086'
tsdb_username: username
tsdb_password: *************
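The `key: value` rule noted above (a space is required after each colon) lends itself to a quick automated check. A sketch, assuming a flat file like the example; comment lines and `-` list items are ignored:

```shell
# Sketch: flag uncommented config.yaml lines whose first colon is not
# followed by a space. Comment lines and "-" list items are skipped.
check_config_format() {
    # grep prints any offending lines with their line numbers.
    grep -nE '^[^#:-]+:[^ ]' "$1" && return 1
    return 0
}

# Demo with a small sample file (second line is malformed):
cat > /tmp/oms-config-demo.yaml <<'EOF'
cm_url: http://VIP:8088
cm_location:100
EOF
check_config_format /tmp/oms-config-demo.yaml || echo "bad line found"
# Prints: 2:cm_location:100
#         bad line found
```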