Deploy OMS on multiple nodes in a single region


This topic describes how to deploy OceanBase Migration Service (OMS) on multiple nodes in a single region by using the deployment tool.

Background information

  • You can deploy OMS on a single node first and then scale out to multiple nodes.

  • If you choose to deploy OMS with the config.yaml configuration file, note that the settings are slightly different from those for the single-node deployment mode. For more information, see the "Template and example of a configuration file" section of this topic.

  • To deploy OMS on multiple nodes, you must apply for a virtual IP address (VIP) and use it as the mount point for the OMS console. In addition, you must configure mapping rules for Ports 8088 and 8089 in the network policy for the VIP.

    You can use the VIP to access the OMS console even if an OMS node fails.
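
    After the mapping rules take effect, you can optionally verify that both ports are reachable through the VIP, for example with curl (<VIP> is a placeholder for the address you applied for):

    curl -I http://<VIP>:8088
    curl -I http://<VIP>:8089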

Prerequisites

  • The installation environment meets the system and network requirements. For more information, see System and network requirements.

  • The RM database, CM database, and heartbeat database have been prepared as the metadata databases of OMS. If you have not prepared them in advance, OMS will automatically create them.
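
    If you prepared the databases in advance, you can optionally verify connectivity from the deployment server by using the MySQL client (a sketch; replace the placeholders with the values you plan to enter for the RM and CM databases):

    mysql -h <MetaDB host> -P <MetaDB port> -u <MetaDB user> -p -e 'SELECT 1'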

  • The server on which OMS is to be deployed can connect to all other servers.

  • All servers involved in the multi-node deployment can connect to each other, and you can obtain root privileges on each node by using its username and password.

  • You have obtained the installation package of OMS, which is generally a tar.gz file whose name starts with oms.

  • You have downloaded the OMS installation package and loaded it into the local Docker image repository on each server node.

    docker load -i <OMS installation package>
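    # Optionally confirm that the image is now listed in the local repository:
    docker images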
    
  • You have prepared a directory for mounting the OMS container. In the mount directory, OMS will create the /home/admin/logs, /home/ds/store, and /home/ds/run directories for storing the component information and logs generated during the running of OMS.
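
    For example (the path /data/oms is a placeholder; choose any directory with sufficient capacity):

    sudo mkdir -p /data/oms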

  • (Optional) You have prepared a time-series database for storing performance monitoring data and DDL/DML statistics of OMS.

Terms

You need to replace variable names in some commands and prompts. A variable name is enclosed in angle brackets (<>).

  • OMS container mount directory: See the description of the mount directory in the "Prerequisites" section of this topic.

  • IP address of the server: the IP address of the host that executes the script.

  • OMS_IMAGE: the unique identifier of the loaded image. After you load the OMS installation package by using Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] of the loaded image. The obtained value is the unique identifier of the loaded image. Here is an example:

    sudo docker images
    REPOSITORY                                          TAG                 IMAGE ID        
    work.oceanbase-dev.com/obartifact-store/oms     feature_3.4.0       2a6a77257d35      
    

    In this example, <OMS_IMAGE> can be work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0 or 2a6a77257d35. Replace the value of <OMS_IMAGE> in related commands with the preceding value.

  • Directory of the config.yaml file: If you want to deploy OMS based on an existing config.yaml configuration file, this directory is the one where the configuration file is located.

Multi-node deployment architecture

The following figure shows the multi-node deployment architecture, in which Store is the log pulling component and Incr-Sync is the incremental synchronization component. When Node A of OMS fails, the Store and Incr-Sync components running on this node are guarded by the high availability (HA) service and are dynamically switched to Node B or Node C of OMS.

Notice

By default, the high availability feature is disabled. To ensure high availability for the Store and Incr-Sync components, manually enable this feature in the OMS console. For more information, see Modify HA configurations.

(Figure: Multi-node deployment architecture of OMS)

Deployment procedure without a configuration file

If no OMS configuration file exists, follow the procedure in this section to generate one during deployment. If a configuration file already exists, we recommend that you deploy with the existing file instead. For more information, see the "Deployment procedure with a configuration file" section of this topic.

Integrated deployment mode

  1. Log in to the server where OMS is to be deployed.

  2. (Optional) Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script docker_remote_deploy.sh from the loaded image:

    sudo docker run -d --net host --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    

    Here is an example:

    sudo docker run -d --net host --name oms-config-tool work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0 bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy.sh -o <Mount directory of the OMS container> -i <IP address of the server> -d <OMS_IMAGE>
    
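    Here is an example (the mount directory and IP address are placeholders):

    sh docker_remote_deploy.sh -o /data/oms -i xxx.xxx.xxx.1 -d work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0
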
  5. Complete the deployment as prompted. After you set each parameter, press Enter to move on to the next parameter.

    1. Select a deployment mode.

      Select Multiple Nodes in Single Region.

    2. Select a task.

      Select No configuration file. Deploy OMS for the first time. Start from generating the configuration file.

    3. Configure the RM database and CM database for storing the metadata generated during the running of OMS.

      1. Enter the IP address, port, username, and password of the RM database and CM database.

      2. Set a prefix for names of databases in the MetaDB.

        For example, when the prefix is set to oms, the databases in the MetaDB are named oms_rm, oms_cm, and oms_cm_hb.

      3. Confirm your settings.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      4. If the system displays Database name already exists in the MetaDB. Continue?, it indicates that the database names you specified already exist in the MetaDB. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to modify the settings.

    4. Perform the following operations to configure the CM service of OMS:

      1. Specify the URL of the CM service, which is the VIP or domain name to which all CM servers in the region are mounted. The corresponding parameter in the configuration file is cm-url.

        Enter the VIP or domain name as the URL of the CM service. You can separately specify the IP address and port number in the URL, or use a colon (:) to join the IP address and port number in the <IP address>:<port number> format.

        Note

        The http:// prefix in the URL is optional.
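
        For example, both xxx.xxx.xxx.100:8088 and http://xxx.xxx.xxx.100:8088 are valid, where the IP address is a placeholder for your VIP.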

      2. Enter the IP addresses of all servers in the region. Separate them with commas (,).

      3. Confirm whether the displayed CM settings are correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    5. Determine whether to monitor historical data of OMS.

      • If you have deployed a time-series database in Step 2, enter y and press Enter to go to the step of configuring the time-series database and enable monitoring for OMS historical data.

      • If you chose not to deploy a time-series database in Step 2, enter n and press Enter to go to the step of determining whether to enable the audit log feature and configure Simple Log Service (SLS) parameters. In this case, OMS does not monitor the historical data after deployment.

    6. Configure the time-series database.

      Perform the following operations:

      1. Confirm whether you have deployed a time-series database.

        Enter the value based on the actual situation. If yes, enter y and press Enter. If no, enter n and press Enter to go to the step of determining whether to enable the audit log feature and set SLS parameters.

      2. Set the type of the time-series database to INFLUXDB.

        Notice

        At present, only INFLUXDB is supported.

      3. Enter the URL, username, and password of the time-series database. For more information, see Deploy a time-series database.

      4. Confirm whether the displayed settings are correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    7. Determine whether to enable the audit log feature and write the audit logs to SLS.

      To enable the audit log feature, enter y and press Enter to go to the next step to specify the SLS parameters.

      Otherwise, enter n and press Enter to start the deployment. In this case, OMS does not audit the logs after deployment.

    8. Specify the SLS parameters.

      1. Set the SLS parameters as prompted.

      2. Confirm whether the displayed settings are correct.

      If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    9. If the configuration file passes the check, all the settings are displayed and you are prompted to confirm whether to modify them. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.

      If the configuration file fails the check, modify the settings as prompted.

    10. Start the deployment on each node one after another.

      1. Specify the directory to which the OMS container is mounted on the node.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the name of the OMS image is <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

    11. Start the deployment.

      If the deployment fails, you can log in to the OMS container and view the .log files prefixed with docker_init in the /home/admin/logs directory. If the OMS container itself fails to start, no such logs are available.
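
      For example, you can inspect the logs as follows (the container name is a placeholder; find the actual name with docker ps):

      sudo docker ps
      sudo docker exec -it <OMS container name> bash
      ls /home/admin/logs | grep docker_init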

Separate deployment mode

  1. Log in to the server where OMS is to be deployed.

  2. (Optional) Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script docker_remote_deploy_v2.sh from the loaded image:

    sudo docker run -d --net host --name oms-config-tool <management image or component image> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy_v2.sh . && sudo docker rm -f oms-config-tool
    

    Here is an example:

    sudo docker run -d --net host --name oms-config-tool 0719**** bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy_v2.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy_v2.sh -o <OMS container mount directory> -i <IP address of the server> -v <management image> -s <component image>
    

    Here is an example:

    sh docker_remote_deploy_v2.sh -o /home/l****.***/l****_oms_run_022102 -i xxx.xxx.xxx.1 -v 0719**** -s 188a****
    
  5. Complete the deployment as prompted. After you set each parameter, press Enter to move on to the next parameter.

    1. Select a deployment mode.

      Select Multiple Nodes in Single Region.

    2. Select a task.

      Select No configuration file. Deploy OMS for the first time. Start from generating the configuration file.

    3. Configure the RM database and CM database for storing the metadata generated during the running of OMS.

      1. Enter the IP address, port, username, and password of the RM database and CM database.

      2. Set a prefix for names of databases in the MetaDB.

        For example, when the prefix is set to oms, the databases in the MetaDB are named oms_rm, oms_cm, and oms_cm_hb.

      3. Confirm your settings.

        If the settings are correct, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      4. If the system displays Database name already exists in the MetaDB. Continue?, it indicates that the database names you specified already exist in the MetaDB. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to modify the settings.

    4. Perform the following operations to configure the CM service of OMS:

      1. Specify the URL of the CM service, which is the VIP or domain name to which all CM servers in the region are mounted. The corresponding parameter in the configuration file is cm-url.

        Enter the VIP or domain name as the URL of the CM service. You can separately specify the IP address and port number in the URL, or use a colon (:) to join the IP address and port number in the <IP address>:<port number> format.

        Note

        The http:// prefix in the URL is optional.

      2. Enter the IP addresses of all management nodes in the region. Separate them with commas (,).

        For example, enter the IP address of one management node: xxx.xxx.xxx.1.

      3. Enter the IP addresses of all component nodes in the region. Separate them with commas (,).

        For example, enter the IP addresses of two component nodes: xxx.xxx.xxx.1,xxx.xxx.xxx.2.

      4. Confirm whether the displayed CM settings are correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    5. Determine whether to monitor historical data of OMS.

      • If you have deployed a time-series database in Step 2, enter y and press Enter to go to the step of configuring the time-series database and enable monitoring for OMS historical data.

      • If you chose not to deploy a time-series database in Step 2, enter n and press Enter to go to the step of determining whether to enable the audit log feature and configure Simple Log Service (SLS) parameters. In this case, OMS does not monitor the historical data after deployment.

    6. Configure the time-series database.

      Perform the following operations:

      1. Confirm whether you have deployed a time-series database.

        Enter the value based on the actual situation. If yes, enter y and press Enter. If no, enter n and press Enter to go to the step of determining whether to enable the audit log feature and set SLS parameters.

      2. Set the type of the time-series database to INFLUXDB.

        Notice

        At present, only INFLUXDB is supported.

      3. Enter the URL, username, and password of the time-series database. For more information, see Deploy a time-series database.

      4. Confirm whether the displayed settings are correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    7. Determine whether to enable the audit log feature and write the audit logs to SLS.

      To enable the audit log feature, enter y and press Enter to go to the next step to specify the SLS parameters.

      Otherwise, enter n and press Enter to start the deployment. In this case, OMS does not audit the logs after deployment.

    8. Specify the SLS parameters.

      1. Set the SLS parameters as prompted.

      2. Confirm whether the displayed settings are correct.

      If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

    9. If the configuration file passes the check, all the settings are displayed and you are prompted to confirm whether to modify them. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.

      If the configuration file fails the check, modify the settings as prompted.

    10. Deploy the management nodes one by one as prompted.

      1. Enter the mount directory on the management node for deploying the OMS container.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the name of the OMS image is <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

      4. Confirm whether the path to which the config.yaml configuration file will be written is correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      5. Start to deploy the management nodes.

        To deploy multiple management nodes, complete the deployment on one server and then another until all management nodes are deployed.

    11. Deploy the component nodes one by one as prompted.

      1. Specify the directory to which the OMS container is mounted on the component node.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the name of the OMS image is <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

      4. Confirm whether the path to which the config.yaml configuration file will be written is correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      5. Start to deploy the component nodes.

        To deploy multiple component nodes, complete the deployment on one server and then another until all component nodes are deployed.

Deployment procedure with a configuration file

If an OMS configuration file exists, you can use the deployment tool to verify the OMS configuration file and directly use the file.

Note

For more information about settings of the config.yaml file, see the "Template and example of a configuration file" section.

To modify the configuration after deployment, perform the following steps:

  1. Log in to the OMS container.

  2. Modify the config.yaml file in the /home/admin/conf/ directory as needed.

  3. Initialize the metadata.

    sh /root/docker_init.sh
    
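    For example, the full flow might look like this (the container name is a placeholder; find it with docker ps):

    sudo docker exec -it <OMS container name> bash
    vi /home/admin/conf/config.yaml   # modify parameters as needed
    sh /root/docker_init.sh           # re-initialize the metadata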

Integrated deployment mode

  1. Log in to the server where OMS is to be deployed.

  2. (Optional) Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script docker_remote_deploy.sh from the loaded image:

    sudo docker run -d --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy.sh -o <Mount directory of the OMS container> -c <Directory of the existing config.yaml file> -i <IP address of the host> -d <OMS_IMAGE>
    
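    Here is an example (the directories and IP address are placeholders):

    sh docker_remote_deploy.sh -o /data/oms -c /data/oms_config -i xxx.xxx.xxx.1 -d work.oceanbase-dev.com/obartifact-store/oms:feature_3.4.0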

    Note

    For more information about settings of the config.yaml file, see the "Template and example of a configuration file" section.

  5. Complete the deployment as prompted. After you set each parameter, press Enter to move on to the next parameter.

    1. Select a deployment mode.

      Select Multiple Nodes in Single Region.

    2. Select a task.

      Select Reference configuration file has been passed in through the [-c] option of the script. Start to configure based on the file.

    3. If the system displays Database name already exists in the MetaDB. Continue?, it indicates that the database names you specified already exist in the RM database and CM database of the MetaDB in the original configuration file. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to modify the settings.

    4. If the configuration file passes the check, all the settings are displayed and you are prompted to confirm whether to modify them. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.

      If the configuration file fails the check, modify the settings as prompted.

    5. Start the deployment on each node one after another.

      1. Specify the directory to which the OMS container is mounted on the node.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the name of the OMS image is <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

    6. Start the deployment.

      If the deployment fails, you can log in to the OMS container and view the .log files prefixed with docker_init in the /home/admin/logs directory. If the OMS container itself fails to start, no such logs are available.

Separate deployment mode

  1. Log in to the server where OMS is to be deployed.

  2. (Optional) Deploy a time-series database.

    If you need to collect and display OMS monitoring data, deploy a time-series database. Otherwise, you can skip this step. For more information, see Deploy a time-series database.

  3. Run the following command to obtain the deployment script docker_remote_deploy_v2.sh from the loaded image:

    sudo docker run -d --name oms-config-tool <management image or component image> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy_v2.sh . && sudo docker rm -f oms-config-tool
    
  4. Use the deployment script to start the deployment tool.

    sh docker_remote_deploy_v2.sh -o <Mount directory of the OMS container> -c <Directory of the existing config.yaml file> -i <IP address of the host> -v <management image> -s <component image>
    

    Note

    For more information about settings of the config.yaml file, see the "Template and example of a configuration file" section.

  5. Complete the deployment as prompted. After you set each parameter, press Enter to move on to the next parameter.

    1. Select a deployment mode.

      Select Multiple Nodes in Single Region.

    2. Select a task.

      Select Reference configuration file has been passed in through the [-c] option of the script. Start to configure based on the file.

    3. If the system displays Database name already exists in the MetaDB. Continue?, it indicates that the database names you specified already exist in the RM database and CM database of the MetaDB in the original configuration file. This may be caused by repeated deployment or upgrade of OMS. You can enter y and press Enter to proceed, or enter n and press Enter to modify the settings.

    4. If the configuration file passes the check, all the settings are displayed and you are prompted to confirm whether to modify them. If the settings are correct, enter n and press Enter to proceed. Otherwise, enter y and press Enter to modify the settings.

      If the configuration file fails the check, modify the settings as prompted.

    5. Deploy the management nodes one by one as prompted.

      1. Enter the mount directory on the management node for deploying the OMS container.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the name of the OMS image is <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

      4. Confirm whether the path to which the config.yaml configuration file will be written is correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      5. Start to deploy the management nodes.

        To deploy multiple management nodes, complete the deployment on one server and then another until all management nodes are deployed.

    6. Deploy the component nodes one by one as prompted.

      1. Specify the directory to which the OMS container is mounted on the component node.

        Specify a directory with a large capacity.

        For a remote node, the username and password for logging in to the remote node are required. The corresponding user account must have the sudo privilege on the remote node.

      2. Confirm whether the name of the OMS image is <OMS_IMAGE>.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      3. Determine whether to mount an SSL certificate to the OMS container.

        If yes, enter y, press Enter, and specify the https_key and https_crt directories as prompted. Otherwise, enter n and press Enter.

      4. Confirm whether the path to which the config.yaml configuration file will be written is correct.

        If yes, enter y and press Enter to proceed. Otherwise, enter n and press Enter to modify the settings.

      5. Start to deploy the component nodes.

        To deploy multiple component nodes, complete the deployment on one server and then another until all component nodes are deployed.

Template and example of a configuration file

Configuration file template

The configuration file template in this topic applies to the regular password-based login method. If you log in to the OMS console by using single sign-on (SSO), you must integrate the OpenID Connect (OIDC) protocol and add the corresponding parameters to the config.yaml file template. For more information, see Integrate the OIDC protocol to OMS to implement SSO.

Notice

  • The same configuration file applies to all nodes in the multi-node deployment architecture. In the configuration file, you must specify the IP addresses of multiple nodes for the cm_nodes parameter and set the cm_url parameter to the VIP corresponding to Port 8088.

  • You must replace the sample values of required parameters based on your actual deployment environment. Both the required and optional parameters are described in the following table. You can specify the optional parameters as needed.

  • In the config.yaml file, you must specify the parameters in the key: value format, with a space after the colon (:).

# Information about the RM database and CM database
oms_cm_meta_host: ${oms_cm_meta_host}
oms_cm_meta_password: ${oms_cm_meta_password}
oms_cm_meta_port: ${oms_cm_meta_port}
oms_cm_meta_user: ${oms_cm_meta_user}
oms_rm_meta_host: ${oms_rm_meta_host}
oms_rm_meta_password: ${oms_rm_meta_password}
oms_rm_meta_port: ${oms_rm_meta_port}
oms_rm_meta_user: ${oms_rm_meta_user}
     
# You can customize the names of the following three databases, which are created in the MetaDB when you deploy OMS.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
     
# Configurations of the OMS cluster
# To deploy OMS on multiple nodes in a single region, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted.
cm_url: ${cm_url}
cm_location: ${cm_location}
# The cm_region parameter is not required for single-region deployment.
# cm_region: ${cm_region}
# The cm_region_cn parameter is not required for single-region deployment.
# cm_region_cn: ${cm_region_cn}
cm_nodes:
 - ${host_ip1}
 - ${host_ip2}

console_nodes:
 - ${host_ip3}
 - ${host_ip4}
     
# Configurations of the time-series database
# The default value of `tsdb_enabled`, which specifies whether to configure a time-series database, is `false`. To enable metric reporting, set the parameter to `true` and delete the comments for the parameter.
# tsdb_enabled: false 
# If the `tsdb_enabled` parameter is set to `true`, delete comments for the following parameters and specify the values based on your actual configurations.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}
The following list describes each parameter and whether it is required.

  • oms_cm_meta_host: The IP address of the CM database. It can only be a MySQL-compatible tenant of OceanBase Database V2.0 or later. Required.

  • oms_cm_meta_password: The password for connecting to the CM database. Required.

  • oms_cm_meta_port: The port number for connecting to the CM database. Required.

  • oms_cm_meta_user: The username for connecting to the CM database. Required.

  • oms_rm_meta_host: The IP address of the RM database. It can only be a MySQL-compatible tenant of OceanBase Database V2.0 or later. Required.

  • oms_rm_meta_password: The password for connecting to the RM database. Required.

  • oms_rm_meta_port: The port number for connecting to the RM database. Required.

  • oms_rm_meta_user: The username for connecting to the RM database. Required.

  • drc_rm_db: The name of the database for the OMS console. Required.

  • drc_cm_db: The name of the MetaDB for the CM service. Required.

  • drc_cm_heartbeat_db: The name of the heartbeat database for the CM service. Required.

  • cm_url: The URL of the OMS CM service, for example, http://VIP:8088. Required.

    Note

    To deploy OMS on multiple nodes in a single region, you must set the cm_url parameter to a VIP or domain name to which all CM servers in the region are mounted. We recommend that you do not set it to http://127.0.0.1:8088, which cannot be used for scaling out to multiple nodes or regions.

    The access URL of the OMS console is in the following format: <IP address of the host where OMS is deployed>:8089, for example, http://xxx.xxx.xxx.xxx:8089 or https://xxx.xxx.xxx.xxx:8089. Port 8088 is used for program calls, and Port 8089 is used for web page access. In cm_url, you must specify Port 8088.

  • cm_location: The code of the region. Value range: [0,127]. Select one number for each region. Required.

    Notice

    If you upgrade to OMS V3.2.1 from an earlier version, you must set the cm_location parameter to 0.

  • cm_region: The name of the region, for example, cn-jiangsu. Optional.

    Notice

    If you use OMS with the Alibaba Cloud Multi-Site High Availability (MSHA) service in an active-active disaster recovery scenario, use the region configured for the Alibaba Cloud service. The active-active disaster recovery feature is deprecated in OMS V4.3.1.

  • cm_region_cn: The same value as cm_region. Optional.

  • cm_nodes: The IP addresses of the servers on which the OMS CM service is deployed. In multi-node deployment mode, you must specify multiple IP addresses for this parameter. Required.

    • In integrated deployment mode, cm_nodes indicates the IP addresses of the servers on which the OMS cluster is deployed.

    • In separate deployment mode, cm_nodes specifies the servers on which the component nodes are deployed.

  • console_nodes: Required.

    • In integrated deployment mode, console_nodes and cm_nodes have the same value.

    • In separate deployment mode, console_nodes specifies the servers on which the management nodes are deployed.

  • tsdb_service: The type of the time-series database. Valid values: INFLUXDB and CERESDB. Optional. Default value: INFLUXDB.

  • tsdb_enabled: Specifies whether metric reporting is enabled for monitoring. Valid values: true and false. Optional. Default value: false.

  • tsdb_url: The IP address of the server where InfluxDB is deployed. If you set tsdb_enabled to true, modify this parameter based on the actual environment. Optional.

  • tsdb_username: The username used to connect to the time-series database. If you set tsdb_enabled to true, modify this parameter based on the actual environment. After you deploy a time-series database, manually create a user and specify the username and password. Optional.

  • tsdb_password: The password used to connect to the time-series database. If you set tsdb_enabled to true, modify this parameter based on the actual environment. Optional.

Configuration file sample

Replace related parameters with the actual values in the target deployment environment.

oms_cm_meta_host: xxx.xxx.xxx.xxx
oms_cm_meta_password: **********
oms_cm_meta_port: 2883
oms_cm_meta_user: oms_cm_meta_user
oms_rm_meta_host: xxx.xxx.xxx.xxx
oms_rm_meta_password: **********
oms_rm_meta_port: 2883
oms_rm_meta_user: oms_rm_meta_user
drc_rm_db: oms_rm
drc_cm_db: oms_cm
drc_cm_heartbeat_db: oms_cm_heartbeat
cm_url: http://VIP:8088
cm_location: 100
cm_region: cn-anhui
cm_region_cn: cn-anhui
cm_nodes:
 - xxx.xxx.xxx.xx1
 - xxx.xxx.xxx.xx2
console_nodes:
 - xxx.xxx.xxx.xx3
tsdb_service: 'INFLUXDB'
tsdb_enabled: true
tsdb_url: 'xxx.xxx.xxx.xxx:8086'
tsdb_username: username
tsdb_password: *************
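
Before you pass the file to the deployment tool, you can optionally check that it is syntactically valid YAML, for example with Python (a sketch; it assumes that Python and the PyYAML package are installed and that config.yaml is in the current directory):

python -c "import yaml; yaml.safe_load(open('config.yaml')); print('config.yaml is valid YAML')"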
