This topic describes how to scale out OMS Community Edition from a single node to multiple nodes, from a single region to multiple regions, and how to add nodes to a multi-node deployment in a single region.
Scale out from a single node to multiple nodes
You can scale out from a single node to multiple nodes in the following two scenarios:
The initial node has a VIP as the cm_url.
The initial node has a physical host IP address as the cm_url.
The initial node has a VIP as the cm_url
The specific steps are the same as those for the initial single-node deployment. For more information, see Deploy OMS Community Edition on a single node.
Notice
When you start the container, the value of -e OMS_HOST_IP must be the IP address of the current host.
For example, assume that the initial node's cm_nodes value is xxx.xxx.xxx.1 and the scale-out node's cm_nodes value is xxx.xxx.xxx.2. The config.yaml file is configured as follows.
# OMS Community Edition MetaDB information
oms_meta_host: ${oms_meta_host}
oms_meta_port: ${oms_meta_port}
oms_meta_user: ${oms_meta_user}
oms_meta_password: ${oms_meta_password}
# When you scale out from a single node to multiple nodes, make sure that the names of the following three databases are the same as those on the initial node.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
# OMS Community Edition cluster configuration
cm_url: http://VIP:8088
cm_location: ${cm_location}
cm_region: ${cm_region}
cm_region_cn: ${cm_region_cn}
cm_is_default: true
cm_nodes:
- xxx.xxx.xxx.1
- xxx.xxx.xxx.2
# Time series database configuration
# The default value is false. If you want to enable the metric reporting feature, set this parameter to true and remove the comment from the beginning of the line.
# tsdb_enabled: false
# If tsdb_enabled is set to true, remove the comment from the beginning of the line for the following parameters and set the values based on your actual situation.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}
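Before you copy this config.yaml to a scale-out node, a quick pre-flight check can catch two mistakes called out in this topic: a colon that is not followed by a space, and database names that differ from the initial node. The helper functions below are a hypothetical sketch, assuming the flat key: value layout shown above.

```shell
# Hypothetical pre-flight checks for a scale-out node's config.yaml.
# Assumes simple "key: value" lines as in the example above.

# Flag any top-level key whose colon is not followed by a space.
check_colon_space() {
  ! grep -nE '^[A-Za-z_]+:[^ ]' "$1"
}

# The three database names must match the initial node's config.
check_db_names() {
  local initial="$1" scaleout="$2" key a b
  for key in drc_rm_db drc_cm_db drc_cm_heartbeat_db; do
    a=$(awk -v k="${key}:" '$1==k{print $2}' "$initial")
    b=$(awk -v k="${key}:" '$1==k{print $2}' "$scaleout")
    if [ "$a" != "$b" ]; then
      echo "MISMATCH: $key ($a vs $b)"
      return 1
    fi
  done
  echo "OK: database names match"
}
```

Run both helpers against the initial node's file and the new node's copy before starting the container.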
The initial node has a physical host IP address as the cm_url
In this scenario, you need to perform database correction. The procedure is as follows.
Note
The tables to be corrected are location_cm, host, and resource_group in the drc_cm_db database, and cluster_info in the drc_rm_db database.
Configure a virtual IP (VIP), a server load balancing (SLB) instance, or a domain name and bind all the IP addresses of the physical hosts in the current region.
For example, bind the IP addresses xxx.xxx.xxx.1 and xxx.xxx.xxx.2 to the VIP.
Stop the processes in the original container. Take the oms_330 container as an example.
sudo docker exec -it oms_330 bash
supervisorctl stop all
Take the drc_rm_db database and the cluster_info table as an example. Log in to the database and perform the following operations.
Back up the data in the cluster_info table.
select * from cluster_info;
Delete the data in the cluster_info table.
delete from cluster_info;
Modify the configuration file in the oms_330 container.
# /home/admin/conf/config.yaml is a fixed directory in the Docker container.
sudo vi /home/admin/conf/config.yaml
# Modify the cm_url address and add cm_nodes:
cm_url: http://VIP:8088
cm_nodes:
- xxx.xxx.xxx.1
- xxx.xxx.xxx.2
Perform the initialization operation again.
sudo sh /root/docker_init.sh
After the initialization is completed, confirm that the cm_url field in the cluster_info table is set to the VIP address.
Copy the configuration file to another server and start the OMS Community Edition container.
Notice
When you start the container, set the value of -e OMS_HOST_IP to the IP address of the current host.
Replace work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG} with the name of the image actually imported by using the docker load -i command.
OMS_HOST_IP=xxx
CONTAINER_NAME=oms_xxx
IMAGE_TAG=feature_x.x.x-ce
docker run -dit --net host \
-v /data/config.yaml:/home/admin/conf/config.yaml \
-v /data/oms/oms_logs:/home/admin/logs \
-v /data/oms/oms_store:/home/ds/store \
-v /data/oms/oms_run:/home/ds/run \
# If you mount an HTTPS certificate for the OMS Community Edition container, remove the comment marks and set the following two parameters.
#-v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
#-v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
-e OMS_HOST_IP=${OMS_HOST_IP} \
--privileged=true \
--pids-limit -1 \
--ulimit nproc=65535:65535 \
--name ${CONTAINER_NAME} \
work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
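Before running the start command above, it can help to verify that every host-side path passed to -v actually exists, because Docker creates a missing bind-mount source as an empty directory owned by root. The helper below is a minimal sketch; the function name is hypothetical, and the paths are the examples used in this topic.

```shell
# Hypothetical pre-check: every host path passed to -v should already exist,
# otherwise Docker silently creates it as an empty root-owned directory.
check_mounts() {
  local f
  for f in "$@"; do
    [ -e "$f" ] || { echo "missing: $f"; return 1; }
  done
  echo "mounts OK"
}

# Example with the paths used in this topic:
# check_mounts /data/config.yaml /data/oms/oms_logs /data/oms/oms_store /data/oms/oms_run
```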
Scale out from a single region to multiple regions
The procedure for scaling out from a single region to multiple regions is the same as the procedure for deploying OMS Community Edition on a single node, except that the drc_cm_heartbeat_db parameter in the config.yaml file must be different from the heartbeat database used by the initial OMS Community Edition node. You also need to modify the cm_url, cm_location, cm_region, cm_region_cn, cm_is_default, and cm_nodes parameters to the values for the new region.
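The parameter constraints above can be expressed as a quick pre-flight check: drc_rm_db and drc_cm_db must match the initial node's config.yaml, while drc_cm_heartbeat_db must differ. The helper below is a sketch assuming the flat key: value layout shown in this topic; the function name is hypothetical.

```shell
# Hypothetical check for scaling out to a new region: the management-console
# and cluster-management databases are shared, but each region needs its own
# heartbeat database.
check_region_config() {
  local initial="$1" newnode="$2" key a b
  for key in drc_rm_db drc_cm_db; do
    a=$(awk -v k="${key}:" '$1==k{print $2}' "$initial")
    b=$(awk -v k="${key}:" '$1==k{print $2}' "$newnode")
    [ "$a" = "$b" ] || { echo "FAIL: $key must match the initial node"; return 1; }
  done
  a=$(awk '$1=="drc_cm_heartbeat_db:"{print $2}' "$initial")
  b=$(awk '$1=="drc_cm_heartbeat_db:"{print $2}' "$newnode")
  [ "$a" != "$b" ] || { echo "FAIL: drc_cm_heartbeat_db must differ"; return 1; }
  echo "region config OK"
}
```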
Log in to the OMS Community Edition deployment server.
(Optional) Deploy a time-series database.
If you want OMS Community Edition to collect and display monitoring data, deploy a time-series database. If you do not want OMS Community Edition to display monitoring data, skip this step. For more information, see Deploy a time-series database.
Prepare the configuration file.
Edit the OMS Community Edition startup configuration file in an appropriate directory. For example, you can create the configuration file at /root/config.yaml. Make sure to replace the required parameters with the actual values for the target environment. This example scales out the host xxx.xxx.xxx.1 in the Jiangsu region. The config.yaml file is configured as follows.
Notice
In the config.yaml file, a colon ( : ) must be followed by a space.
# Information about the OMS MetaDB
oms_meta_host: ${oms_meta_host}
oms_meta_port: ${oms_meta_port}
oms_meta_user: ${oms_meta_user}
oms_meta_password: ${oms_meta_password}
# The drc_rm_db and drc_cm_db parameters are the same as those of the initial node.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
# The drc_cm_heartbeat_db parameter must be different from the heartbeat database used by the initial node.
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
# Information about the OMS Community Edition cluster
# Specify the values for the new region.
cm_url: http://xxx.xxx.xxx.1:8088
cm_location: ${cm_location}
cm_region: cn-jiangsu
cm_region_cn: Jiangsu
cm_is_default: true
cm_nodes:
- xxx.xxx.xxx.1
# Information about the time-series database
# The default value is false. If you want to enable the metrics reporting feature, set this parameter to true and remove the comment mark from the beginning of the line.
# tsdb_enabled: false
# If tsdb_enabled is set to true, remove the comment mark from the beginning of the following lines and specify the values of the parameters based on your actual situation.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}
The following table describes the parameters.
| Parameter | Description | Required |
| --- | --- | --- |
| oms_meta_host | The IP address of the MetaDB, which can be a MySQL database or an OceanBase database in MySQL-compatible mode. | Yes |
| oms_meta_port | The port number of the MetaDB. | Yes |
| oms_meta_user | The username of the MetaDB. | Yes |
| oms_meta_password | The password of the MetaDB. | Yes |
| drc_rm_db | The name of the database of the management console. The value must be the same as that of the initial node. | Yes |
| drc_cm_db | The name of the MetaDB of the cluster management service. The value must be the same as that of the initial node. | Yes |
| drc_cm_heartbeat_db | The name of the heartbeat database of the cluster management service. The value must be different from the heartbeat database used by the initial node. | Yes |
| cm_url | The URL of the OMS Community Edition cluster management service, for example, http://xxx.xxx.xxx.1:8088. Notice: In a single-node deployment, the value is usually the IP address of the OMS Community Edition server. We recommend that you do not set the value to http://127.0.0.1:8088, because this address cannot be expanded to a multi-region, multi-node deployment. | Yes |
| cm_location | The region code, which is an integer in the range [0,127]. Each region is identified by a number that you select. | Yes |
| cm_region | The region, for example, cn-jiangsu. Notice: If you use OMS Community Edition in a disaster recovery and dual-active scenario with Alibaba Cloud MSHA, use the Alibaba Cloud region information as the value of cm_region. | No |
| cm_region_cn | The Chinese name of the region, for example, Jiangsu. | No |
| cm_nodes | The IP address list of the OMS Community Edition cluster management service. In this example, the value is xxx.xxx.xxx.1. | Yes |
| cm_is_default | Specifies whether the OMS Community Edition cluster management service is the default one. Notice: If multiple regions are involved, you can set only one region's cm_is_default parameter to true to identify the default OMS Community Edition cluster management service. | No. Default value: true |
| tsdb_enabled | Specifies whether to enable the metrics reporting feature (monitoring capability). Valid values: true and false. | No. Default value: false |
| tsdb_service | The type of the time-series database. Valid values: INFLUXDB and CERESDB. | No. Default value: INFLUXDB |
| tsdb_url | The URL of the server that hosts InfluxDB. When tsdb_enabled is set to true, you must modify this parameter based on your actual situation. | No |
| tsdb_username | The username of the time-series database. When tsdb_enabled is set to true, you must modify this parameter based on your actual situation. After you deploy the time-series database, you must manually create a time-series database user and specify the username and password. | No |
| tsdb_password | The password of the time-series database. When tsdb_enabled is set to true, you must modify this parameter based on your actual situation. | No |

Load the OMS Community Edition installation package to the local image repository of the Docker container.
docker load -i <OMS Community Edition installation package>
Run the following command to start the container.
OMS Community Edition supports HTTP and HTTPS protocols for accessing the OMS Community Edition console. If you want to securely access OMS Community Edition, you can provide an HTTPS certificate and mount it to a specified directory in the container. If you want to access OMS Community Edition by using the HTTP protocol, you do not need to configure any certificate.
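If you want to try HTTPS access before a real certificate is available, a self-signed pair can be generated with openssl and mounted at the certificate paths shown in the command below. This is a sketch for testing only; production deployments should use a certificate issued by your CA, and the CERT_DIR path and CN are placeholders.

```shell
# Generate a throwaway self-signed certificate/key pair for testing HTTPS
# access. CERT_DIR defaults to a temporary directory; on the OMS host you
# would use a real path such as /data/oms. The CN is a placeholder.
CERT_DIR=${CERT_DIR:-$(mktemp -d)}
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=oms.example.com" \
  -keyout "$CERT_DIR/https_key" \
  -out "$CERT_DIR/https_crt" 2>/dev/null
echo "certificate written to $CERT_DIR"
```

The resulting files can then be mounted with -v "$CERT_DIR/https_crt:/etc/pki/nginx/oms_server.crt" and -v "$CERT_DIR/https_key:/etc/pki/nginx/oms_server.key".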
OMS_HOST_IP=xxx
CONTAINER_NAME=oms_xxx
IMAGE_TAG=feature_x.x.x-ce
docker run -dit --net host \
-v /data/config.yaml:/home/admin/conf/config.yaml \
-v /data/oms/oms_logs:/home/admin/logs \
-v /data/oms/oms_store:/home/ds/store \
-v /data/oms/oms_run:/home/ds/run \
# If you mount an HTTPS certificate in the OMS Community Edition container, remove the comment marks and specify the following two parameters.
#-v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
#-v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
-e OMS_HOST_IP=${OMS_HOST_IP} \
--privileged=true \
--pids-limit -1 \
--ulimit nproc=65535:65535 \
--name ${CONTAINER_NAME} \
work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
The following table describes the parameters.
| Parameter | Description |
| --- | --- |
| OMS_HOST_IP | The IP address of the host. |
| CONTAINER_NAME | The name of the created container. The format is oms_xxx. Specify xxx based on the actual version. |
| IMAGE_TAG | After you load the OMS installation package to Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] of the loaded image, which is the unique identifier of the loaded image named <OMS_IMAGE>. Notice: Replace work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG} with the name of the image actually imported by executing the docker load -i command. |
| /data/oms/oms_logs, /data/oms/oms_store, /data/oms/oms_run | These paths can be replaced with the mount directories created on your OMS Community Edition deployment server. They store log files generated during OMS Community Edition operation, log pull components, and synchronization components. The log files are persisted on the server. Notice: The mount directories must remain unchanged during subsequent OMS Community Edition re-deployments and upgrades. |
| /home/admin/logs, /home/ds/store, /home/ds/run | Fixed directories in the container. The paths cannot be modified. |
| /data/oms/https_crt (optional), /data/oms/https_key (optional) | The mount paths of the HTTPS certificate in the OMS Community Edition container. Replace the paths based on your actual situation. If you mount an HTTPS certificate, the Nginx service in the OMS Community Edition container runs in HTTPS mode, and you must access OMS Community Edition in HTTPS mode to use the console. |
| privileged | Grants extended permissions to the container. |
| pids-limit | Specifies the process limit for the container. A value of -1 indicates no limit. |
| ulimit nproc | Specifies the maximum number of user processes. |

Enter the new container.
docker exec -it ${CONTAINER_NAME} bash
Notice
CONTAINER_NAME is the name of the created container.
In the root directory, run the metadata initialization command.
bash /root/docker_init.sh
After the command is executed, the initialization process proceeds as follows:
Initialize data in the MetaDB.
Generate configuration files for all components.
Restart all components.
Initialize resource tags and resource groups of OMS Community Edition.
When you run the docker_init.sh script, pay attention to the command-line output. After the initialization is completed, the system displays the prompt [End] All initialization steps completed successfully.
Notice
The initialization process takes 2 to 4 minutes. Please wait patiently and do not interrupt the process.
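When the docker_init.sh output is captured to a log file, the success prompt mentioned above can be checked mechanically before moving on to the next node. The sketch below is a hypothetical helper; the log path is an assumption.

```shell
# Hypothetical helper: confirm the initialization log contains the success
# marker printed by docker_init.sh before proceeding.
init_succeeded() {
  grep -qF '[End] All initialization steps completed successfully' "$1"
}

# Usage sketch (the log path is an assumption):
# bash /root/docker_init.sh | tee /tmp/oms_init.log
# init_succeeded /tmp/oms_init.log && echo "initialization confirmed"
```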
Scale out multiple nodes in a single region
The initial environment has two nodes in a single region with a VIP as the cm_url
This section describes how to scale out the deployment of OMS Community Edition V4.2.7-CE to multiple nodes in a single region by using the docker_remote_deploy.sh script.
Pull the OMS Community Edition V4.2.7-CE image and obtain the deployment script from the loaded image.
sudo docker pull <OMS_IMAGE>
sudo docker run -d --net host --name oms-config-tool <OMS_IMAGE> bash && sudo docker cp oms-config-tool:/root/docker_remote_deploy.sh . && sudo docker rm -f oms-config-tool
Run the deployment tool by using the deployment script.
sudo sh docker_remote_deploy.sh -o <OMS container mount directory> -i <local IP address> -d <OMS_IMAGE>
Complete the deployment of two nodes in a single region as prompted. For more information, see Deploy OMS in multiple nodes in a single region.
After the initialization is completed, log in to the OMS Community Edition console to confirm.
Procedure for scaling out
In this example, the cm_url specified in the initial node configuration, http://xxx.xxx.xxx.1:1188, serves as the VIP, and the cm_nodes entry for the scale-out node is xxx.xxx.xxx.2. The following procedure describes how to scale out a single region to multiple nodes.
Configure the VIP, SLB, or domain name, and bind the scale-out node.
Copy the configuration file to the new node. In the config.yaml configuration file on the new node, add the IP address xxx.xxx.xxx.2 under the cm_nodes parameter.
Run the following command to start the container.
OMS_HOST_IP=xxx.xxx.xxx.2
CONTAINER_NAME=oms_xxx
IMAGE_TAG=feature_x.x.x-ce
docker run -dit --net host \
-v /data/config.yaml:/home/admin/conf/config.yaml \
-v /data/oms/oms_logs:/home/admin/logs \
-v /data/oms/oms_store:/home/ds/store \
-v /data/oms/oms_run:/home/ds/run \
# If you want to mount an HTTPS certificate in the OMS container, remove the comment marks and set the following two parameters:
#-v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
#-v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
-e OMS_HOST_IP=${OMS_HOST_IP} \
--privileged=true \
--pids-limit -1 \
--ulimit nproc=65535:65535 \
--name ${CONTAINER_NAME} \
work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
Start parameter description
| Parameter | Description |
| --- | --- |
| OMS_HOST_IP | The IP address of the host. This parameter must be set to the IP address of the current scale-out host. |
| CONTAINER_NAME | The name of the container. It must start with oms. xxx represents the specific version. |
| IMAGE_TAG | After loading the OMS Community Edition installation package by using Docker, obtain the [IMAGE ID] or [REPOSITORY:TAG] of the corresponding image by using the docker images command. This unique identifier is stored in <OMS_IMAGE>. Notice: Replace work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG} with the name of the image actually imported by using the docker load -i command. |
| /data/oms/oms_logs, /data/oms/oms_store, /data/oms/oms_run | These paths can be replaced with local mount directories created on the server where OMS Community Edition is deployed. They store log files, data components, and synchronization files generated by OMS Community Edition, and the files are persisted on the server. Notice: The mount directory paths must remain unchanged during re-deployments or upgrades. |
| /home/admin/logs, /home/ds/store, /home/ds/run | Fixed directories in the container. The paths cannot be modified. |
| /data/oms/https_crt (optional), /data/oms/https_key (optional) | The paths where the HTTPS certificate is mounted in the OMS Community Edition container. If an HTTPS certificate is mounted, the Nginx service in the OMS Community Edition container runs in HTTPS mode, and you must access OMS Community Edition in HTTPS mode to use the console service. |
| privileged | Grants extended permissions to the container. |
| pids-limit | Sets the maximum number of processes allowed within the container. A value of -1 indicates no limit. |
| ulimit nproc | Specifies the maximum number of user processes. |

Enter the container.
docker exec -it ${CONTAINER_NAME} bash
Execute metadata initialization in the root directory.
bash /root/docker_init.sh
After the initialization step is completed, log in to the OMS Community Edition console to confirm.
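When nodes are scaled out repeatedly, the config.yaml edit that adds each new IP under cm_nodes can be scripted. The sketch below assumes the flat "- ip" list layout shown in this topic and GNU sed; the helper name is hypothetical.

```shell
# Hypothetical helper: insert a new node IP directly after the cm_nodes: line
# in config.yaml (flat "- ip" list layout; requires GNU sed for -i).
add_cm_node() {
  local file="$1" ip="$2"
  grep -qx -- "- ${ip}" "$file" && return 0   # already present, nothing to do
  sed -i "/^cm_nodes:/a- ${ip}" "$file"
}
```

For example, add_cm_node /data/config.yaml xxx.xxx.xxx.2 would add the scale-out node from the procedure above.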