This topic describes how to scale out from a single node to multiple nodes, how to scale out from a single region to multiple regions, and how to scale out in the separated deployment mode.
Notice
The scale-out nodes must be of the same version as the existing nodes.
Scale out from a single node to multiple nodes
You can scale out from a single node to multiple nodes in the following two scenarios:
The initial node is configured with a VIP as the cm_url.
The initial node is configured with a physical machine IP address as the cm_url.
The initial node is configured with a VIP as the cm_url
The specific steps are the same as those for the initial single-node deployment. For more information, see Single-node deployment.
Notice
When you start the container, set the value of -e OMS_HOST_IP to the IP address of the current host.
Assume that the cm_nodes value of the initial node is xxx.xxx.xxx.xx1 and that of the added node is xxx.xxx.xxx.xx2. The config.yaml file is configured as follows.
This keeps the config.yaml file of the initial node up to date, and the scale-out nodes can then be deployed directly by using this config.yaml file.
# RM and CM Meta Database information
oms_cm_meta_host: ${oms_cm_meta_host}
oms_cm_meta_password: ${oms_cm_meta_password}
oms_cm_meta_port: ${oms_cm_meta_port}
oms_cm_meta_user: ${oms_cm_meta_user}
oms_rm_meta_host: ${oms_rm_meta_host}
oms_rm_meta_password: ${oms_rm_meta_password}
oms_rm_meta_port: ${oms_rm_meta_port}
oms_rm_meta_user: ${oms_rm_meta_user}
# When you scale out a single node to multiple nodes, please keep the following three databases consistent with the initial node's configuration.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
# OMS cluster configuration
cm_url: http://VIP:8088
cm_location: ${cm_location}
cm_region: ${cm_region}
cm_region_cn: ${cm_region_cn}
cm_nodes:
- xxx.xxx.xxx.xx1
- xxx.xxx.xxx.xx2
# Time series database configuration
# The default value is false. If you want to enable the metrics reporting feature, set the value to true and remove the comment symbol from the beginning of the parameter.
# tsdb_enabled: false
# If tsdb_enabled is set to true, please remove the comment symbol from the beginning of the following parameters and fill in the values as needed.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}
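After the config.yaml file is updated, start the OMS container on the new node in the same way as in the single-node deployment. The following is a minimal sketch: the mount paths, container name, and image tag are placeholders, and OMS_HOST_IP must be set to the new host's own IP address (xxx.xxx.xxx.xx2 in this example).

OMS_HOST_IP=xxx.xxx.xxx.xx2
CONTAINER_NAME=oms_xxx
IMAGE_TAG=feature_x.x.x

docker run -dit --net host \
-v /data/config.yaml:/home/admin/conf/config.yaml \
-v /data/oms/oms_logs:/home/admin/logs \
-v /data/oms/oms_store:/home/ds/store \
-v /data/oms/oms_run:/home/ds/run \
-e OMS_HOST_IP=${OMS_HOST_IP} \
--privileged=true \
--pids-limit -1 \
--ulimit nproc=65535:65535 \
--name ${CONTAINER_NAME} \
work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}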
The initial node is configured with a physical machine IP address as the cm_url
In this scenario, you need to correct the database information. The procedure is as follows.
Note
The tables to be corrected are location_cm, host, and resource_group in the drc_cm_db database, and cluster_info in the drc_rm_db database.
Configure a VIP, SLB, or domain name and bind all the physical machine IP addresses in the current region.
For example, the VIP is bound to the IP addresses xxx.xxx.xxx.xx1 and xxx.xxx.xxx.xx2.
Stop the processes in the original container. Take oms_330 as an example.
sudo docker exec -it oms_330 bash
supervisorctl stop all

Take the drc_rm_db database and the cluster_info table as an example. Log in to the database and perform the following operations.

Query data in the cluster_info table for backup.

select * from cluster_info;

Delete data in the cluster_info table.

delete from cluster_info;
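The note above also lists location_cm, host, and resource_group in the drc_cm_db database as tables to be corrected. Assuming the same backup-and-delete approach applies to them (a sketch, not a documented step; verify the table contents in your environment before deleting):

-- Back up each table first, then clear it.
select * from location_cm;
delete from location_cm;
select * from host;
delete from host;
select * from resource_group;
delete from resource_group;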
Modify the configuration file in the oms_330 container.
# /home/admin/conf/config.yaml is a fixed directory in the Docker container.
sudo vi /home/admin/conf/config.yaml

# Modify the cm_url and add cm_nodes:
cm_url: http://VIP:8088
cm_nodes:
- xxx.xxx.xxx.xx1
- xxx.xxx.xxx.xx2

Perform the initialization again.

sudo sh /root/docker_init.sh

After the initialization is completed, check whether the cm_url field in the cluster_info table of the database is correctly set to the VIP address.

Copy the configuration file and start the OMS container on another host.
Notice
When you start the container, set the value of -e OMS_HOST_IP to the IP address of the current host.

OMS_HOST_IP=xxx
CONTAINER_NAME=oms_xxx
IMAGE_TAG=feature_x.x.x

docker run -dit --net host \
-v /data/config.yaml:/home/admin/conf/config.yaml \
-v /data/oms/oms_logs:/home/admin/logs \
-v /data/oms/oms_store:/home/ds/store \
-v /data/oms/oms_run:/home/ds/run \
# If you mount the HTTPS certificate in the OMS container, set the following two parameters.
-v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
-v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
-e OMS_HOST_IP=${OMS_HOST_IP} \
--privileged=true \
--pids-limit -1 \
--ulimit nproc=65535:65535 \
--name ${CONTAINER_NAME} \
work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
Scale out from a single region to multiple regions
The procedure for scaling out a single-region single-node deployment to a multi-region single-node deployment is the same as the procedure for deploying OMS in single-node mode, except that the drc_cm_heartbeat_db parameter in the config.yaml file must differ from the database used by the original OMS host. You also need to set the cm_url, cm_location, cm_region, cm_region_cn, and cm_nodes parameters to the values of the new region.
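For reference, a sketch of only the parameters that differ from the initial node's config.yaml (the heartbeat database name, region code, and IP address below are placeholders; the complete example file appears in the steps that follow):

# Must be a new name, different from the heartbeat database of the original OMS host.
drc_cm_heartbeat_db: drc_cm_heartbeat_db_region2
# Point the cluster parameters at the new region.
cm_url: http://xxx.xxx.xxx.xx1:8088
cm_location: 1
cm_region: cn-jiangsu
cm_region_cn: Jiangsu
cm_nodes:
- xxx.xxx.xxx.xx1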
Log in to the OMS deployment server.
(Optional) Deploy a time-series database.
If you want OMS to collect and display monitoring data, deploy a time-series database. If you do not want OMS to display monitoring data, skip this step. For more information, see Deploy a time-series database.
Prepare the configuration file.
Edit the OMS startup configuration file in an appropriate directory. For example, you can create the configuration file as /root/config.yaml.

Replace the required parameters with the actual values of the target environment. Take the scale-out host xxx.xxx.xxx.xx1 in the Jiangsu region as an example. The config.yaml file is configured as follows.

Notice

In the config.yaml file, the `Key: Value` format requires a space after the colon.

# RM and CM meta database information
oms_cm_meta_host: ${oms_cm_meta_host}
oms_cm_meta_password: ${oms_cm_meta_password}
oms_cm_meta_port: ${oms_cm_meta_port}
oms_cm_meta_user: ${oms_cm_meta_user}
oms_rm_meta_host: ${oms_rm_meta_host}
oms_rm_meta_password: ${oms_rm_meta_password}
oms_rm_meta_port: ${oms_rm_meta_port}
oms_rm_meta_user: ${oms_rm_meta_user}
# drc_rm_db and drc_cm_db are configured in the same way as on the initial node.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
# drc_cm_heartbeat_db must be given a new name to distinguish it from the heartbeat_db database of the initial node.
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
# OMS cluster configuration
# The following parameters are configured for the scale-out region.
cm_url: http://xxx.xxx.xxx.xx1:8088
cm_location: ${cm_location}
cm_region: cn-jiangsu
cm_region_cn: Jiangsu
cm_nodes:
- xxx.xxx.xxx.xx1
# Time-series database configuration
# The default value is false. If you want to enable the metrics reporting feature, set this parameter to true and remove the comment mark from the beginning of the line.
# tsdb_enabled: false
# If tsdb_enabled is set to true, remove the comment mark from the beginning of the following parameters and set them to appropriate values.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}

| Parameter | Description | Required |
| --- | --- | --- |
| oms_cm_meta_host | The IP address of the meta database of the CM. At present, only OceanBase Database in MySQL compatible mode is supported, and the version must be V2.0 or later. | Yes |
| oms_cm_meta_password | The password of the meta database of the CM. | Yes |
| oms_cm_meta_port | The port of the meta database of the CM. | Yes |
| oms_cm_meta_user | The username of the meta database of the CM. | Yes |
| oms_rm_meta_host | The IP address of the meta database of the RM. At present, only OceanBase Database in MySQL compatible mode is supported, and the version must be V2.0 or later. | Yes |
| oms_rm_meta_password | The password of the meta database of the RM. | Yes |
| oms_rm_meta_port | The port of the meta database of the RM. | Yes |
| oms_rm_meta_user | The username of the meta database of the RM. | Yes |
| drc_rm_db | The name of the database of the console, which must be the same as that of the initial node. | Yes |
| drc_cm_db | The name of the meta database of the cluster management service, which must be the same as that of the initial node. | Yes |
| drc_cm_heartbeat_db | The name of the heartbeat database of the cluster management service. Use a new name to distinguish it from the heartbeat_db database of the initial node. | Yes |
| cm_url | The URL of the cluster management service of OMS, for example, http://xxx.xxx.xxx.xx1:8088. Notice: In single-node deployment, the value is usually the IP address of the current OMS host. We recommend that you do not use http://127.0.0.1:8088, because this address cannot be expanded to multi-region multi-node mode. | Yes |
| cm_location | The region code, which ranges from 0 to 127. Each region is represented by a number, which you can choose. | Yes |
| cm_region | The region, for example, cn-jiangsu. Notice: If you use OMS with the Alibaba Cloud Multi-Site High Availability (MSHA) service in an active-active disaster recovery scenario, use the region information provided by Alibaba Cloud as the value of cm_region. The active-active disaster recovery feature is deprecated in OMS V4.3.1. | No |
| cm_region_cn | The Chinese name of the region, for example, Jiangsu. | No |
| cm_nodes | The IP address list of the cluster management service of OMS. In this example, xxx.xxx.xxx.xx1 is used. | Yes |
| tsdb_enabled | Specifies whether to enable the metrics reporting feature (monitoring capability). Valid values: true and false. | No. The default value is false. |
| tsdb_service | The type of the time-series database. Valid values: INFLUXDB and CERESDB. | No. The default value is INFLUXDB. |
| tsdb_url | The IP address of the host where InfluxDB is deployed. If tsdb_enabled is set to true, set this parameter to the actual value. | No |
| tsdb_username | The username of the time-series database. If tsdb_enabled is set to true, set this parameter to the actual value. After you deploy the time-series database, you must manually create a time-series database user and specify the username and password. | No |
| tsdb_password | The password of the time-series database. If tsdb_enabled is set to true, set this parameter to the actual value. | No |

Load the OMS installation package to the local image repository of the Docker container.
docker load -i <OMS installation package>

Run the following command to start the container.
OMS can be accessed through the HTTP protocol or HTTPS protocol. If you want to securely access OMS, you can provide an HTTPS certificate and mount it to the specified directory in the container. If you want to access OMS through the HTTP protocol, no configuration is required.
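If you only need HTTPS for testing, one way to produce a certificate and key at the host paths mounted in the command below is a self-signed pair generated with standard OpenSSL. This is a sketch under that assumption; the domain name is a placeholder, and a certificate issued by your own CA should be used in production.

# Generate a self-signed certificate and private key (testing only).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /data/oms/https_key \
  -out /data/oms/https_crt \
  -subj "/CN=oms.example.com"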
OMS_HOST_IP=xxx
CONTAINER_NAME=oms_xxx
IMAGE_TAG=feature_x.x.x

docker run -dit --net host \
-v /data/config.yaml:/home/admin/conf/config.yaml \
-v /data/oms/oms_logs:/home/admin/logs \
-v /data/oms/oms_store:/home/ds/store \
-v /data/oms/oms_run:/home/ds/run \
# If you have mounted the HTTPS certificate to the OMS container, you need to set the following two parameters.
-v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
-v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
-e OMS_HOST_IP=${OMS_HOST_IP} \
--privileged=true \
--pids-limit -1 \
--ulimit nproc=65535:65535 \
--name ${CONTAINER_NAME} \
work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}

| Parameter | Description |
| --- | --- |
| OMS_HOST_IP | The IP address of the host. |
| CONTAINER_NAME | The name of the created container, in the format oms_xxx. Specify xxx based on the specific version. For example, if you want to use OMS V3.1.0, specify oms_310. |
| IMAGE_TAG | After you load the OMS installation package to Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] of the loaded image, which is the unique identifier of the loaded image, namely, <OMS_IMAGE>. |
| /data/oms/oms_logs, /data/oms/oms_store, /data/oms/oms_run | These paths can be replaced with the mount directories created on your OMS deployment server. They store the log files generated during OMS operation and the files generated by the log pull component and the synchronization component, and are persisted locally. Notice: The mount directories must remain unchanged when you redeploy or upgrade OMS. |
| /home/admin/logs, /home/ds/store, /home/ds/run | Fixed directories in the container that cannot be modified. |
| /data/oms/https_crt (optional), /data/oms/https_key (optional) | The HTTPS certificate mount paths in the OMS container. You can replace them based on your actual situation. If you have mounted the HTTPS certificate, the Nginx service in the OMS container runs in HTTPS mode, and you must access OMS in HTTPS mode to use the OMS console service. |
| privileged | Grants the container extended privileges. |
| pids-limit | The process limit for the container. The value -1 indicates no limit. |
| ulimit nproc | The maximum number of user processes. |

Go to the new container.
docker exec -it ${CONTAINER_NAME} bash

Notice

CONTAINER_NAME is the name of the created container, in the format oms_xxx. Specify xxx based on the specific version. For example, if you want to use OMS V3.1.0, specify oms_310.

Run the metadata initialization command.
sh /root/docker_init.sh

After the command is executed, the initialization process proceeds as follows:
Initialize data in the meta database.
Generate configuration files for each component.
Restart all components.
Initialize OMS resource labels and resource groups.
When you run docker_init.sh, pay attention to the command line output. After the initialization is completed, the system prompts: [End] All initialization steps completed.

Notice
The initialization process takes 2 to 4 minutes. Please wait patiently and do not interrupt the process.
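After the prompt appears, you can optionally confirm that the OMS components are running inside the container. This is a verification sketch, not a documented step; it assumes the components are managed by supervisord, the same tool used with supervisorctl earlier in this topic.

docker exec -it ${CONTAINER_NAME} bash
# Inside the container: the listed components should be in the RUNNING state.
supervisorctl status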
Scale out in the separated deployment mode
Scenario example
The Hangzhou region has only one component node, and you need to add another component node to this region.
Procedure
Modify the config.yaml file of the existing management node.

Go to the management node.

sudo docker exec -it <oms_console> bash

Run the vim /home/admin/conf/config.yaml command.

Add the IP address of the new component node to cm_nodes, as shown in the sketch below, and copy the complete config.yaml file for the new component node.
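A sketch of the cm_nodes change in config.yaml; the IP addresses are placeholders, where xxx.xxx.xxx.1 is the new component node started in the next step:

cm_nodes:
- xxx.xxx.xxx.0   # existing node
- xxx.xxx.xxx.1   # new component node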
Deploy the component node by using the docker run command.
Prepare the directories and files required by the OMS container and the config.yaml file.

sudo mkdir -m 777 /home/lxxxx.lxxxx/lxxxx_oms_run_022103 && cd /home/lxxxx.lxxxx/lxxxx_oms_run_022103 && mkdir -m 777 oms_logs oms_run oms_store && sudo vim /home/lxxxx.lxxxx/lxxxx_oms_run_022103/config.yaml
sudo chmod 777 /home/lxxxx.lxxxx/lxxxx_oms_run_022103/config.yaml

Copy the configuration file and start the OMS container on another host.
sudo docker run -dit --net host \
-v /home/lxxxx.lxxxx/lxxxx_oms_run_022103/config.yaml:/home/admin/conf/config.yaml \
-v /home/lxxxx.lxxxx/lxxxx_oms_run_022103/oms_logs:/home/admin/logs \
-v /home/lxxxx.lxxxx/lxxxx_oms_run_022103/oms_store:/home/ds/store \
-v /home/lxxxx.lxxxx/lxxxx_oms_run_022103/oms_run:/home/ds/run \
-e OMS_HOST_IP=xxx.xxx.xxx.1 \
--privileged=true \
--pids-limit -1 \
--ulimit nproc=65535:65535 \
--name oms_component \
188a066a27ab
Run the initialization command.
sh docker_init.sh

Run the following command on the management node.
python -m omsflow.scripts.units.oms_cluster_manager add_resource
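The exact sequence is sketched below, assuming the command is run inside the management node container entered earlier (<oms_console> is the management container name used in the previous steps):

sudo docker exec -it <oms_console> bash
# Inside the management node container:
python -m omsflow.scripts.units.oms_cluster_manager add_resource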