This topic describes how to deploy OceanBase Migration Service (OMS) Community Edition across multiple regions and nodes, using Hangzhou and Beijing as example regions.
Terms
OMS_IMAGE: After you load the OMS Community Edition installation package into Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] of the loaded image. Either value uniquely identifies the loaded image and is represented as <OMS_IMAGE> in this topic. Example:
$ sudo docker images
REPOSITORY                               TAG                      IMAGE ID
work.oceanbase-dev.com/oceanbase/oms     feature_4.2.11_ce        2786e8a6eccd
In this example, <OMS_IMAGE> can be work.oceanbase-dev.com/oceanbase/oms:feature_4.2.11_ce or 2786e8a6eccd. In the subsequent instructions, replace ${OMS_IMAGE} with the value obtained in this example.
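The sample manifests below reference the image as ${OMS_IMAGE}, which kubectl does not substitute by itself. The following is a minimal sketch of rendering the placeholder before applying a manifest; the sample file name is illustrative, and the demo fragment stands in for the full region manifest.

```shell
# Example value; replace with the image reference from `docker images`.
OMS_IMAGE="work.oceanbase-dev.com/oceanbase/oms:feature_4.2.11_ce"

# Demo manifest fragment containing the placeholder (single quotes keep it literal).
printf 'image: ${OMS_IMAGE}\n' > sample-fragment.yaml

# Substitute the placeholder; run the same sed command over the full region manifests.
sed "s|\${OMS_IMAGE}|${OMS_IMAGE}|g" sample-fragment.yaml
# → image: work.oceanbase-dev.com/oceanbase/oms:feature_4.2.11_ce
```

You would then pass the rendered file to kubectl apply -f instead of the original.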
Deploy OMS Community Edition
Log in to the k8s environment and prepare the configuration files for each region.
The following sample configuration files do not pin the Pod to a specific node. If you need to specify the node on which the Pod starts, first check whether the nodes in the k8s cluster carry a label that identifies the IDC where each node is located. If they do, you can use this label to specify the IDC where OMS Community Edition starts. To view the node labels, run the following command.
kubectl get nodes --show-labels

NAME            STATUS   ROLES                  AGE   VERSION        LABELS
sqaxxxx.sa128   Ready    control-plane,master   37d   v1.31.1+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=sqaxxxx.sa128,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s,zone=zone1

The following are sample configuration files for the Hangzhou and Beijing regions. Replace the parameters with the actual values of your deployment environment. In kind: Namespace, the value of name must be consistent with the value of namespace in the other kind sections.

Hangzhou region
apiVersion: v1
kind: Namespace
metadata:
  name: omsregion-1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: oms-config
  namespace: omsregion-1
data:
  config.yaml: |
    "apsara_audit_enable": "false"
    "apsara_audit_sls_access_key": ""
    "apsara_audit_sls_access_secret": ""
    "apsara_audit_sls_endpoint": ""
    "apsara_audit_sls_ops_site_topic": ""
    "apsara_audit_sls_user_site_topic": ""
    "cm_is_default": !!bool "true"
    # The region code. Each region has a unique number.
    "cm_location": "0"
    # The node address, in the ${pod_name}.${service_name}.${namespace_name}.svc format.
    # pod_name is the name in kind: StatefulSet, with an ordinal appended, starting from 0. For example, if replicas is 2 in a multi-node deployment, the pod_name values are oms-0 and oms-1.
    # service_name is the name in the first kind: Service.
    # namespace_name is the name in kind: Namespace.
    "cm_nodes":
    - "oms-0.oms-service.omsregion-1.svc"
    - "oms-1.oms-service.omsregion-1.svc"
    "cm_region": "hangzhou"
    "cm_region_cn": "Hangzhou"
    "cm_server_port": "8088"
    # The format is http://${service_name}.${namespace_name}.svc:8088.
    "cm_url": "http://oms-service.omsregion-1.svc:8088"
    "drc_cm_db": "cm_db_name"
    "drc_cm_heartbeat_db": "cm_hb_name_hz"
    "drc_rm_db": "rm_db_name"
    "ghana_server_port": "8090"
    "init_db": "true"
    "nginx_server_port": "8089"
    "oms_meta_host": "xxx.xxx.xxx.1"
    "oms_meta_password": "oms_meta_password"
    "oms_meta_port": "2881"
    "oms_meta_user": "xxx@tenant_name"
    "sshd_server_port": "2023"
    "supervisor_server_port": "9000"
    "tsdb_enabled": "true"
    "tsdb_password": "tsdb_password"
    "tsdb_service": "INFLUXDB"
    "tsdb_url": "xxx.xxx.xxx.2:8086"
    "tsdb_username": "tsdb_username"
---
apiVersion: v1
kind: Service
metadata:
  name: oms-service
  namespace: omsregion-1
spec:
  clusterIP: None
  selector:
    app: oms
---
apiVersion: v1
kind: Service
metadata:
  name: oms-service-nodeport
  namespace: omsregion-1
spec:
  type: NodePort
  selector:
    app: oms
  ports:
  - name: nginx
    protocol: TCP
    port: 8089
    targetPort: 8089
  - name: ghana
    protocol: TCP
    port: 8090
    targetPort: 8090
  - name: cm
    protocol: TCP
    port: 8088
    targetPort: 8088
  - name: sshd
    protocol: TCP
    port: 2023
    targetPort: 2023
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: oms
  namespace: omsregion-1
  labels:
    app: oms
spec:
  replicas: 2
  serviceName: "oms-service"
  selector:
    matchLabels:
      app: oms
  template:
    metadata:
      labels:
        app: oms
    spec:
      # Add a node selector to specify the IDC where OMS Community Edition is started.
      # Different regions require different labels on the nodes. For example, the label for the Hangzhou region is zone=zone1.
      nodeSelector:
        zone: "zone1"
      initContainers:
      - name: copy-config
        image: busybox:1.35
        command: ['sh', '-c', 'cp /config-ro/config.yaml /work/config.yaml']
        volumeMounts:
        - name: config-volume
          mountPath: /config-ro
          readOnly: true
        - name: writable-config
          mountPath: /work
      containers:
      - name: oms
        image: ${OMS_IMAGE}
        env:
        # The value of POD_NAME indicates the name of the current Pod. Do not modify it.
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # For oms-service.omsregion-1, replace it with the actual ${service_name} and ${namespace_name}.
        - name: OMS_HOST_IP
          value: $(POD_NAME).oms-service.omsregion-1.svc
        - name: HOST_DOCKER_VERSION
          value: "20.10.7"
        ports:
        - containerPort: 8088
        - containerPort: 8089
        - containerPort: 8090
        - containerPort: 9000
        - containerPort: 2023
        volumeMounts:
        - name: writable-config
          mountPath: /home/admin/conf/config.yaml
          subPath: config.yaml
        - name: logs-volume
          mountPath: /home/admin/logs
        - name: store-volume
          mountPath: /home/ds/store
        - name: run-volume
          mountPath: /home/ds/run
      volumes:
      - name: config-volume
        configMap:
          name: oms-config
      - name: writable-config
        emptyDir: {}
  # The following three volumes store logs, incremental data, and runtime data. You can increase their sizes as needed.
  # We recommend that you set the size of logs-volume to at least 200 GiB.
  # We recommend that you set the size of store-volume to at least 500 GiB. It stores incremental data, so increase it generously.
  # We recommend that you set the size of run-volume to at least 300 GiB.
  volumeClaimTemplates:
  - metadata:
      name: logs-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-path"
      resources:
        requests:
          storage: 200Gi
  - metadata:
      name: store-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-path"
      resources:
        requests:
          storage: 500Gi
  - metadata:
      name: run-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-path"
      resources:
        requests:
          storage: 300Gi

Beijing region
apiVersion: v1
kind: Namespace
metadata:
  name: omsregion-2
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: oms-config
  namespace: omsregion-2
data:
  config.yaml: |
    "apsara_audit_enable": "false"
    "apsara_audit_sls_access_key": ""
    "apsara_audit_sls_access_secret": ""
    "apsara_audit_sls_endpoint": ""
    "apsara_audit_sls_ops_site_topic": ""
    "apsara_audit_sls_user_site_topic": ""
    "cm_is_default": !!bool "false"
    # The region code. Each region has a unique number.
    "cm_location": "1"
    # The node address, in the ${pod_name}.${service_name}.${namespace_name}.svc format.
    # pod_name is the name in kind: StatefulSet, with an ordinal appended, starting from 0. For example, if replicas is 2 in a multi-node deployment, the pod_name values are oms-0 and oms-1.
    # service_name is the name in the first kind: Service.
    # namespace_name is the name in kind: Namespace.
    "cm_nodes":
    - "oms-0.oms-service.omsregion-2.svc"
    - "oms-1.oms-service.omsregion-2.svc"
    "cm_region": "beijing"
    "cm_region_cn": "Beijing"
    "cm_server_port": "8088"
    # The format is http://${service_name}.${namespace_name}.svc:8088.
    "cm_url": "http://oms-service.omsregion-2.svc:8088"
    "drc_cm_db": "cm_db_name"
    "drc_cm_heartbeat_db": "cm_hb_name_bj"
    "drc_rm_db": "rm_db_name"
    "ghana_server_port": "8090"
    "init_db": "true"
    "nginx_server_port": "8089"
    "oms_meta_host": "xxx.xxx.xxx.1"
    "oms_meta_password": "oms_meta_password"
    "oms_meta_port": "2881"
    "oms_meta_user": "xxx@tenant_name"
    "sshd_server_port": "2023"
    "supervisor_server_port": "9000"
    "tsdb_enabled": "true"
    "tsdb_password": "tsdb_password"
    "tsdb_service": "INFLUXDB"
    "tsdb_url": "xxx.xxx.xxx.2:8086"
    "tsdb_username": "tsdb_username"
---
apiVersion: v1
kind: Service
metadata:
  name: oms-service
  namespace: omsregion-2
spec:
  clusterIP: None
  selector:
    app: oms
---
apiVersion: v1
kind: Service
metadata:
  name: oms-service-nodeport
  namespace: omsregion-2
spec:
  type: NodePort
  selector:
    app: oms
  ports:
  - name: nginx
    protocol: TCP
    port: 8089
    targetPort: 8089
  - name: ghana
    protocol: TCP
    port: 8090
    targetPort: 8090
  - name: cm
    protocol: TCP
    port: 8088
    targetPort: 8088
  - name: sshd
    protocol: TCP
    port: 2023
    targetPort: 2023
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: oms
  namespace: omsregion-2
  labels:
    app: oms
spec:
  replicas: 2
  serviceName: "oms-service"
  selector:
    matchLabels:
      app: oms
  template:
    metadata:
      labels:
        app: oms
    spec:
      # Add a node selector to specify the IDC where OMS Community Edition is started.
      # Different regions require different labels on the nodes. For example, the label for the Beijing region is zone=zone2.
      nodeSelector:
        zone: "zone2"
      initContainers:
      - name: copy-config
        image: busybox:1.35
        command: ['sh', '-c', 'cp /config-ro/config.yaml /work/config.yaml']
        volumeMounts:
        - name: config-volume
          mountPath: /config-ro
          readOnly: true
        - name: writable-config
          mountPath: /work
      containers:
      - name: oms
        image: ${OMS_IMAGE}
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: OMS_HOST_IP
          value: $(POD_NAME).oms-service.omsregion-2.svc
        - name: HOST_DOCKER_VERSION
          value: "20.10.7"
        ports:
        - containerPort: 8088
        - containerPort: 8089
        - containerPort: 8090
        - containerPort: 9000
        - containerPort: 2023
        volumeMounts:
        - name: writable-config
          mountPath: /home/admin/conf/config.yaml
          subPath: config.yaml
        - name: logs-volume
          mountPath: /home/admin/logs
        - name: store-volume
          mountPath: /home/ds/store
        - name: run-volume
          mountPath: /home/ds/run
      volumes:
      - name: config-volume
        configMap:
          name: oms-config
      - name: writable-config
        emptyDir: {}
  # The following three volumes store logs, incremental data, and runtime data. You can increase their sizes as needed.
  # We recommend that you set the size of logs-volume to at least 200 GiB.
  # We recommend that you set the size of store-volume to at least 500 GiB. It stores incremental data, so increase it generously.
  # We recommend that you set the size of run-volume to at least 300 GiB.
  volumeClaimTemplates:
  - metadata:
      name: logs-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-path"
      resources:
        requests:
          storage: 200Gi
  - metadata:
      name: store-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-path"
      resources:
        requests:
          storage: 500Gi
  - metadata:
      name: run-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-path"
      resources:
        requests:
          storage: 300Gi
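The cm_nodes entries follow the ${pod_name}.${service_name}.${namespace_name}.svc convention described in the manifest comments. The following illustrative sketch (not part of OMS) derives the expected addresses from the StatefulSet settings, which can help you double-check a hand-edited ConfigMap:

```python
def cm_node_addresses(statefulset_name: str, service_name: str,
                      namespace: str, replicas: int) -> list[str]:
    """Build the expected cm_nodes entries for a StatefulSet.

    Pod names are <statefulset_name>-<ordinal>, with ordinals starting at 0.
    """
    return [
        f"{statefulset_name}-{i}.{service_name}.{namespace}.svc"
        for i in range(replicas)
    ]

# Values from the Hangzhou manifest: StatefulSet oms, Service oms-service,
# Namespace omsregion-1, replicas 2.
print(cm_node_addresses("oms", "oms-service", "omsregion-1", 2))
# → ['oms-0.oms-service.omsregion-1.svc', 'oms-1.oms-service.omsregion-1.svc']
```

If you change replicas, the name in kind: StatefulSet, or the namespace, regenerate and update cm_nodes accordingly.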
Install OMS Community Edition in the Hangzhou and Beijing regions, respectively.
For example, the configuration file for the Hangzhou region is named oms-k8s-statefulset-region-1.yaml, and the configuration file for the Beijing region is named oms-k8s-statefulset-region-2.yaml. The installation commands are as follows.

kubectl apply -f oms-k8s-statefulset-region-1.yaml
kubectl apply -f oms-k8s-statefulset-region-2.yaml

To view the details of the nodes, run the following commands.
kubectl get pods -n omsregion-1
kubectl get pods -n omsregion-2

# View the installation log of OMS Community Edition.
kubectl logs -f oms-0 -n omsregion-1
kubectl logs -f oms-1 -n omsregion-1

View the external access port. The port mapped to 8089 is the port for accessing the OMS Community Edition management console.
kubectl get svc -n omsregion-1
kubectl get svc -n omsregion-2
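As a small sketch, once you have a node IP and the NodePort that the oms-service-nodeport Service maps to 8089, the console URL is assembled as follows. Both values below are placeholders for illustration; read the real ones from kubectl get nodes -o wide and the PORT(S) column of kubectl get svc.

```shell
# Placeholder values; substitute a real node IP and the NodePort mapped to 8089.
NODE_IP="10.0.0.1"
NGINX_NODEPORT="30089"

# The management console is served behind the nginx port (8089).
CONSOLE_URL="http://${NODE_IP}:${NGINX_NODEPORT}"
echo "${CONSOLE_URL}"
# → http://10.0.0.1:30089
```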
Uninstall OMS Community Edition
To uninstall OMS Community Edition, run the following command:
kubectl delete -f <configuration file name>
Example:
kubectl delete -f oms-k8s-statefulset-region-1.yaml
kubectl delete -f oms-k8s-statefulset-region-2.yaml