This topic describes how to deploy an OceanBase cluster in a Kubernetes environment by using ob-operator.
Prerequisites
Make sure that the following conditions are met:
You have an available Kubernetes cluster with at least nine CPU cores, 33 GB of memory, and 360 GB of storage space.
You have installed cert-manager, on which ob-operator depends. For more information about how to install cert-manager, see Installation.
You have installed a MySQL client or OceanBase Client (OBClient) for connecting to the OceanBase cluster.
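The CPU, memory, and storage minimums above match the resource totals of the sample three-zone cluster deployed later in this topic (2 CPU cores and 10 Gi of memory per observer, 1 CPU core and 1 Gi of memory per monitor container, and 120 Gi of storage per node). A quick check of the arithmetic:

```shell
# 3 zones x (2 observer CPUs + 1 monitor CPU) = 9 CPU cores
echo $(( 3 * (2 + 1) ))
# → 9
# 3 zones x (10 Gi observer memory + 1 Gi monitor memory) = 33 GB of memory
echo $(( 3 * (10 + 1) ))
# → 33
# 3 zones x (50 Gi data + 50 Gi redo log + 20 Gi runtime log) = 360 GB of storage
echo $(( 3 * (50 + 50 + 20) ))
# → 360
```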
Note
This topic takes ob-operator V2.2.0 as an example. The procedure may vary with the ob-operator version. For the procedures of other versions, see the ob-operator documentation of the corresponding version.
Deploy ob-operator
ob-operator simplifies the deployment and management of OceanBase clusters in Kubernetes. You can use either of the following methods to deploy ob-operator:
Deploy ob-operator by using Helm
Run the following commands to deploy ob-operator:
helm repo add ob-operator https://oceanbase.github.io/ob-operator/
helm install ob-operator ob-operator/ob-operator --namespace=oceanbase-system --create-namespace --version=2.2.0
where:
`--namespace` specifies the namespace, which can be customized. The recommended value is `oceanbase-system`.
`--version` specifies the version of ob-operator. The latest version is recommended.
Deploy ob-operator by using a configuration file
Run the following command to use a configuration file to deploy ob-operator:
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/2.2.0_release/deploy/operator.yaml
Customize ob-operator
To customize ob-operator, run the following command to download the configuration file:
wget https://raw.githubusercontent.com/oceanbase/ob-operator/2.2.0_release/deploy/operator.yaml
Modify the configuration file as needed. For the description of parameters in the configuration file, see Configure ob-operator. Then, run the following command to deploy ob-operator:
kubectl apply -f operator.yaml
Deploy an OceanBase cluster
Create a PersistentVolumeClaim (PVC).
When you use ob-operator to deploy an OceanBase cluster, you must create a PVC to store data of the cluster. In this topic, local-path-provisioner is used to manage PVCs. Run the following command to deploy local-path-provisioner:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
For more information, see local-path-provisioner.
Create a namespace.
Run the following command to create a namespace used for deploying the OceanBase cluster:
kubectl create namespace oceanbase
After you create the namespace, you can run the `kubectl get namespace oceanbase` command to verify the creation result. If the status of the namespace is `Active`, it is successfully created.
Create secrets for default users.
Before you deploy an OceanBase cluster, you must run the following commands to create several secrets for specific users in the OceanBase cluster:
kubectl create secret -n oceanbase generic root-password --from-literal=password='<root_password>'
kubectl create secret -n oceanbase generic proxyro-password --from-literal=password='<proxyro_password>'
Here, `<root_password>` and `<proxyro_password>` are the passwords of the `root@sys` and `proxyro@sys` users, respectively. Replace them with passwords of your own. After you create the secrets, you can run the `kubectl get secret -n oceanbase` command to verify the creation result.
Define the OceanBase cluster.
An OceanBase cluster is defined by a YAML configuration file. You can run the following command to create a configuration file, which is named obcluster.yaml in this example.
vi obcluster.yaml
The following example shows the content of the configuration file.
apiVersion: oceanbase.oceanbase.com/v1alpha1
kind: OBCluster
metadata:
  name: obcluster
  namespace: oceanbase
spec:
  clusterName: obcluster
  clusterId: 1
  userSecrets:
    root: root-password
    proxyro: proxyro-password
  topology:
    - zone: zone1
      replica: 1
    - zone: zone2
      replica: 1
    - zone: zone3
      replica: 1
  observer:
    image: oceanbase/oceanbase-cloud-native:4.2.1.1-101010012023111012
    resource:
      cpu: 2
      memory: 10Gi
    storage:
      dataStorage:
        storageClass: local-path
        size: 50Gi
      redoLogStorage:
        storageClass: local-path
        size: 50Gi
      logStorage:
        storageClass: local-path
        size: 20Gi
  monitor:
    image: oceanbase/obagent:4.2.1-100000092023101717
    resource:
      cpu: 1
      memory: 1Gi
The following table describes the main parameters in the configuration file. For information about all parameters, see Create a cluster.
| Parameter | Required? | Description |
| --- | --- | --- |
| metadata.name | Yes | The name of the cluster. It is the resource name in the Kubernetes environment. |
| metadata.namespace | Yes | The namespace to which the OceanBase cluster belongs. |
| spec.clusterName | Yes | The name of the OceanBase cluster. |
| spec.clusterId | Yes | The ID of the OceanBase cluster. |
| spec.userSecrets | Yes | The secrets of default users in the OceanBase cluster. |
| spec.userSecrets.root | Yes | The name of the secret of the `root@sys` user in the OceanBase cluster. The secret must contain the `password` field. |
| spec.userSecrets.proxyro | Yes | The name of the secret of the `proxyro@sys` user in the OceanBase cluster. The secret must contain the `password` field. |
| spec.topology | Yes | The topology of the OceanBase cluster, which contains the definitions of zones. |
| spec.topology[i].zone | Yes | The name of a zone in the OceanBase cluster. |
| spec.topology[i].replica | Yes | The number of OBServer nodes in a zone of the OceanBase cluster. |
| spec.observer.image | Yes | The OceanBase Database image used. |
| spec.observer.resource | Yes | The resource configurations for each OBServer node in the OceanBase cluster. |
| spec.observer.resource.cpu | Yes | The number of CPU cores for each OBServer node in the OceanBase cluster. We recommend that you set the value to an integer of at least 2. A value smaller than 2 will cause system exceptions. |
| spec.observer.resource.memory | Yes | The size of memory for each OBServer node in the OceanBase cluster. We recommend that you set a value of at least 10Gi. A value smaller than 10Gi will cause system exceptions. |
| spec.observer.storage | Yes | The storage space for each OBServer node in the OceanBase cluster. |
| spec.observer.storage.dataStorage | Yes | The data storage space for each OBServer node in the OceanBase cluster. We recommend that you set the value to at least three times the memory size. |
| spec.observer.storage.redoLogStorage | Yes | The clog storage space for each OBServer node in the OceanBase cluster. We recommend that you set the value to at least three times the memory size. |
| spec.observer.storage.logStorage | Yes | The runtime log storage space for each OBServer node in the OceanBase cluster. We recommend that you set the value to 10Gi or higher. |
| spec.observer.storage.*.storageClass | Yes | The storage class specified for a PVC when the PVC is created. This setting applies to each of the storage configurations (dataStorage, redoLogStorage, and logStorage). |
| spec.observer.storage.*.size | Yes | The capacity specified for a PVC when the PVC is created. This setting applies to each of the storage configurations. |
| spec.monitor | No | The monitoring configurations. We recommend that you configure the monitoring feature. ob-operator uses OBAgent to collect monitoring data and interconnects with Prometheus to monitor the status of the OceanBase cluster. |
| spec.monitor.image | Yes | The image used by OBAgent. |
| spec.monitor.resource | Yes | The resources used by the monitoring container. |
| spec.monitor.resource.cpu | Yes | The CPU resources used by the monitoring container. |
| spec.monitor.resource.memory | Yes | The memory resources used by the monitoring container. |
Deploy the OceanBase cluster.
kubectl apply -f obcluster.yaml
Run the following command to query the status of the OceanBase cluster. When the status changes to `Running`, the OceanBase cluster has been deployed and initialized. This process typically takes a couple of minutes, with the main time-consuming steps being image pulling and cluster initialization.
kubectl get obclusters.oceanbase.oceanbase.com obcluster -n oceanbase
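If you only want a quick functional test on a smaller Kubernetes cluster, the three-zone manifest can be trimmed to a single zone. The following is a sketch, not part of the official procedure: the name, cluster ID, image, secrets, and storage classes are assumed to match the example in this topic, and a single-zone cluster provides no fault tolerance, so it is unsuitable for production.

```yaml
# Hypothetical single-zone variant for functional testing only (no fault tolerance).
apiVersion: oceanbase.oceanbase.com/v1alpha1
kind: OBCluster
metadata:
  name: obcluster-dev
  namespace: oceanbase
spec:
  clusterName: obcluster-dev
  clusterId: 2
  userSecrets:
    root: root-password
    proxyro: proxyro-password
  topology:
    - zone: zone1
      replica: 1
  observer:
    image: oceanbase/oceanbase-cloud-native:4.2.1.1-101010012023111012
    resource:
      cpu: 2
      memory: 10Gi
    storage:
      dataStorage:
        storageClass: local-path
        size: 50Gi
      redoLogStorage:
        storageClass: local-path
        size: 50Gi
      logStorage:
        storageClass: local-path
        size: 20Gi
```

This variant needs only a third of the resources listed in the prerequisites, which is why it can be useful for evaluation environments.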
Directly connect to the OceanBase cluster
After you deploy an OceanBase cluster, you can perform the following steps to directly connect to the OceanBase cluster. We recommend that you deploy OceanBase Database Proxy (ODP) and connect to the OceanBase cluster by using ODP. For information about how to deploy ODP, see the Deploy ODP section in this topic.
Obtain the addresses of OceanBase cluster pods.
kubectl get pods -n oceanbase -l ref-obcluster=obcluster -o wide
In this example, `oceanbase` corresponds to the value of metadata.namespace, and `obcluster` corresponds to the value of metadata.name. Replace the values based on the actual situation.
The output is as follows:
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
obcluster-1-zone2-c76d303299a9   2/2     Running   0          4m    10.10.10.1   node-x   <none>           <none>
obcluster-1-zone3-2cdf3cd8a05e   2/2     Running   0          4m    10.10.10.2   node-x   <none>           <none>
obcluster-1-zone1-94904330202f   2/2     Running   0          4m    10.10.10.3   node-x   <none>           <none>
Connect to the OceanBase cluster.
You can connect to the OceanBase cluster by using the IP address of any node in the cluster. The command is as follows:
mysql -h10.10.10.1 -P2881 -uroot@sys -p oceanbase -A -c
Deploy ODP
ODP is defined by a YAML configuration file. Perform the following steps to deploy ODP:
Create a configuration file for ODP.
vi obproxy.yaml
In this example, the configuration file is named obproxy.yaml. Here is a sample configuration file:
apiVersion: v1
kind: Service
metadata:
  name: svc-obproxy
  namespace: oceanbase
spec:
  type: ClusterIP
  selector:
    app: obproxy
  ports:
    - name: "sql"
      port: 2883
      targetPort: 2883
    - name: "prometheus"
      port: 2884
      targetPort: 2884
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: obproxy
  namespace: oceanbase
spec:
  selector:
    matchLabels:
      app: obproxy
  replicas: 2
  template:
    metadata:
      labels:
        app: obproxy
    spec:
      containers:
        - name: obproxy
          image: oceanbase/obproxy-ce:4.2.1.0-11
          ports:
            - containerPort: 2883
              name: "sql"
            - containerPort: 2884
              name: "prometheus"
          env:
            - name: APP_NAME
              value: helloworld
            - name: OB_CLUSTER
              value: obcluster
            - name: RS_LIST
              value: ${RS_LIST}
            - name: PROXYRO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: proxyro-password
                  key: password
          resources:
            limits:
              memory: 2Gi
              cpu: "1"
            requests:
              memory: 200Mi
              cpu: 200m
where:
`APP_NAME` indicates the application name of ODP.
`OB_CLUSTER` indicates the name of the OceanBase cluster that ODP connects to.
`RS_LIST` indicates the RootService list of the OceanBase cluster, in the format of `${ip1}:${sql_port1};${ip2}:${sql_port2};${ip3}:${sql_port3}`. Replace it based on the actual situation. You can directly connect to the OceanBase cluster and execute the `SELECT GROUP_CONCAT(CONCAT(SVR_IP, ':', SQL_PORT) SEPARATOR ';') AS RS_LIST FROM oceanbase.DBA_OB_SERVERS;` statement to view the RootService list. For information about how to directly connect to an OceanBase cluster, see the Directly connect to the OceanBase cluster section in this topic.
`PROXYRO_PASSWORD` references the secret that stores the password of the `proxyro@sys` user. Set `name` to the name of the secret created earlier for this user. The secret must contain the `password` field.
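Because ${RS_LIST} is a placeholder in the sample manifest, it must be replaced with a real value before the manifest is applied. A minimal sketch of the substitution, using a hypothetical RootService list and a single sample line from the manifest:

```shell
# Hypothetical RootService list; obtain the real one with the SQL query above.
RS_LIST='10.10.10.1:2881;10.10.10.2:2881;10.10.10.3:2881'
# Substitute the ${RS_LIST} placeholder (demonstrated on one sample line).
printf 'value: ${RS_LIST}\n' | sed "s/\${RS_LIST}/${RS_LIST}/"
# → value: 10.10.10.1:2881;10.10.10.2:2881;10.10.10.3:2881
```

In practice, the same substitution can be applied to the whole file before deployment, for example with `sed "s/\${RS_LIST}/${RS_LIST}/" obproxy.yaml | kubectl apply -f -`.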
Deploy ODP.
kubectl apply -f obproxy.yaml
Verify whether ODP is successfully deployed.
Run the following command to view the status of ODP pods:
kubectl get pod -A | grep obproxy
The output is as follows, indicating that two ODP pods exist:
oceanbase     obproxy-5cb8f4d975-pmr59   1/1   Running   0   21s
oceanbase     obproxy-5cb8f4d975-xlvjp   1/1   Running   0   21s
Run the following command to view the ODP service:
kubectl get svc svc-obproxy -n oceanbase
The output is as follows:
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
svc-obproxy   ClusterIP   10.10.10.1   <none>        2883/TCP,2884/TCP   2m26s
Connect to the OceanBase cluster by using ODP
We recommend that you connect to the OceanBase cluster by using ODP. After you deploy OceanBase Database and ODP, perform the following steps to connect to the OceanBase cluster:
Obtain the service address of ODP.
kubectl get svc ${servicename} -n ${namespace}
# For example:
kubectl get svc svc-obproxy -n oceanbase
The output is as follows:
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
svc-obproxy   ClusterIP   10.10.10.1   <none>        2883/TCP,2884/TCP   2m26s
Connect to the OceanBase cluster.
You can run the following command to connect to the cluster based on the values of `CLUSTER-IP` and `PORT(S)` in the preceding output:
mysql -h10.10.10.1 -P2883 -uroot@sys#obcluster -p oceanbase -A -c
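When you connect through ODP, the username embeds the tenant and cluster names in the format `${user}@${tenant}#${cluster}`, which is why the command above uses root@sys#obcluster. A small sketch that assembles the `-u` argument from the sample values in this topic:

```shell
# Build the ODP username from its three components: user, tenant, and cluster.
user='root'; tenant='sys'; cluster='obcluster'
echo "-u${user}@${tenant}#${cluster}"
# → -uroot@sys#obcluster
```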
Monitor the OceanBase cluster
You can use OceanBase Dashboard to monitor the OceanBase cluster. OceanBase Dashboard is a GUI-based O&M tool designed to work with ob-operator; its latest version at the time of writing is V0.2.1. It provides features such as cluster management, tenant management, backup management, performance monitoring, and endpoint connection, making it the recommended way to monitor the performance metrics of an OceanBase cluster in a Kubernetes environment.
Deploy OceanBase Dashboard
We recommend that you install OceanBase Dashboard by using Helm, the package manager of Kubernetes. After Helm is installed, run the following three commands to install OceanBase Dashboard in the default namespace:
helm repo add ob-operator https://oceanbase.github.io/ob-operator/
helm repo update ob-operator
helm install oceanbase-dashboard ob-operator/oceanbase-dashboard --version=0.2.1
To install OceanBase Dashboard in another namespace, replace the last installation command with the following one:
helm install oceanbase-dashboard ob-operator/oceanbase-dashboard --version=0.2.1 -n <namespace> --create-namespace
In this command, <namespace> specifies the target namespace where OceanBase Dashboard is to be installed; the --create-namespace option creates that namespace if it does not exist. If your cluster supports LoadBalancer services, you can also add --set service.type=LoadBalancer during installation to set the service type to LoadBalancer.
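The --set flag shown above can equivalently be supplied through a Helm values file, which is easier to keep under version control. This is a sketch; the file name is hypothetical, and only the service.type key from the command above is assumed:

```yaml
# values.yaml (hypothetical file name); mirrors --set service.type=LoadBalancer
service:
  type: LoadBalancer
```

You would then pass the file during installation, for example: helm install oceanbase-dashboard ob-operator/oceanbase-dashboard --version=0.2.1 -f values.yaml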
The following output indicates that OceanBase Dashboard is successfully deployed.
NAME: oceanbase-dashboard
LAST DEPLOYED: Wed May 8 11:04:49 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Welcome to OceanBase dashboard
1. After installing the dashboard chart, you can use `port-forward` to expose the dashboard outside like:
> kubectl port-forward -n default services/oceanbase-dashboard-oceanbase-dashboard 18081:80 --address 0.0.0.0
then you can visit the dashboard on http://$YOUR_SERVER_IP:18081
2. Use the following command to get password for default admin user.
> echo $(kubectl get -n default secret oceanbase-dashboard-user-credentials -o jsonpath='{.data.admin}' | base64 -d)
Log in as default account:
Username: admin
Password: <Get from the above command>
After OceanBase Dashboard is deployed, the Kubernetes cluster needs some time to pull the required images. You can run the following command to check whether OceanBase Dashboard has been installed:
kubectl get deployment oceanbase-dashboard-oceanbase-dashboard
The output is as follows. If the value of the READY column is 1/1, the installation has been completed. In this case, you can perform subsequent operations.
NAME READY UP-TO-DATE AVAILABLE AGE
oceanbase-dashboard-oceanbase-dashboard 1/1 1 1 2m10s
Access OceanBase Dashboard
The default login account for OceanBase Dashboard is admin. You can run the following command, which is the second command in the output returned after OceanBase Dashboard is deployed, to obtain the password of the account:
echo $(kubectl get -n default secret oceanbase-dashboard-user-credentials -o jsonpath='{.data.admin}' | base64 -d)
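In this command, the jsonpath expression extracts the admin field from the secret's data, and `base64 -d` decodes it, because Kubernetes stores secret values base64-encoded. The decoding step can be illustrated locally with a hypothetical password value:

```shell
# Secret data is stored base64-encoded; decoding recovers the plaintext.
encoded=$(printf '%s' 'example-admin-pass' | base64)
printf '%s' "$encoded" | base64 -d
# → example-admin-pass
```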
You can access OceanBase Dashboard in any of the following ways:
Access through NodePort: By default, a service of the NodePort type is created for OceanBase Dashboard. You can access OceanBase Dashboard through NodePort.
Access through LoadBalancer: If your cluster supports LoadBalancer services, you can access OceanBase Dashboard through LoadBalancer.
Access through Port Forward: If the preceding two ways cannot be used, you can use Port Forward to temporarily access OceanBase Dashboard.
By default, a service of the NodePort type is created for OceanBase Dashboard. You can run the following command to obtain the NodePort on which the service is exposed. Note that the service name varies with the Helm chart name that you specified. You can obtain the service name from the first command in the output returned after OceanBase Dashboard is deployed. In this topic, the sample service name is oceanbase-dashboard-oceanbase-dashboard.
kubectl get svc oceanbase-dashboard-oceanbase-dashboard
The output is as follows:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oceanbase-dashboard-oceanbase-dashboard NodePort 10.10.10.1 <none> 80:30176/TCP 13m
In a browser, access the NodePort (the second number in the PORT(S) column of the preceding output, 30176 in this example) on the IP address of any Kubernetes node to open the login page of OceanBase Dashboard. The NodePort is dynamically allocated by Kubernetes. You can run the `kubectl get nodes -o wide` command to obtain the node IP addresses.
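The PORT(S) column shows the service port and the NodePort separated by a colon. Extracting the NodePort from that value can be sketched as follows, using the sample value from the output above:

```shell
# Sample PORT(S) value from the service listing (hypothetical output).
ports='80:30176/TCP'
# The NodePort is the number between ':' and '/'.
node_port=${ports#*:}      # strip up to and including ':'  -> 30176/TCP
node_port=${node_port%%/*} # strip from '/' onward          -> 30176
echo "http://<node-ip>:${node_port}"
# → http://<node-ip>:30176
```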
You can choose to create a service of the LoadBalancer type when you install OceanBase Dashboard, or run the following command to change the service type to LoadBalancer after you install OceanBase Dashboard:
kubectl patch -n oceanbase-dashboard svc oceanbase-dashboard-oceanbase-dashboard --type=merge --patch='{"spec": {"type": "LoadBalancer"}}'
After the service type is changed, the Kubernetes cluster allocates an external IP address for OceanBase Dashboard, which you can use to access its login page. After a short wait, run the following command; the EXTERNAL-IP field in the returned service information will have been assigned a value.
kubectl get svc oceanbase-dashboard-oceanbase-dashboard
The output is as follows:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oceanbase-dashboard-oceanbase-dashboard LoadBalancer 10.10.10.1 10.10.10.2 80:18082/TCP 1d5h
Enter http://10.10.10.2:18082 in a browser to access OceanBase Dashboard. In this example, EXTERNAL-IP is 10.10.10.2 and PORT is 18082. In practice, you need to replace the IP address and port number based on the actual situation.
If you cannot expose OceanBase Dashboard as a NodePort service (for example, because the node's service port is unavailable), and your cluster does not support LoadBalancer services, you can run the `kubectl port-forward` command to temporarily expose OceanBase Dashboard on a specified port of the current server. For example, the following command exposes OceanBase Dashboard on port 18081 of the server where you run it:
kubectl port-forward -n default services/oceanbase-dashboard-oceanbase-dashboard 18081:80 --address 0.0.0.0
You can use a browser on another computer to access port 18081 of the current server to go to the login page of OceanBase Dashboard. If you run the preceding command on your local computer, you can enter http://127.0.0.1:18081 in a browser to access OceanBase Dashboard.
View monitoring metrics
After you log in to OceanBase Dashboard, click Cluster or Tenant in the left-side pane to view the monitoring information of clusters or tenants, as shown in the following figures.
Note
Apart from performance metric monitoring for clusters and tenants, OceanBase Dashboard also provides other features, such as cluster management, tenant management, backup management, and endpoint connection, to facilitate the O&M of OceanBase clusters.