This topic describes how to use ob-operator to create an OceanBase cluster.
Preparations
Configure labels for Kubernetes nodes
ob-operator determines the distribution of OBServer nodes based on the nodeSelector settings in the obcluster.yaml configuration file of the OceanBase cluster. Therefore, you must first configure labels for the Kubernetes nodes.
You can run the following command to set a label for a node:
kubectl label node <node_name> <label_key>=<label_value>
# for example
kubectl label node node1 ob.zone=zone1
You can obtain node names by running the kubectl get node command.
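To verify that the labels have taken effect, list the nodes together with their labels:
kubectl get node --show-labels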
Deploy local-path-provisioner
When you use ob-operator to deploy OceanBase Database, you must create PersistentVolumeClaims (PVCs) to store data of OceanBase Database. We recommend that you use local storage for better performance. In this topic, local-path-provisioner is used to manage PVCs.
Run the following command to deploy local-path-provisioner:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
For more information, see local-path-provisioner.
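You can verify the deployment by checking the storage class and the provisioner pod, both of which are created by the manifest above:
kubectl get storageclass local-path
kubectl get pods -n local-path-storage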
Deploy OceanBase Database
You can define an OceanBase cluster by using a YAML configuration file. You can modify the configuration file provided by ob-operator for custom deployment of the OceanBase cluster. Run the following command to obtain the obcluster.yaml configuration file:
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/obcluster.yaml
The following example shows the content of the configuration file.
apiVersion: cloud.oceanbase.com/v1
kind: OBCluster
metadata:
  name: ob-test
  namespace: obcluster
spec:
  imageRepo: oceanbase/oceanbase-cloud-native
  tag: 4.1.0.0-100000192023032010
  imageObagent: oceanbase/obagent:1.2.0
  clusterID: 1
  topology:
    - cluster: cn
      zone:
        - name: zone1
          region: region1
          nodeSelector:
            ob.zone: zone1
          replicas: 1
        - name: zone2
          region: region1
          nodeSelector:
            ob.zone: zone2
          replicas: 1
        - name: zone3
          region: region1
          nodeSelector:
            ob.zone: zone3
          replicas: 1
  parameters:
    - name: log_disk_size
      value: "40G"
  resources:
    cpu: 2
    memory: 10Gi
    storage:
      - name: data-file
        storageClassName: "local-path"
        size: 50Gi
      - name: data-log
        storageClassName: "local-path"
        size: 50Gi
      - name: log
        storageClassName: "local-path"
        size: 30Gi
      - name: obagent-conf-file
        storageClassName: "local-path"
        size: 1Gi
    volume:
      name: backup
      nfs:
        server: ${nfs_server_address}
        path: /opt/nfs
        readOnly: false
The parameters in the configuration file are described as follows:
imageRepo: the image repository of OceanBase Database.
tag: the tag of the OceanBase Database image.
imageObagent: the image of OBAgent, which collects monitoring data of the OceanBase cluster.
cluster: the cluster name. To deploy the OceanBase cluster in the Kubernetes cluster, set this parameter to the same value as the --cluster-name startup parameter of ob-operator, which defaults to cn.
nodeSelector: the node selection strategy of the zone, based on the Kubernetes node labels configured during the preparations.
parameters: the custom parameters of OceanBase Database. Specify them as needed.
cpu: the number of CPU cores for the pod. We recommend a value of at least 2; a smaller value may cause system exceptions.
memory: the memory size of the pod. We recommend a value of at least 10Gi; a smaller value may cause system exceptions.
data-file: the size and storageClassName of the data directory of OceanBase Database. We recommend a size of at least three times the memory size.
data-log: the size and storageClassName of the clog directory of OceanBase Database. We recommend a size of at least three times the memory size.
log: the size and storageClassName of the log directory of the observer process. We recommend a size greater than 10Gi.
obagent-conf-file: the size of the configuration file directory of OBAgent. A moderate value of around 1Gi is sufficient.
volume: the storage space for backup data. If backup is not required, you can leave this unconfigured. However, you cannot add this configuration after the cluster is created, so plan ahead.
After you modify the configuration file, run the following command to deploy the OceanBase cluster:
kubectl apply -f obcluster.yaml
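If the command fails because the obcluster namespace referenced in the configuration file does not exist, create it first and retry (your ob-operator installation may have created it already):
kubectl create namespace obcluster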
View the deployment result
Run the following command to check whether the OceanBase cluster is deployed:
kubectl get obcluster ob-test -o yaml -n obcluster
If the status is Ready in the returned result, the cluster is deployed.
...
status:
  status: Ready
  topology:
    - cluster: cn
      clusterStatus: Ready
      lastTransitionTime: "2023-07-12T07:10:33Z"
      zone:
        - availableReplicas: 1
          expectedReplicas: 1
          name: zone1
          region: region1
          zoneStatus: Ready
        - availableReplicas: 1
          expectedReplicas: 1
          name: zone2
          region: region1
          zoneStatus: Ready
        - availableReplicas: 1
          expectedReplicas: 1
          name: zone3
          region: region1
          zoneStatus: Ready
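While you wait for the cluster to reach the Ready state, you can also watch the observer pods being created in the obcluster namespace:
kubectl get pods -n obcluster -w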
Deploy ODP
You can deploy OceanBase Database Proxy (ODP) by customizing the YAML configuration files provided by ob-operator. Run the following commands to download the ODP configuration files:
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/obproxy/deployment.yaml
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/obproxy/service.yaml
The content of the deployment.yaml file is as follows:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: obproxy
  namespace: obcluster
spec:
  selector:
    matchLabels:
      app: obproxy
  replicas: 2
  template:
    metadata:
      labels:
        app: obproxy
    spec:
      containers:
        - name: obproxy
          image: oceanbase/obproxy-ce:4.1.0.0-7
          ports:
            - containerPort: 2883
              name: "sql"
            - containerPort: 2884
              name: "prometheus"
          env:
            - name: APP_NAME
              value: helloworld
            - name: OB_CLUSTER
              value: ob-test
            - name: RS_LIST
              value: $RS_LIST
          resources:
            limits:
              memory: 2Gi
              cpu: "1"
The environment variables are described as follows:
APP_NAME: the app name of ODP.
OB_CLUSTER: the name of the OceanBase cluster to which ODP connects.
RS_LIST: the RootService list of the OceanBase cluster, in the format of ${ip1}:${port1};${ip2}:${port2};${ip3}:${port3}. You can use either of the following methods to obtain the addresses for RS_LIST:
Method 1: Connect to the OceanBase cluster and run the show parameters like 'rootservice_list' command.
Method 2: Run kubectl get RootService rs-${cluster_name} -n ${namespace} -o yaml to obtain all RootService addresses. Replace ${cluster_name} and ${namespace} in the command with the actual OceanBase cluster name and namespace.
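If you have obtained the addresses, one convenient way to fill in $RS_LIST is to render the manifest with envsubst before applying it. This is a sketch that assumes envsubst from GNU gettext is installed; the address string below is a placeholder for your actual RootService list:
# replace the placeholder with your actual RootService addresses
export RS_LIST='10.0.0.1:2882;10.0.0.2:2882;10.0.0.3:2882'
# substitute only $RS_LIST and apply the rendered manifest
envsubst '$RS_LIST' < deployment.yaml | kubectl apply -f -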
The service.yaml file enables two ports, one for SQL connection and the other for monitoring data collection. The content of the service.yaml file is as follows:
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: obproxy-service
  namespace: obcluster
spec:
  type: NodePort
  selector:
    app: obproxy
  ports:
    - name: "sql"
      port: 2883
      targetPort: 2883
      nodePort: 30083
    - name: "prometheus"
      port: 2884
      targetPort: 2884
      nodePort: 30084
After you modify the configuration files, run the following commands to deploy ODP:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
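You can confirm that the ODP pods are running by filtering on the app=obproxy label defined in deployment.yaml:
kubectl get pods -n obcluster -l app=obproxy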
Connect to the OceanBase cluster
We recommend that you use ODP to connect to OceanBase Database. After you deploy OceanBase Database and ODP, you can run the following command to obtain the service endpoint of ODP:
kubectl get svc ${servicename} -n ${namespace}
# for example
kubectl get svc obproxy-service -n obcluster
## output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
obproxy-service NodePort xxx.xxx.xxx.xxx <none> 2883:30083/TCP,2884:30084/TCP 1m
You can run any of the following commands to connect to the cluster by using the cluster IP address or the node port:
# use the cluster IP without a password
obclient -hxxx.xxx.xxx.xxx -P2883 -uroot@sys oceanbase -A -c
# use the cluster IP with a password
obclient -hxxx.xxx.xxx.xxx -P2883 -uroot@sys -p oceanbase -A -c
# use the node port without a password
obclient -h<node_ip> -P30083 -uroot@sys oceanbase -A -c
# use the node port with a password
obclient -h<node_ip> -P30083 -uroot@sys -p oceanbase -A -c
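After you connect, a quick way to confirm that all OBServer nodes are active is to query the server list in the sys tenant. The DBA_OB_SERVERS view is available in OceanBase Database V4.x:
SELECT SVR_IP, ZONE, STATUS FROM oceanbase.DBA_OB_SERVERS;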
Monitor OceanBase Database
Deploy Prometheus
You can deploy Prometheus by customizing the configuration files provided by ob-operator. Run the following commands to download the Prometheus configuration files:
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/prometheus/cluster-role.yaml
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/prometheus/cluster-role-binding.yaml
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/prometheus/configmap.yaml
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/prometheus/deployment.yaml
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/prometheus/service.yaml
Prometheus selects monitoring data sources based on the service and port names in Kubernetes. You can filter the data sources by using a regular expression. See the following example:
# Configurations in configmap.yaml
scrape_configs:
  - job_name: 'obagent-monitor-basic'
    kubernetes_sd_configs:
      - role: endpoints
    metrics_path: '/metrics/ob/basic'
    relabel_configs:
      - source_labels: [__meta_kubernetes_endpoints_name, __meta_kubernetes_endpoint_port_name]
        regex: svc-monitor-ob-test;monagent
        action: keep
  - job_name: 'obagent-monitor-extra'
    kubernetes_sd_configs:
      - role: endpoints
    metrics_path: '/metrics/ob/extra'
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_pod_container_port_name]
        regex: svc-monitor-ob-test;monagent
        action: keep
  - job_name: 'obagent-monitor-host'
    kubernetes_sd_configs:
      - role: endpoints
    metrics_path: '/metrics/node/host'
    relabel_configs:
      - source_labels: [__meta_kubernetes_endpoints_name, __meta_kubernetes_endpoint_port_name]
        regex: svc-monitor-ob-test;monagent
        action: keep
  - job_name: 'proxy-monitor'
    kubernetes_sd_configs:
      - role: endpoints
    metrics_path: '/metrics'
    relabel_configs:
      - source_labels: [__meta_kubernetes_endpoints_name, __meta_kubernetes_endpoint_port_name]
        regex: obproxy-service;prometheus
        action: keep
Observe the following notes:
Services and ports:
OceanBase Database: ob-operator creates a service based on the name of the deployed OceanBase cluster. The service is named in the format of svc-monitor-${obcluster_name} and is used to obtain the exporter endpoint. If you specified a different name for the OceanBase cluster during deployment, you must modify the corresponding configurations based on the actual cluster name.
ODP: you must configure the service name and port name for ODP.
Request paths of monitoring metrics:
Monitoring metrics of OceanBase Database: /metrics/ob/basic and /metrics/ob/extra.
Monitoring metrics of ODP: /metrics. ODP supports exposing monitoring metrics over the Prometheus protocol.
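Before you deploy Prometheus, you can spot-check that an exporter endpoint responds. For example, the ODP metrics endpoint is reachable through the node port defined in service.yaml:
curl http://<node_ip>:30084/metrics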
After you modify the configuration files, run the following commands to deploy Prometheus:
kubectl apply -f cluster-role.yaml
kubectl apply -f cluster-role-binding.yaml
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Verify the deployment of Prometheus
Enter the address of Prometheus in the address bar of your browser and press Enter. On the page that appears, choose Status > Targets and verify whether proxy-monitor, obagent-monitor-host, obagent-monitor-basic, and obagent-monitor-extra are all in the UP state.
Click Graph and enter a PromQL expression to query the endpoint status.
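For example, the built-in up metric, filtered by the job names defined in configmap.yaml, shows whether each scrape target is reachable:
up{job=~"obagent-monitor-.*|proxy-monitor"}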
Deploy Grafana
You can deploy Grafana by customizing the configuration files provided by ob-operator. Run the following commands to download the Grafana configuration files:
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/grafana/configmap.yaml
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/grafana/pvc.yaml
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/grafana/deployment.yaml
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/grafana/service.yaml
Key configuration information of Grafana, including the service address of Prometheus, is specified by using the configmap.yaml file. If Prometheus is deployed based on custom configurations, you must specify the actual service address.
# Key configurations in configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: obcluster
data:
  prometheus.yaml: |-
    {
        "apiVersion": 1,
        "datasources": [
            {
                "access": "proxy",
                "editable": true,
                "name": "prometheus",
                "orgId": 1,
                "type": "prometheus",
                "url": "http://svc-prometheus.obcluster.svc:8080",
                "version": 1,
                "isDefault": true
            }
        ]
    }
Set url in the datasources section to the service address of Prometheus.
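If you are unsure of the address, you can list the services in the namespace and compose the URL in the form http://<service_name>.<namespace>.svc:<port>:
kubectl get svc -n obcluster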
After you modify the configuration files, run the following commands to deploy Grafana:
kubectl apply -f configmap.yaml
kubectl apply -f pvc.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Verify the deployment of Grafana
Enter the address of Grafana in the address bar of your browser and press Enter. On the page that appears, log on as the admin user with the default password admin. You will be prompted to change the password upon your first logon. By default, the Grafana configurations include dashboards for OceanBase Database. You can visit the following URL to view a dashboard:
http://${node_ip}:${node_port}/d/oceanbase
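If the node port of the Grafana service is not reachable from your machine, you can instead forward Grafana's default port 3000 locally. The deployment name grafana below is an assumption; use the name defined in your deployment.yaml:
kubectl port-forward -n obcluster deployment/grafana 3000:3000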
You can view the host metrics and ODP monitoring dashboards by adding the corresponding dashboard ID to the URL:
Host: 15216
ODP: 15354
Note
Monitoring data of ODP will be displayed only after ODP receives actual requests.