OceanBase Deployer (OBD) supports the deployment of Prometheus and Grafana since V1.6.0. This topic describes how to add GUI-based monitoring for a deployed cluster.
This topic covers three scenarios. Refer to the one that matches the actual conditions of your cluster.
Note
The configuration examples in this topic are for reference only. For detailed configurations, go to the /usr/obd/example directory and view the examples for the different components.
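For example, you can list the bundled sample configuration files with the following command (the file names vary with the OBD version):
ls /usr/obd/example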
Scenario 1: OBAgent is not deployed in the cluster
To add GUI-based monitoring for a cluster in which OBAgent is not deployed, you must create a cluster and deploy OBAgent, Prometheus, and Grafana in the cluster.
OBAgent is configured separately to collect the monitoring information of OceanBase Database. The configuration file declares that Prometheus depends on OBAgent and that Grafana depends on Prometheus.
Sample configuration file:
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
obagent:
  servers:
    # Please don't use hostname; only IP addresses are supported
    - 10.10.10.1
    - 10.10.10.2
    - 10.10.10.3
  global:
    # The working directory for OBAgent. OBAgent is started under this directory. This is a required field.
    home_path: /home/admin/obagent
    # The port of the monitor agent. The default port number is 8088.
    monagent_http_port: 8088
    # The port of the manager agent. The default port number is 8089.
    mgragent_http_port: 8089
    # Log path. The default value is log/monagent.log.
    log_path: log/monagent.log
    # The log level of the manager agent.
    mgragent_log_level: info
    # The maximum size of a manager agent log file, in megabytes. The default value is 30.
    mgragent_log_max_size: 30
    # The retention period for manager agent logs, in days. The default value is 30.
    mgragent_log_max_days: 30
    # The maximum number of manager agent log files. The default value is 15.
    mgragent_log_max_backups: 15
    # The log level of the monitor agent.
    monagent_log_level: info
    # The maximum size of a monitor agent log file, in megabytes. The default value is 200.
    monagent_log_max_size: 200
    # The retention period for monitor agent logs, in days. The default value is 30.
    monagent_log_max_days: 30
    # The maximum number of monitor agent log files. The default value is 15.
    monagent_log_max_backups: 15
    # Username for HTTP authentication. The default value is admin.
    http_basic_auth_user: admin
    # Password for HTTP authentication.
    http_basic_auth_password: ******
    # Monitor password for OceanBase Database. The default value is empty. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as ocp_agent_monitor_password in oceanbase-ce.
    monitor_password: ******
    # The SQL port for the observer. The default value is 2881. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as mysql_port in oceanbase-ce.
    sql_port: 2881
    # The RPC port for the observer. The default value is 2882. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as rpc_port in oceanbase-ce.
    rpc_port: 2882
    # Cluster name for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as appname in oceanbase-ce.
    cluster_name: obcluster
    # Cluster ID for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as cluster_id in oceanbase-ce.
    cluster_id: 1
    # The redo dir for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as redo_dir in oceanbase-ce.
    ob_log_path: /home/admin/observer/store
    # The data dir for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as data_dir in oceanbase-ce.
    ob_data_path: /home/admin/observer/store
    # The work directory for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as home_path in oceanbase-ce.
    ob_install_path: /home/admin/observer
    # The log path for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as {home_path}/log in oceanbase-ce.
    observer_log_path: /home/admin/observer/log
    # Monitor status for OceanBase Database. active means enabled; inactive means disabled. The default value is active. When you deploy a cluster automatically, OBD decides whether to enable this parameter based on depends.
    ob_monitor_status: active
  10.10.10.1:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone1
  10.10.10.2:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone2
  10.10.10.3:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce in the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone3
prometheus:
  depends:
    - obagent
  servers:
    - 10.10.10.4
  global:
    home_path: /home/admin/prometheus
grafana:
  depends:
    - prometheus
  servers:
    - 10.10.10.4
  global:
    home_path: /home/admin/grafana
    login_password: ******
For more information about the parameters in the configuration file, see Configuration files. After you modify the configuration file, run the following commands to deploy and start the new cluster:
obd cluster deploy <new deploy name> -c new_config.yaml
obd cluster start <new deploy name>
After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.
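If you need the access information again later, you can view the deployment with the following command. obd cluster display is a standard OBD command; its output format may vary with the OBD version:
obd cluster display <new deploy name>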
Scenario 2: OBAgent is deployed in the cluster
To add GUI-based monitoring for a cluster in which OBAgent is deployed, you must create a cluster and deploy Prometheus and Grafana in the cluster.
In this scenario, you cannot declare that Prometheus depends on OBAgent, so you must associate them manually. Open the conf/prometheus_config/prometheus.yaml file in the installation directory of OBAgent in the existing cluster, and copy the corresponding configuration into the config parameter in the global section of the Prometheus settings.
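To view the content to copy, you can, for example, print the file on an OBAgent node. The path below assumes the OBAgent home_path used in this topic:
cat /home/admin/obagent/conf/prometheus_config/prometheus.yaml
Sample configuration file: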
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
prometheus:
  servers:
    - 10.10.10.4
  global:
    # The working directory for Prometheus. Prometheus is started under this directory. This is a required field.
    home_path: /home/admin/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s
      rule_files:
        - "rules/*rules.yaml"
      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          static_configs:
            - targets:
                - 10.10.10.1:8088
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          static_configs:
            - targets:
                - 10.10.10.1:8088
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          static_configs:
            - targets:
                - 10.10.10.1:8088
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          static_configs:
            - targets:
                - 10.10.10.1:8088
grafana:
  servers:
    - 10.10.10.4
  depends:
    - prometheus
  global:
    home_path: /home/admin/grafana
    login_password: ****** # Grafana login password. The default value is 'oceanbase'.
For more information about the parameters in the configuration file, see Configuration files. In the preceding sample configuration file, the username and password of basic_auth must be the same as http_basic_auth_user and http_basic_auth_password in the configuration file of OBAgent.
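For example, if OBAgent uses the default username admin, each scrape job must carry matching credentials. The following minimal sketch uses placeholder passwords:
# In the OBAgent configuration of the existing cluster:
http_basic_auth_user: admin
http_basic_auth_password: ******
# In each Prometheus scrape job:
basic_auth:
  username: admin   # must equal http_basic_auth_user
  password: ******  # must equal http_basic_auth_password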
After you modify the configuration file, run the following command to deploy a new cluster:
obd cluster deploy <new deploy name> -c new_config.yaml
After the deployment is completed, copy the conf/prometheus_config/rules directory in the installation directory of OBAgent to the installation directory of Prometheus.
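For example, assuming the home_path values used in this topic and that Prometheus runs on 10.10.10.4, a command similar to the following copies the rules; adjust the user, hosts, and paths to your environment:
scp -r /home/admin/obagent/conf/prometheus_config/rules admin@10.10.10.4:/home/admin/prometheus/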
Run the following command to start the new cluster:
obd cluster start <new deploy name>
After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.
Notice
In the scrape_configs section of the Prometheus configuration, localhost:9090 must be modified to match the actual listening address of Prometheus, that is, the address and port of the server where Prometheus is deployed. If authentication is enabled for Prometheus, basic_auth must also be specified.
If the OBAgent nodes of the existing cluster change, you must run the obd cluster edit-config <new deploy name> command to synchronize the content of the conf/prometheus_config/prometheus.yaml file in the installation directory of OBAgent.
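For example, if Prometheus listens on 10.10.10.4:9090 as in the sample above, the prometheus job could be modified as follows. This is a sketch; add basic_auth only if authentication is enabled for Prometheus:
- job_name: prometheus
  metrics_path: /metrics
  scheme: http
  static_configs:
    - targets:
        - '10.10.10.4:9090'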
Scenario 3: Monitor multiple clusters and dynamically synchronize OBAgent changes
To enable Prometheus to collect the monitoring information of multiple clusters or dynamically synchronize OBAgent changes, you can make a few changes on the basis of scenario 2.
Specifically, replace static_configs in the Prometheus configuration with file_sd_configs so that Prometheus obtains and synchronizes the information about the OBAgent nodes from files. In the following example, all .yaml files in the targets directory under the installation directory (home_path) of Prometheus are collected.
Note
The targets directory is created in the installation directory of Prometheus only if the related parameters are configured for OBAgent in the configuration file of the existing cluster. For more information, see Modify the configurations of a monitored cluster.
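For reference, the files that OBAgent writes to the targets directory follow the standard Prometheus file_sd format. A hypothetical example with the three OBAgent nodes from this topic:
- targets:
    - 10.10.10.1:8088
    - 10.10.10.2:8088
    - 10.10.10.3:8088
Sample configuration file: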
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
prometheus:
  servers:
    - 10.10.10.4
  global:
    # The working directory for Prometheus. Prometheus is started under this directory. This is a required field.
    home_path: /home/admin/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s
      rule_files:
        - "rules/*rules.yaml"
      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
grafana:
  servers:
    - 10.10.10.4
  depends:
    - prometheus
  global:
    home_path: /home/admin/grafana
    login_password: ****** # Grafana login password. The default value is 'oceanbase'.
For more information about the parameters in the configuration file, see Configuration files. In the preceding sample configuration file, the username and password of basic_auth must be the same as http_basic_auth_user and http_basic_auth_password in the configuration file of OBAgent.
After you modify the configuration file, run the following command to deploy a new cluster:
obd cluster deploy <new deploy name> -c new_config.yaml
After the deployment is completed, copy the conf/prometheus_config/rules directory in the installation directory of OBAgent to the installation directory of Prometheus.
Run the following command to start the new cluster:
obd cluster start <new deploy name>
After you deploy the new cluster, go to the Grafana page as prompted. At this point, you cannot yet view the monitoring information of the monitored clusters; you must first modify the OBAgent configurations of the monitored clusters.
Modify the configurations of a monitored cluster
To create the targets directory in the installation directory of Prometheus, you must run the obd cluster edit-config <deploy name> command against the monitored cluster to modify its configuration file. Specifically, add the target_sync_configs parameter to the configuration file and point it to the targets directory in the installation directory of Prometheus. By default, the user settings of the current cluster are used. If the user settings on the server where Prometheus is installed differ from those in the configuration file of the current cluster, configure them as shown in the following example.
obagent:
  servers:
    # Please don't use hostname; only IP addresses are supported
    - 10.10.10.1
    - 10.10.10.2
    - 10.10.10.3
  global:
    ...
    target_sync_configs:
      - host: 10.10.10.4
        target_dir: /home/admin/prometheus/targets
        # username: your username
        # password: your password if need
        # key_file: your ssh-key file path if need
        # port: your ssh port, default 22
        # timeout: ssh connection timeout (second), default 30
...
After you modify the configuration file, restart the cluster as prompted. Then, go to the Grafana page and view the monitoring information of the existing cluster.
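For example, assuming the monitored cluster is named ob_test, the sequence would be similar to the following; the exact follow-up command is shown in the OBD prompt after you save the configuration:
obd cluster edit-config ob_test
obd cluster restart ob_test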
Notice
In the scrape_configs section of the Prometheus configuration, localhost:9090 must be modified to match the actual listening address of Prometheus, that is, the address and port of the server where Prometheus is deployed. If authentication is enabled for Prometheus, basic_auth must also be specified.
The HTTP usernames and passwords that Prometheus uses for collection must be consistent across all OBAgents. If they are inconsistent, split the collection into separate scrape jobs.
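As a minimal sketch of such a split, you could define one scrape job per credential group, each reading its own target files; the group names, directories, and credentials below are hypothetical:
- job_name: ob_basic_group1
  basic_auth:
    username: user1   # credentials of the first OBAgent group
    password: ******
  metrics_path: /metrics/ob/basic
  scheme: http
  file_sd_configs:
    - files:
        - targets/group1/*.yaml
- job_name: ob_basic_group2
  basic_auth:
    username: user2   # credentials of the second OBAgent group
    password: ******
  metrics_path: /metrics/ob/basic
  scheme: http
  file_sd_configs:
    - files:
        - targets/group2/*.yaml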