
OceanBase Deployer

V3.1.0 Community Edition

Add GUI-based monitoring for an existing cluster

Last Updated: 2025-03-21 08:41:19
What is on this page
Scenario 1: OBAgent is not deployed in the cluster
Scenario 2: OBAgent is deployed in the cluster
Scenario 3: Monitor multiple clusters and dynamically synchronize OBAgent changes
Modify the configurations of a monitored cluster


OceanBase Deployer (obd) has supported the deployment of Prometheus and Grafana since V1.6.0. This topic describes how to add GUI-based monitoring for a deployed cluster.

Three scenarios are covered. Choose the one that matches the current state of your cluster.

Note

The configuration examples in this topic are for reference only. For detailed configurations, see the examples for the different components in the /usr/obd/example directory.

Scenario 1: OBAgent is not deployed in the cluster

To add GUI-based monitoring for a cluster in which OBAgent is not deployed, you must create a cluster and deploy OBAgent, Prometheus, and Grafana in the cluster.

OBAgent is configured separately to collect monitoring data from OceanBase Database. The configuration file declares that Prometheus depends on OBAgent, and that Grafana depends on Prometheus.
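Stripped down to just the dependency declarations, the chain looks like this (a fragment for orientation, not a complete configuration file):

```yaml
obagent:
  servers: [...]        # collects OceanBase Database metrics
prometheus:
  depends:
    - obagent           # Prometheus scrapes the OBAgent targets
  servers: [...]
grafana:
  depends:
    - prometheus        # Grafana reads from Prometheus
  servers: [...]
```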

Sample configuration file:

# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
obagent:
  servers:
    # Do not use hostnames; only IP addresses are supported.
    - 10.10.10.1
    - 10.10.10.2
    - 10.10.10.3
  global:
    # The working directory for obagent. obagent is started under this directory. This is a required field.
    home_path: /home/admin/obagent
    # The port of monitor agent. The default port number is 8088.
    monagent_http_port: 8088
    # The port of manager agent. The default port number is 8089.
    mgragent_http_port: 8089
    # Log path. The default value is log/monagent.log.
    log_path: log/monagent.log
    # The log level of manager agent.
    mgragent_log_level: info
    # The maximum size of the manager agent log, in megabytes. The default value is 30.
    mgragent_log_max_size: 30
    # Expiration time for manager agent logs. The default value is 30 days.
    mgragent_log_max_days: 30
    # The maximum number for manager agent log files. The default value is 15.
    mgragent_log_max_backups: 15
    # The log level of monitor agent.
    monagent_log_level: info
    # The maximum size of the monitor agent log, in megabytes. The default value is 200.
    monagent_log_max_size: 200
    # Expiration time for monitor agent logs. The default value is 30 days.
    monagent_log_max_days: 30
    # The maximum number for monitor agent log files. The default value is 15.
    monagent_log_max_backups: 15
    # Username for HTTP authentication. The default value is admin.
    http_basic_auth_user: admin
    # Password for HTTP authentication. The default value is a random password.
    # http_basic_auth_password: ******
    # Monitor password for OceanBase Database. The default value is empty. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the ocp_agent_monitor_password in oceanbase-ce.
    monitor_password: ******
    # The SQL port for observer. The default value is 2881. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the mysql_port in oceanbase-ce.
    sql_port: 2881
    # The RPC port for observer. The default value is 2882. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the rpc_port in oceanbase-ce.
    rpc_port: 2882
    # Cluster name for OceanBase Database. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the appname in oceanbase-ce.
    cluster_name: obcluster
    # Cluster ID for OceanBase Database. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the cluster_id in oceanbase-ce.
    cluster_id: 1
    # The redo dir for OceanBase Database. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the redo_dir in oceanbase-ce.
    ob_log_path: /home/admin/observer/store
    # The data dir for OceanBase Database. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the data_dir in oceanbase-ce.
    ob_data_path: /home/admin/observer/store
    # The work directory for OceanBase Database. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the home_path in oceanbase-ce.
    ob_install_path: /home/admin/observer
    # The log path for OceanBase Database. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is {home_path}/log in oceanbase-ce.
    observer_log_path: /home/admin/observer/log
    # Monitor status for OceanBase Database. Active is to enable; inactive is to disable. The default value is active. When you deploy a cluster automatically, obd decides whether to enable this parameter based on depends.
    ob_monitor_status: active
  10.10.10.1:
    # Zone name for your observer. The default value is zone1. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone1
  10.10.10.2:
    # Zone name for your observer. The default value is zone1. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone2
  10.10.10.3:
    # Zone name for your observer. The default value is zone1. When a depends exists, obd gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone3
prometheus:
  depends:
    - obagent
  servers:
    - 10.10.10.4
  global:
    home_path: /home/admin/prometheus
grafana:
  depends:
    - prometheus
  servers:
    - 10.10.10.4
  global:
    home_path: /home/admin/grafana
    login_password: ******

For more information about the parameters in the configuration file, see Configuration files. After you modify the configuration file, run the following command to deploy and start a new cluster:

obd cluster deploy <new deploy name> -c new_config.yaml
obd cluster start <new deploy name>

After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.

Scenario 2: OBAgent is deployed in the cluster

To add GUI-based monitoring for a cluster in which OBAgent is deployed, you must create a cluster and deploy Prometheus and Grafana in the cluster.

In this scenario, Prometheus cannot be declared to depend on OBAgent, so you must associate them manually. Open the conf/prometheus_config/prometheus.yaml file in the installation directory of OBAgent in the existing cluster, and copy the corresponding configuration to the config parameter in the global section of the Prometheus settings. Sample configuration file:

# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
prometheus:
  servers:
    - 10.10.10.4
  global:
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /home/admin/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s

      rule_files:
        - "rules/*rules.yaml"

      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          static_configs:
            - targets:
                - 10.10.10.1:8088
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          static_configs:
            - targets:
                - 10.10.10.1:8088
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          static_configs:
            - targets:
                - 10.10.10.1:8088
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          static_configs:
            - targets:
                - 10.10.10.1:8088
grafana:
  servers:
    - 10.10.10.4
  depends:
    - prometheus
  global:
    home_path: /home/admin/grafana
    login_password: ****** # Grafana login password. The default value is 'oceanbase'.

For more information about the parameters in the configuration file, see Configuration files. In the preceding sample configuration file, the username and password of basic_auth must be the same as those of http_basic_auth_xxx in the configuration file of OBAgent.
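The matching rule in the preceding paragraph can be sketched as a small check. This is illustrative code, not part of obd; the dictionaries below mirror the YAML structure shown above, and the credential values are placeholders:

```python
# Sketch: verify that every scrape job's basic_auth matches OBAgent's
# http_basic_auth_user / http_basic_auth_password (placeholder values).
obagent_auth = {"username": "admin", "password": "secret"}

scrape_configs = [
    {"job_name": "prometheus"},  # self-scrape job, no basic_auth needed
    {"job_name": "node", "basic_auth": {"username": "admin", "password": "secret"}},
    {"job_name": "ob_basic", "basic_auth": {"username": "admin", "password": "wrong"}},
]

def mismatched_jobs(configs, agent_auth):
    """Return names of jobs whose basic_auth is set but differs from OBAgent's."""
    return [job["job_name"] for job in configs
            if "basic_auth" in job and job["basic_auth"] != agent_auth]

print(mismatched_jobs(scrape_configs, obagent_auth))  # → ['ob_basic']
```

Any job reported here would fail authentication against OBAgent once the cluster starts.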

After you modify the configuration file, run the following command to deploy a new cluster:

obd cluster deploy <new deploy name> -c new_config.yaml

After the deployment is completed, copy the conf/prometheus_config/rules directory in the installation directory of OBAgent to the installation directory of Prometheus.

Run the following command to start the new cluster:

obd cluster start <new deploy name>

After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.

Notice

  1. In the prometheus job under scrape_configs, localhost:9090 must be changed to the actual listening address of Prometheus, that is, the address and port of the server where Prometheus is deployed, as specified in the Prometheus configurations. If authentication is enabled for Prometheus, basic_auth must also be specified.

  2. If the OBAgent nodes of the existing cluster change, you must run the obd cluster edit-config <new deploy name> command to synchronize the changes from the conf/prometheus_config/prometheus.yaml file in the installation directory of OBAgent.
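For the first point above, a self-scrape job adjusted for the real listening address, with basic_auth added in case authentication is enabled for Prometheus, might look like this (the address and credentials are placeholders):

```yaml
scrape_configs:
  - job_name: prometheus
    metrics_path: /metrics
    scheme: http
    basic_auth:               # only needed if Prometheus authentication is enabled
      username: ******
      password: ******
    static_configs:
      - targets:
          - '10.10.10.4:9090' # the actual Prometheus listening address and port
```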

Scenario 3: Monitor multiple clusters and dynamically synchronize OBAgent changes

To enable Prometheus to collect the monitoring information of multiple clusters, or to dynamically synchronize OBAgent changes, you can build on scenario 2 with a few changes.

Specifically, replace static_configs in the Prometheus configurations with file_sd_configs to obtain and synchronize the information about the OBAgent nodes. In the following example, Prometheus collects all .yaml files in the targets directory under its installation directory (home_path).

Note

The targets directory will be created in the installation directory of Prometheus only if related parameters are configured for OBAgent in the configuration file of the existing cluster. For more information, see Modify the configurations of a monitored cluster.

# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
prometheus:
  servers:
    - 10.10.10.4
  global:
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /home/admin/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s

      rule_files:
        - "rules/*rules.yaml"

      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          file_sd_configs:
            - files:
              - targets/*.yaml
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          file_sd_configs:
            - files:
              - targets/*.yaml
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          file_sd_configs:
            - files:
              - targets/*.yaml
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          file_sd_configs:
            - files:
              - targets/*.yaml
grafana:
  servers:
    - 10.10.10.4
  depends:
    - prometheus
  global:
    home_path: /home/admin/grafana
    login_password: ****** # Grafana login password. The default value is 'oceanbase'.

For more information about the parameters in the configuration file, see Configuration files. In the preceding sample configuration file, the username and password of basic_auth must be the same as those of http_basic_auth_xxx in the configuration file of OBAgent.

After you modify the configuration file, run the following command to deploy a new cluster:

obd cluster deploy <new deploy name> -c new_config.yaml

After the deployment is completed, copy the conf/prometheus_config/rules directory in the installation directory of OBAgent to the installation directory of Prometheus.

Run the following command to start the new cluster:

obd cluster start <new deploy name>

After you deploy the new cluster, go to the Grafana page as prompted. At this point, no monitoring information is shown yet: you must first modify the OBAgent configurations of the monitored clusters.

Modify the configurations of a monitored cluster

To have the targets directory created in the installation directory of Prometheus, run the obd cluster edit-config <deploy name> command to modify the configuration file of each monitored cluster. Specifically, add the target_sync_configs parameter and point it to the targets directory in the installation directory of Prometheus. By default, the user settings of the current cluster are used. If the user settings on the server where Prometheus is installed differ from those in the configuration file of the current cluster, configure them as shown in the example.

obagent:
  servers:
    # Do not use hostnames; only IP addresses are supported.
    - 10.10.10.1
    - 10.10.10.2
    - 10.10.10.3
  global:
    ...
    target_sync_configs:
      - host: 10.10.10.4
        target_dir: /home/admin/prometheus/targets
    #    username: your username
    #    password: your password if need
    #    key_file: your ssh-key file path if need
    #    port: your ssh port, default 22
    #    timeout: ssh connection timeout (second), default 30
    ...

After you modify the configuration file, restart the cluster as prompted. Then, go to the Grafana page and view the monitoring information of the existing cluster.
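Once target_sync_configs takes effect, OBAgent maintains target files in the targets directory of Prometheus, which file_sd_configs then picks up. As a rough illustration of the mechanism (the exact file names and layout that OBAgent generates are an assumption), a file in the standard Prometheus file-based service discovery format could be produced like this:

```python
import os
import tempfile

def render_target_file(hosts, port=8088):
    """Render a minimal file_sd YAML body: one target group listing OBAgent endpoints."""
    lines = ["- targets:"]
    for host in hosts:
        lines.append(f"    - '{host}:{port}'")
    return "\n".join(lines) + "\n"

# Stand-in for /home/admin/prometheus/targets; the paths here are illustrative.
targets_dir = tempfile.mkdtemp()
body = render_target_file(["10.10.10.1", "10.10.10.2", "10.10.10.3"])
with open(os.path.join(targets_dir, "obagent.yaml"), "w") as f:
    f.write(body)

print(body)
```

Because Prometheus watches targets/*.yaml, adding or removing an OBAgent host only requires updating such a file; no Prometheus restart is needed.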

Notice

  1. In the prometheus job under scrape_configs, localhost:9090 must be changed to the actual listening address of Prometheus, that is, the address and port of the server where Prometheus is deployed, as specified in the Prometheus configurations. If authentication is enabled for Prometheus, basic_auth must also be specified.

  2. The HTTP usernames and passwords that Prometheus uses for collection must be consistent across all OBAgents. If they differ, split the scrape jobs so that each job uses a single set of credentials.
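For the second point above, splitting means one scrape job per credential set. A hedged sketch, assuming two monitored clusters whose OBAgents use different credentials and whose target files are synchronized into separate subdirectories (the per-cluster directory split is an assumption; target_dir in target_sync_configs controls where each cluster's files land):

```yaml
scrape_configs:
  - job_name: ob_basic_cluster_a
    basic_auth:
      username: user_a        # credentials of cluster A's OBAgents
      password: ******
    metrics_path: /metrics/ob/basic
    scheme: http
    file_sd_configs:
      - files:
        - targets/cluster_a/*.yaml
  - job_name: ob_basic_cluster_b
    basic_auth:
      username: user_b        # credentials of cluster B's OBAgents
      password: ******
    metrics_path: /metrics/ob/basic
    scheme: http
    file_sd_configs:
      - files:
        - targets/cluster_b/*.yaml
```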
