Diagnosis and Tuning with OceanBase: How to Perform End-to-end Diagnostics with obdiag




Distributed databases inherently come with operational complexities, especially due to their intricate call chains. When timeout issues occur, it can be challenging for O&M engineers to quickly determine whether the root cause lies within the database components or the network. Traditionally, engineers have relied on their experience and system logs to troubleshoot these issues. This blog will share how to perform end-to-end diagnostics for OceanBase using obdiag.

Introducing obdiag

obdiag is a CLI diagnostic tool designed for OceanBase Database. It scans and collects information such as logs, SQL audit records, and process stack information of OceanBase. You may deploy your OceanBase cluster by using OceanBase Control Platform (OCP) or OceanBase Deployer (OBD), or deploy it manually based on the OceanBase documentation. Regardless of the deployment mode, you can use obdiag to collect diagnostic information with a few commands. obdiag is now officially open source.

The obdiag team has compiled its experience with OceanBase diagnosis and tuning and has begun releasing a series of tutorial articles. This article explores the mechanisms behind the end-to-end diagnostics feature of obdiag and provides a detailed usage guide.

End-to-End Diagnostics in OceanBase 4.x

To enhance diagnostic efficiency, OceanBase Database V4.0 introduced a new end-to-end diagnostic mechanism whose output is recorded in trace.log. This feature traces the essential information of a user SQL request as it passes through the components and stages of the database call chain. The collected data is then presented in a visual format, allowing users to quickly identify the source of a problem and accurately diagnose issues in internal components.

End-to-end diagnosis covers two main data flow paths:

●  The first path starts with a request from the application, which is then transmitted through a client (such as JDBC or OCI) to OceanBase Database Proxy (ODP), and forwarded to the OBServer nodes, with the final result returned to the application.

●  The second path involves a request sent directly from the application through the client to the OBServer nodes, with the result returned directly.

The purpose of end-to-end diagnostics is to locate issues within all components along these two paths.


The trace files are recorded independently on the servers where ODP and the OBServer nodes run. The end-to-end trace information of OBClient is not recorded on the client machine; instead, it is transmitted to ODP for recording.

Users connect to OceanBase through a client. A request sent through the client may be routed through ODP to OBServer nodes, or it may reach OBServer nodes directly. With the end-to-end diagnostics feature, O&M engineers can use the procedures in the DBMS_MONITOR package to control whether end-to-end tracing is enabled for an application, as well as how much detail the tracing information records.

The logs for end-to-end diagnostics, also known as trace logs, are written according to the data access path. If the database is accessed through ODP, the trace information is recorded in both the ODP and OBServer log files. If OBServer nodes are accessed directly, the trace information is recorded only in the OBServer log files.

Each trace log file is capped at 256 MB. Once a file is full, a new trace.log is created and the previous file is archived; the number of archived files is controlled by the relevant parameters. By collecting and analyzing the trace logs, operators can track the end-to-end execution time of each transaction or SQL statement, along with other related information, to help locate issues.
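
To see the rotation in action, you can list the trace log files on a node. Here is a minimal sketch, assuming the default log directory under a home_path of /root/observer, as in the sample configuration later in this article:

# Archived files sit next to the active trace.log in ${home_path}/log.
ls -lh /root/observer/log/trace.log*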

The command obdiag analyze flt_trace can help operators analyze the trace.log files distributed across nodes more efficiently.
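
For example, once you have obtained a flt_trace_id (see Step 1 below), a single command performs the whole collection-and-analysis pass:

obdiag analyze flt_trace --flt_trace_id 00060aa3-d607-f5f2-328b-388e17f687cb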


Design of obdiag end-to-end diagnostics

Working mechanisms of the end-to-end diagnostics feature of obdiag

The main architecture relies on the centralized collection mode of obdiag. When a user initiates end-to-end diagnostics, obdiag collects trace.log files from the nodes and then performs centralized analysis and processing of the collected data.

Server 1 --> search and filter the logs related to the specified flt_trace_id --> return the filtered logs to the node where obdiag is deployed --+
Server 2 --> search and filter the logs related to the specified flt_trace_id --> return the filtered logs to the node where obdiag is deployed --+--> aggregate the logs obtained from all nodes and generate a trace tree based on the hierarchical (parent-child) relationships between spans
Server N --> search and filter the logs related to the specified flt_trace_id --> return the filtered logs to the node where obdiag is deployed --+

[Figure: sequence diagram of the end-to-end diagnostics]
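
Conceptually, the per-node collection step is similar to the following manual sketch (hypothetical hosts and paths; obdiag automates this and then merges the per-node results into the trace tree):

# For each node, filter the trace.log entries that match the target
# flt_trace_id and pull the matches back to the machine running obdiag.
for host in xx.xx.xx.1 xx.xx.xx.2 xx.xx.xx.3; do
  ssh "$host" "grep '00060aa3-d607-f5f2-328b-388e17f687cb' /root/observer/log/trace.log*" \
    > "flt_logs_${host}.txt"
done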


How to perform obdiag end-to-end diagnostics

Versions of each component that support the end-to-end diagnostics feature:

●  OBServer >= 4.0.0.0

●  ODP (OBProxy) >= 4.0.0

●  OceanBase JDBC driver >= 2.4.0

●  obdiag >= 1.5.0


Step 0 (Optional): Reset Tenant Trace

If you want all tracing information to be recorded and printed during end-to-end diagnostics, you can reset the tenant trace first.

After connecting to the OBServer node, execute the following SQL commands:

-- Disable tracing
call dbms_monitor.ob_tenant_trace_disable();
-- Record and print all timing information for the current tenant
call dbms_monitor.ob_tenant_trace_enable(1, 1, 'ALL');
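
Keep in mind that the 'ALL' record policy traces every request and adds logging overhead on a busy tenant. After the diagnosis is done, you can disable tracing again; a sketch using obclient (placeholders follow the configuration parameters described in Step 2):

# Disable tenant tracing once the traces you need have been captured.
obclient -h<db_host> -P<port> -u<user> -e \
  "call dbms_monitor.ob_tenant_trace_disable();"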


Step 1: Find the ID of a suspected slow SQL statement

If you suspect that an SQL statement is slow, you can query the gv$ob_sql_audit view to obtain its flt_trace_id. Here is an example:

OceanBase(root@test)>select query_sql, flt_trace_id from oceanbase.gv$ob_sql_audit where query_sql like 'select * from test';
+----------------------------------+--------------------------------------+
| query_sql                        | flt_trace_id                         |
+----------------------------------+--------------------------------------+
| select * from test | 00060aa3-d607-f5f2-328b-388e17f687cb |
+----------------------------------+--------------------------------------+
1 row in set (0.001 sec)

The result indicates that the flt_trace_id of the suspected SQL statement is 00060aa3-d607-f5f2-328b-388e17f687cb.
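
If you do not know the exact SQL text, you can instead rank recent statements by execution time. Here is a sketch, assuming obclient is installed and using the sys tenant credentials from the configuration in Step 2:

# Show the ten slowest recent statements together with their flt_trace_id.
obclient -h<db_host> -P<port> -uroot@sys -e "
  SELECT query_sql, elapsed_time, flt_trace_id
  FROM oceanbase.gv\$ob_sql_audit
  ORDER BY elapsed_time DESC
  LIMIT 10;"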

You can also obtain the flt_trace_id by searching the trace.log file of ODP or the OBServer node. Here is an example:

head trace.log

[2024-06-28 22:20:07.242229] [489640][T1_L0_G0][T1][YF2A0BA2DA7E-00060BEC28627BEF-0-0] {"trace_id":"00060bec-275e-9832-e730-7c129f2182ac","name":"close_das_task","id":"00060bec-2a20-bf9e-56c9-724cb467f859","start_ts":1701958807240606,"end_ts":1701958807240607,"parent_id":"00060bec-2a20-bb5f-e03a-5da01aa3308b","is_follow":false}

The result indicates that the flt_trace_id of the suspected SQL statement is 00060bec-275e-9832-e730-7c129f2182ac.
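
To skim which traces a node has recorded most recently, a quick filter over the same file also works. A sketch, run from the node's log directory:

# List the most recent trace ids in trace.log; uniq collapses adjacent
# duplicates because consecutive spans share the same trace_id.
grep -o '"trace_id":"[0-9a-f-]*"' trace.log | uniq | tail -n 5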


Step 2: Set the configuration file

To configure obdiag, you can create a user-defined configuration file in a custom path, or use the system configuration file, which you do not need to modify in most cases.

User-defined configuration file

You can create or edit a user-defined configuration file by running the obdiag config <option> command. By default, the configuration file is named config.yml and is stored in the ~/.obdiag/ directory. Template configuration files are stored in the ~/.obdiag/example directory.

obdiag config -h <db_host> -u <sys_user> [-p password] [-P port]

The following table describes the parameters:

| Parameter | Required? | Description |
| --- | --- | --- |
| db_host | Yes | The IP address used to connect to the sys tenant of the OceanBase cluster. |
| sys_user | Yes | The username used to connect to the sys tenant of the OceanBase cluster. To avoid permission issues, we recommend that you use root@sys. |
| -p password | No | The password used to connect to the sys tenant of the OceanBase cluster. This parameter is left empty by default. |
| -P port | No | The port used to connect to the OceanBase cluster. Port 2881 is used by default. |


Here are some examples:

# A password is specified.
obdiag config -hxx.xx.xx.xx -uroot@sys -p***** -P2881

# No password is specified.
obdiag config -hxx.xx.xx.xx -uroot@sys -p"" -P2881

When you run the obdiag config command, enter the required information as prompted in interactive mode.

After the command is executed, the new configuration is written to the config.yml file. If the original configuration file contained configuration information, it is backed up to the ~/.obdiag/backup_conf directory.

The following sample code shows a complete configuration, which consists of three parts that can be configured as needed:

# Part 1: Parameters related to OceanBase Cloud Platform (OCP)
ocp:
  login:
    url: http://xx.xx.xx.xxx:xx
    user: ****
    password: ******
# Part 2: Parameters related to the OceanBase cluster.
obcluster:
  ob_cluster_name: test # The cluster name.
  db_host: xx.xx.xx.1 # The IP address of the cluster.
  db_port: 2881 # The default port 2881.
  tenant_sys: # The information of the sys tenant. To avoid permission issues, we recommend that you use root@sys.
    user: root@sys # By default, root@sys is used.
    password: ""
  servers:
    nodes:
      - ip: xx.xx.xx.1
      - ip: xx.xx.xx.2
      - ip: xx.xx.xx.3
    global:
      ssh_username: **** # The logon information. We recommend that you use the same user information specified for the deployment of OBServer nodes.
      ssh_password: **** # If you do not use a password, set it to "".
      # ssh_port: 22 # The SSH port. By default, port 22 is used.
      # ssh_key_file: "" # The path of the SSH key. If you specify the ssh_password parameter, you do not need to specify this parameter.
      # ssh_type: remote # The deployment mode of OBServer nodes. Valid values: remote and docker. Default value: remote. Note that Kubernetes is not supported in docker mode.
      # container_name: xxx # The name of the OBServer container. If you set the ssh_type parameter to docker, you must specify this parameter.

      # The installation directory of OBServer. For example, if the path of the executable program of OBServer is /root/observer/bin/observer,
      # you must set the home_path parameter to /root/observer.
      home_path: /root/observer   
      # data_dir: /root/observer/store # The path of the OBServer data disk. The default path is ${home_path}/store, which is the same as the value of the parameter with the same name in OceanBase Deployer (OBD).
      # redo_dir: /root/observer/store # The path of the OBServer log disk. The default path is ${home_path}/store, which is the same as the value of the parameter with the same name in OBD.
# Part 3: Parameters related to OceanBase Database Proxy (ODP)
obproxy:
  obproxy_cluster_name: obproxy
  servers:
    nodes:
      - ip: xx.xx.xx.4
      - ip: xx.xx.xx.5
      - ip: xx.xx.xx.6
    global:
      ssh_username: **** # The logon information. We recommend that you use the same user information specified for the deployment of ODP.
      ssh_password: **** # If you do not use a password, set it to "".
      # ssh_port: 22 # The SSH port. By default, port 22 is used.
      # ssh_key_file: "" # The path of the SSH key. If you specify the ssh_password parameter, you do not need to specify this parameter.
      # ssh_type: remote # The deployment mode of ODP. Valid values: remote and docker. Default value: remote. Note that Kubernetes is not supported in docker mode.
      # container_name: xxx # The name of the ODP container. If you set the ssh_type parameter to docker, you must specify this parameter.

      # The installation directory of ODP. For example, if the path of the executable program of ODP is /root/obproxy/bin/obproxy, you must set the home_path parameter to /root/obproxy.
      home_path: /root/obproxy

Parameters specified for an individual node overwrite those in the global section.

In the following sample configuration of an OceanBase cluster, parameters of each node are specified under the IP address of that node. If the same parameters are also specified in the global section, the node-level values take precedence.

obcluster:
  ob_cluster_name: test
  db_host: xx.xx.xx.1
  db_port: 2881 # The default port 2881.
  tenant_sys:
    user: root@sys # default root@sys
    password: ""
  servers:
    nodes:
      - ip: xx.xx.xx.1
        ssh_username: ****
        ssh_password: ****1
        home_path: /root/observer1
        data_dir: /root/observer/store1
        redo_dir: /root/observer/store1
      - ip: xx.xx.xx.2
        ssh_username: ****2
        ssh_password: ****2
        home_path: /root/observer2
        data_dir: /root/observer/store2
        redo_dir: /root/observer/store2
      - ip: xx.xx.xx.3
        ssh_username: ****3
        ssh_password: ****3
        home_path: /root/observer3
        data_dir: /root/observer/store3
        redo_dir: /root/observer/store3
    global:
      ssh_port: 22

System configuration file

The system configuration file inner_config.yml is stored in the /usr/local/oceanbase-diagnostic-tool/conf/ directory.

obdiag:
  basic:
    config_path: ~/.obdiag/config.yml # The path of the user-defined configuration file.
    config_backup_dir: ~/.obdiag/backup_conf # The path where the backup of the original configuration file is stored when you run the obdiag config command.
    file_number_limit: 20 # The maximum number of files returned for a collection command on a single remote host.
    file_size_limit: 2G # The maximum size of a file returned for a collection command on a single remote host.
  logger:
    log_dir: ~/.obdiag/log # The path where the execution log file of obdiag is stored.
    log_filename: obdiag.log # The name of the execution log file of obdiag.
    file_handler_log_level: DEBUG # The lowest level of execution logs of obdiag to be recorded.
    log_level: INFO # The execution log level of obdiag.
    mode: obdiag
    stdout_handler_log_level: INFO # The lowest level of obdiag logs to be displayed.
check: # Parameters required for inspection. Usually, you do not need to modify parameters in this section.
  ignore_version: false # Specifies whether to ignore the version of OceanBase Database.
  report:
    report_path: "./check_report/" # The output path of the inspection report.
    export_type: table # The type of the inspection report.
  package_file: "~/.obdiag/check_package.yaml" # The path of the inspection package file.
  tasks_base_path: "~/.obdiag/tasks/" # The basic directory of inspection tasks.
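
Before moving on, you can optionally verify that the configuration lets obdiag reach every node. One way, assuming the inspection feature is available in your obdiag version, is to run a basic inspection, which exercises the SSH and database connections defined above:

# Run obdiag's built-in inspection to confirm node connectivity (sketch).
obdiag check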

Step 3: Run the end-to-end diagnostics command

obdiag analyze flt_trace [options]

The following table describes the options:

| Option | Required? | Data type | Default value | Description |
| --- | --- | --- | --- | --- |
| --flt_trace_id | Yes | String | - | The value of the flt_trace_id field. You can obtain the value by querying the flt_trace_id field in the gv$ob_sql_audit view or by searching the trace.log file. |
| --store_dir | No | String | The current path in which the command is executed. | The local path where the results are stored. |
| --files | No | String | Empty | If you specify this option, the offline log analysis mode is enabled. |
| --top | No | String | 5 | The number of the most time-consuming leaf spans to display in the result. |
| --recursion | No | String | 8 | The maximum number of layers for recursive end-to-end diagnostics. |
| --output | No | String | 60 | The number of result rows to display. The full result is stored in the result file. |
| -c | No | String | ~/.obdiag/config.yml | The path of the configuration file. |

Here is an example:

$ obdiag analyze flt_trace --flt_trace_id 000605b1-28bb-c15f-8ba0-1206bcc08aa3

root node id: 000605b1-28bb-c15f-8ba0-1206bcc08aa3

TOP time-consuming leaf span:
+---+----------------------------------+-------------+---------------------+
| ID| Span Name                        | Elapsed Time|      NODE           |
+---+----------------------------------+-------------+---------------------+
| 18| px_task                          | 2.758 ms    | OBSERVER(xx.xx.xx.1)|
| 5 | pc_get_plan                      | 52 μs       | OBSERVER(xx.xx.xx.1)|
| 16| do_local_das_task                | 45 μs       | OBSERVER(xx.xx.xx.3)|
| 10| do_local_das_task                | 17 μs       | OBSERVER(xx.xx.xx.1)|
| 17| close_das_task                   | 14 μs       | OBSERVER(xx.xx.xx.3)|
+---+----------------------------------+-------------+---------------------+
Tags & Logs:
-------------------------------------
18 - px_task  Elapsed: 2.758 ms
     NODE:OBSERVER(xx.xx.xx.1)
     tags: [{'group_id': 0}, {'qc_id': 1}, {'sqc_id': 0}, {'dfo_id': 1}, {'task_id': 1}]
5 - pc_get_plan  Elapsed: 52 μs
    NODE:OBSERVER(xx.xx.xx.1)
16 - do_local_das_task  Elapsed: 45 μs
     NODE:OBSERVER(xx.xx.xx.3)
10 - do_local_das_task  Elapsed: 17 μs
     NODE:OBSERVER(xx.xx.xx.1)
17 - close_das_task  Elapsed: 14 μs
     NODE:OBSERVER(xx.xx.xx.3)

Details:
+---+----------------------------------+-------------+---------------------+
| ID| Span Name                        | Elapsed Time|  NODE               |
+---+----------------------------------+-------------+---------------------+
| 1 | TRACE                            | -           | -                   |
| 2 | └─com_query_process              | 5.351 ms    | OBPROXY(xx.xx.xx.1) |
| 3 |   └─mpquery_single_stmt          | 5.333 ms    | OBSERVER(xx.xx.xx.1)|
| 4 |     ├─sql_compile                | 107 μs      | OBSERVER(xx.xx.xx.1)|
| 5 |     │ └─pc_get_plan              | 52 μs       | OBSERVER(xx.xx.xx.1)|
| 6 |     └─sql_execute                | 5.147 ms    | OBSERVER(xx.xx.xx.1)|
| 7 |       ├─open                     | 87 μs       | OBSERVER(xx.xx.xx.1)|
| 8 |       ├─response_result          | 4.945 ms    | OBSERVER(xx.xx.xx.1)|
| 9 |       │ ├─px_schedule            | 2.465 ms    | OBSERVER(xx.xx.xx.1)|
| 10|       │ │ ├─do_local_das_task    | 17 μs       | OBSERVER(xx.xx.xx.1)|
| 11|       │ │ ├─px_task              | 2.339 ms    | OBSERVER(xx.xx.xx.2)|
| 12|       │ │ │ ├─do_local_das_task  | 54 μs       | OBSERVER(xx.xx.xx.2)|
| 13|       │ │ │ └─close_das_task     | 22 μs       | OBSERVER(xx.xx.xx.2)|
| 14|       │ │ ├─do_local_das_task    | 11 μs       | OBSERVER(xx.xx.xx.1)|
| 15|       │ │ ├─px_task              | 2.834 ms    | OBSERVER(xx.xx.xx.3)|
| 16|       │ │ │ ├─do_local_das_task  | 45 μs       | OBSERVER(xx.xx.xx.3)|
| 17|       │ │ │ └─close_das_task     | 14 μs       | OBSERVER(xx.xx.xx.3)|
| 18|       │ │ └─px_task              | 2.758 ms    | OBSERVER(xx.xx.xx.1)|
| 19|       │ ├─px_schedule            | 1 μs        | OBSERVER(xx.xx.xx.1)|
| 20|       │ └─px_schedule            | 1 μs        | OBSERVER(xx.xx.xx.1)|
| ..|       ......                     | ...         |  ......             |
+---+----------------------------------+-------------+---------------------+

For more details, please run cmd ' cat analyze_flt_result/000605b1-28bb-c15f-8ba0-1206bcc08aa3.txt '

View the details:

$ cat analyze_flt_result/000605b1-28bb-c15f-8ba0-1206bcc08aa3.txt

root node id: 000605b1-28bb-c15f-8ba0-1206bcc08aa3

TOP time-consuming leaf span:
+---+----------------------------------+-------------+---------------------+
| ID| Span Name                        | Elapsed Time|      NODE           |
+---+----------------------------------+-------------+---------------------+
| 18| px_task                          | 2.758 ms    | OBSERVER(xx.xx.xx.1)|
| 5 | pc_get_plan                      | 52 μs       | OBSERVER(xx.xx.xx.1)|
| 16| do_local_das_task                | 45 μs       | OBSERVER(xx.xx.xx.3)|
| 10| do_local_das_task                | 17 μs       | OBSERVER(xx.xx.xx.1)|
| 17| close_das_task                   | 14 μs       | OBSERVER(xx.xx.xx.3)|
+---+----------------------------------+-------------+---------------------+
Tags & Logs:
-------------------------------------
18 - px_task  Elapsed: 2.758 ms
     NODE:OBSERVER(xx.xx.xx.1)
     tags: [{'group_id': 0}, {'qc_id': 1}, {'sqc_id': 0}, {'dfo_id': 1}, {'task_id': 1}]
5 - pc_get_plan  Elapsed: 52 μs
    NODE:OBSERVER(xx.xx.xx.1)
16 - do_local_das_task  Elapsed: 45 μs
     NODE:OBSERVER(xx.xx.xx.3)
10 - do_local_das_task  Elapsed: 17 μs
     NODE:OBSERVER(xx.xx.xx.1)
17 - close_das_task  Elapsed: 14 μs
     NODE:OBSERVER(xx.xx.xx.3)


Details:

+---+----------------------------------+-------------+---------------------+
| ID| Span Name                        | Elapsed Time|  NODE               |
+---+----------------------------------+-------------+---------------------+
| 1 | TRACE                            | -           | -                   |
| 2 | └─com_query_process              | 5.351 ms    | OBPROXY(xx.xx.xx.1) |
| 3 |   └─mpquery_single_stmt          | 5.333 ms    | OBSERVER(xx.xx.xx.1)|
| 4 |     ├─sql_compile                | 107 μs      | OBSERVER(xx.xx.xx.1)|
| 5 |     │ └─pc_get_plan              | 52 μs       | OBSERVER(xx.xx.xx.1)|
| 6 |     └─sql_execute                | 5.147 ms    | OBSERVER(xx.xx.xx.1)|
| 7 |       ├─open                     | 87 μs       | OBSERVER(xx.xx.xx.1)|
| 8 |       ├─response_result          | 4.945 ms    | OBSERVER(xx.xx.xx.1)|
| 9 |       │ ├─px_schedule            | 2.465 ms    | OBSERVER(xx.xx.xx.1)|
| 10|       │ │ ├─do_local_das_task    | 17 μs       | OBSERVER(xx.xx.xx.1)|
| 11|       │ │ ├─px_task              | 2.339 ms    | OBSERVER(xx.xx.xx.2)|
| 12|       │ │ │ ├─do_local_das_task  | 54 μs       | OBSERVER(xx.xx.xx.2)|
| 13|       │ │ │ └─close_das_task     | 22 μs       | OBSERVER(xx.xx.xx.2)|
| 14|       │ │ ├─do_local_das_task    | 11 μs       | OBSERVER(xx.xx.xx.1)|
| 15|       │ │ ├─px_task              | 2.834 ms    | OBSERVER(xx.xx.xx.3)|
| 16|       │ │ │ ├─do_local_das_task  | 45 μs       | OBSERVER(xx.xx.xx.3)|
| 17|       │ │ │ └─close_das_task     | 14 μs       | OBSERVER(xx.xx.xx.3)|
| 18|       │ │ └─px_task              | 2.758 ms    | OBSERVER(xx.xx.xx.1)|
| 19|       │ ├─px_schedule            | 1 μs        | OBSERVER(xx.xx.xx.1)|
| 20|       │ └─px_schedule            | 1 μs        | OBSERVER(xx.xx.xx.1)|
| 21|       └─close                    | 70 μs       | OBSERVER(xx.xx.xx.1)|
| 22|         └─end_transaction        | 3 μs        | OBSERVER(xx.xx.xx.1)|
+---+----------------------------------+-------------+---------------------+
Tags & Logs:
-------------------------------------
1 -   
2 - com_query_process  Elapsed: 5.351 ms
    NODE:OBPROXY(xx.xx.xx.1)
    tags: [{'sess_id': 3221487633}, {'action_name': ''}, {'module_name': ''}, {'client_info': ''}, {'receive_ts': 1695108311007659}, {'log_trace_id': 'YA9257F000001-000605B0441954BC-0-0'}]
3 - mpquery_single_stmt  Elapsed: 5.333 ms
    NODE:OBSERVER(xx.xx.xx.1)
4 - sql_compile  Elapsed: 107 μs
    NODE:OBSERVER(xx.xx.xx.1)
    tags: [{'sql_text': 'select /*+parallel(2)*/ count(1) from t1 tt1, t1 tt2'}, {'sql_id': '797B7202BA69D4C2C77C12BFADDC19DC'}, {'database_id': 201001}, {'plan_hash': 150629045171310866}, {'hit_plan': True}]
5 - pc_get_plan  Elapsed: 52 μs
    NODE:OBSERVER(xx.xx.xx.1)
6 - sql_execute  Elapsed: 5.147 ms
    NODE:OBSERVER(xx.xx.xx.1)
7 - open  Elapsed: 87 μs
    NODE:OBSERVER(xx.xx.xx.1)
8 - response_result  Elapsed: 4.945 ms
    NODE:OBSERVER(xx.xx.xx.1)
9 - px_schedule  Elapsed: 2.465 ms
    NODE:OBSERVER(xx.xx.xx.1)
    tags: [{'used_worker_cnt': 0}, {'qc_id': 1}, {'dfo_id': 2147483647}, {'used_worker_cnt': 0}, {'qc_id': 1}, {'dfo_id': 1}]
10 - do_local_das_task  Elapsed: 17 μs
     NODE:OBSERVER(xx.xx.xx.1)
11 - px_task  Elapsed: 2.339 ms
     NODE:OBSERVER(xx.xx.xx.2)
     tags: [{'group_id': 0}, {'qc_id': 1}, {'sqc_id': 0}, {'dfo_id': 0}, {'task_id': 0}]
12 - do_local_das_task  Elapsed: 54 μs
     NODE:OBSERVER(xx.xx.xx.2)
13 - close_das_task  Elapsed: 22 μs
     NODE:OBSERVER(xx.xx.xx.2)
14 - do_local_das_task  Elapsed: 11 μs
     NODE:OBSERVER(xx.xx.xx.1)
15 - px_task  Elapsed: 2.834 ms
     NODE:OBSERVER(xx.xx.xx.3)
     tags: [{'group_id': 0}, {'qc_id': 1}, {'sqc_id': 0}, {'dfo_id': 1}, {'task_id': 0}]
16 - do_local_das_task  Elapsed: 45 μs
     NODE:OBSERVER(xx.xx.xx.3)
17 - close_das_task  Elapsed: 14 μs
     NODE:OBSERVER(xx.xx.xx.3)
18 - px_task  Elapsed: 2.758 ms
     NODE:OBSERVER(xx.xx.xx.1)
     tags: [{'group_id': 0}, {'qc_id': 1}, {'sqc_id': 0}, {'dfo_id': 1}, {'task_id': 1}]
19 - px_schedule  Elapsed: 1 μs
     NODE:OBSERVER(xx.xx.xx.1)
20 - px_schedule  Elapsed: 1 μs
     NODE:OBSERVER(xx.xx.xx.1)
21 - close  Elapsed: 70 μs
     NODE:OBSERVER(xx.xx.xx.1)
22 - end_transaction  Elapsed: 3 μs
     NODE:OBSERVER(xx.xx.xx.1)
     tags: [{'trans_id': 0}]
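
If the machine running obdiag cannot reach the cluster nodes over SSH, you can also analyze trace logs offline with the --files option described above. A sketch, assuming the trace.log files have already been copied into a local directory named ./trace_logs:

# Offline analysis of locally collected trace logs (hypothetical local path).
obdiag analyze flt_trace --flt_trace_id 000605b1-28bb-c15f-8ba0-1206bcc08aa3 \
  --files ./trace_logs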


Additional Resources

Download obdiag

Find the latest version of obdiag for free from the OceanBase Software Center.


obdiag Documentation

Explore comprehensive usage guides and configuration details in the obdiag Documentation.


GitHub Repository

Review the source code, report issues, and contribute to the project on GitHub.
