Performance test

Updated: 2025-08-01 07:16:05

This topic describes how to test the performance of an OceanBase cluster by using OceanBase Deployer (obd) commands.

Note

  • To use obd commands to test the performance of an OceanBase cluster, make sure that the cluster is managed by obd. Otherwise, obd cannot obtain the cluster information. A cluster deployed by using obd is managed by obd by default. For a cluster that was not deployed by using obd, you must first use obd to take over it. For more information, see Use obd to take over a cluster.

  • You can also manually run performance tests by using OceanBase Database tools. For more information, see Performance test.

Prepare the environment

Before testing, prepare the test environment according to the following requirements:

  • Java Development Kit (JDK): Use JDK 1.8u131 or later.

  • Make: Run the yum install make command to install make.

  • GNU Compiler Collection (GCC): Run the yum install gcc command to install GCC.

  • OceanBase Client (OBClient): For more information, see OBClient documentation.

  • OceanBase Database: Deploy OceanBase Database and create a tenant and a user for the test. For more information about how to deploy OceanBase Database, see Deploy an OceanBase cluster on the GUI. You can run the obd cluster tenant create command to create a tenant. For more information, see the obd cluster tenant create section in Cluster commands.

  • IOPS: We recommend that the disk IOPS be above 10,000.
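
The tool prerequisites above can be verified with a quick pre-flight script before you start. This is a minimal sketch; it only checks that each required command is on the PATH and does not verify versions:

```shell
# Quick pre-flight check for the prerequisites listed above.
# Reports whether each required tool is on the PATH; install any that
# are reported MISSING before running the benchmarks.
for tool in java make gcc obclient; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

To also check the JDK version requirement, run `java -version` and confirm the reported version is 1.8u131 or later.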

Notice

  • We recommend that you do not use the obd cluster autodeploy command to deploy a cluster for performance tests. To ensure stability, this command does not maximize resource utilization; for example, it does not allocate all of the available memory. If you use this command, modify the configuration file afterward to maximize resource utilization.

  • The sys tenant is a built-in system tenant used for managing the cluster. Do not use the sys tenant to run the test.
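
If you do use obd cluster autodeploy, the resource-related settings can be raised afterward in the deployment configuration file. The fragment below is an illustrative sketch only: the component name (oceanbase-ce) and parameter names follow common obd configuration files, and the values are placeholders that you should size to your own hardware:

```yaml
oceanbase-ce:
  global:
    # Give the observer process most of the machine's memory for the test.
    memory_limit: 64G
    # Memory reserved for internal use within memory_limit.
    system_memory: 10G
    # Use all physical cores.
    cpu_count: 32
    # Pre-allocated data file size; size it to your disk.
    datafile_size: 200G
```

After editing the configuration file, redeploy or restart the cluster with obd so that the changes take effect.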

Run the TPC-H benchmark

Overview

TPC-H is a business intelligence benchmark developed by the Transaction Processing Performance Council (TPC) to simulate the decision-making process of applications. It is widely used in academia and industry to evaluate the performance of decision support systems. The benchmark comprehensively evaluates the overall business computing capabilities of such systems and imposes high requirements on their vendors. Because of its universal and practical business value, TPC-H is widely used in bank credit and credit card analysis, telecom operation analysis, tax analysis, and decision analysis in the tobacco industry.

The TPC-H benchmark is the successor of TPC-D, a benchmark that TPC developed in 1994 for decision support systems. TPC-H implements a data warehouse in the third normal form (3NF) that contains eight basic relations. The main evaluation metric is the response time of each query, from submission to the return of results. The headline metric is the number of queries executed per hour, QphH@Size, where H stands for queries per hour and Size indicates the scale factor of the database. This metric reflects the query processing capacity of a database system. Because the TPC-H benchmark is modeled on real-world production and operation environments, it can evaluate key performance characteristics that other benchmarks cannot. TPC-H fills a gap in data warehouse testing and encourages database vendors and research institutions to push decision support technology to its limit.
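
As a concrete illustration of the metric, QphH@Size is defined as the geometric mean of the Power and Throughput results at a given scale factor. The snippet below sketches the arithmetic with made-up placeholder numbers; they are not results from any real run:

```shell
# QphH@Size = sqrt(Power@Size * Throughput@Size).
# The two inputs below are made-up placeholders, not measured values.
power=1200.5
throughput=980.3
awk -v p="$power" -v t="$throughput" 'BEGIN { printf "QphH@Size = %.2f\n", sqrt(p * t) }'
```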

Procedure

To use obd to run the TPC-H benchmark on an OceanBase cluster, perform the following steps:

  1. Install obtpch.

    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
    sudo yum install obtpch
    
  2. Create a soft link to obtpch.

    sudo ln -s /usr/tpc-h-tools/tpc-h-tools/ /usr/local/
    
  3. Run the TPC-H benchmark.

    obd test tpch test --tenant=tpch_mysql -s 100 --remote-tbl-dir=/tmp/tpch100
    

    For more information about the obd test tpch command, see the obd test tpch section in Testing commands.

    Take note of the following considerations when you run the test:

    • In this example, test is the name of the cluster under test, and tpch_mysql is the tenant name. You can modify these values as needed. Run the obd cluster list command to query the cluster name, and the obd cluster tenant show <deploy_name> command to query the tenants in the cluster.

    • After you run the obd test tpch command, the system lists the test steps and outputs in detail. A larger data amount requires a longer test time.

    • The remote directory specified by --remote-tbl-dir must have sufficient capacity to store the TPC-H data. We recommend that you use a dedicated disk to store the loaded test data.

    • The obd test tpch command automatically completes all operations, including the generation and transmission of test data, OceanBase Database parameter optimization, data loading, and testing. If an error occurs during the process, you can retry the test by specifying parameters. For example, you can specify the --test-only parameter to directly load data and run the test while skipping data generation and transmission.

    The output is as follows:

    [2024-05-08 17:13:47]: start /home/admin/tmp/db1.sql
    [2024-05-08 17:13:53]: end /home/admin/tmp/db1.sql, cost 5.34s
    [2024-05-08 17:13:53]: start /home/admin/tmp/db2.sql
    [2024-05-08 17:13:53]: end /home/admin/tmp/db2.sql, cost 0.69s
    [2024-05-08 17:13:53]: start /home/admin/tmp/db3.sql
    [2024-05-08 17:13:55]: end /home/admin/tmp/db3.sql, cost 1.29s
    [2024-05-08 17:13:55]: start /home/admin/tmp/db4.sql
    [2024-05-08 17:13:56]: end /home/admin/tmp/db4.sql, cost 1.50s
    [2024-05-08 17:13:56]: start /home/admin/tmp/db5.sql
    [2024-05-08 17:14:00]: end /home/admin/tmp/db5.sql, cost 3.35s
    [2024-05-08 17:14:00]: start /home/admin/tmp/db6.sql
    [2024-05-08 17:14:00]: end /home/admin/tmp/db6.sql, cost 0.26s
    [2024-05-08 17:14:00]: start /home/admin/tmp/db7.sql
    [2024-05-08 17:14:02]: end /home/admin/tmp/db7.sql, cost 1.99s
    [2024-05-08 17:14:02]: start /home/admin/tmp/db8.sql
    [2024-05-08 17:14:05]: end /home/admin/tmp/db8.sql, cost 3.39s
    [2024-05-08 17:14:05]: start /home/admin/tmp/db9.sql
    [2024-05-08 17:14:09]: end /home/admin/tmp/db9.sql, cost 4.19s
    [2024-05-08 17:14:09]: start /home/admin/tmp/db10.sql
    [2024-05-08 17:14:10]: end /home/admin/tmp/db10.sql, cost 0.87s
    [2024-05-08 17:14:10]: start /home/admin/tmp/db11.sql
    [2024-05-08 17:14:11]: end /home/admin/tmp/db11.sql, cost 0.46s
    [2024-05-08 17:14:11]: start /home/admin/tmp/db12.sql
    [2024-05-08 17:14:12]: end /home/admin/tmp/db12.sql, cost 1.43s
    [2024-05-08 17:14:12]: start /home/admin/tmp/db13.sql
    [2024-05-08 17:14:13]: end /home/admin/tmp/db13.sql, cost 1.36s
    [2024-05-08 17:14:13]: start /home/admin/tmp/db14.sql
    [2024-05-08 17:14:14]: end /home/admin/tmp/db14.sql, cost 0.40s
    [2024-05-08 17:14:14]: start /home/admin/tmp/db15.sql
    [2024-05-08 17:14:14]: end /home/admin/tmp/db15.sql, cost 0.47s
    [2024-05-08 17:14:14]: start /home/admin/tmp/db16.sql
    [2024-05-08 17:14:15]: end /home/admin/tmp/db16.sql, cost 0.60s
    [2024-05-08 17:14:15]: start /home/admin/tmp/db17.sql
    [2024-05-08 17:14:16]: end /home/admin/tmp/db17.sql, cost 0.60s
    [2024-05-08 17:14:16]: start /home/admin/tmp/db18.sql
    [2024-05-08 17:14:18]: end /home/admin/tmp/db18.sql, cost 2.29s
    [2024-05-08 17:14:18]: start /home/admin/tmp/db19.sql
    [2024-05-08 17:14:18]: end /home/admin/tmp/db19.sql, cost 0.43s
    [2024-05-08 17:14:18]: start /home/admin/tmp/db20.sql
    [2024-05-08 17:14:19]: end /home/admin/tmp/db20.sql, cost 1.18s
    [2024-05-08 17:14:19]: start /home/admin/tmp/db21.sql
    [2024-05-08 17:14:24]: end /home/admin/tmp/db21.sql, cost 4.24s
    [2024-05-08 17:14:24]: start /home/admin/tmp/db22.sql
    [2024-05-08 17:14:25]: end /home/admin/tmp/db22.sql, cost 0.94s
    Total Cost: 37.26s
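
The per-query timings in a log like the one above can be summarized with standard shell tools. The snippet below is a small helper sketch that sums the cost values; for self-containment it operates on a two-line inline sample rather than a saved log file:

```shell
# Sum the "cost" values printed by obd test tpch.
# A two-line inline sample stands in for the full log shown above.
printf '%s\n' \
  '[2024-05-08 17:13:53]: end /home/admin/tmp/db1.sql, cost 5.34s' \
  '[2024-05-08 17:13:53]: end /home/admin/tmp/db2.sql, cost 0.69s' |
  grep -o 'cost [0-9.]*' |
  awk '{ total += $2; n++ } END { printf "Total: %.2fs over %d queries\n", total, n }'
# Prints: Total: 6.03s over 2 queries
```

To summarize a real run, redirect the obd test tpch output to a file and pipe that file through the same grep and awk stages.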
    

Run the TPC-C benchmark

Overview

TPC-C is an online transaction processing (OLTP) benchmark. It uses a commodity sales model to test an OLTP system. For more information, visit http://www.tpc.org/tpcc/.

Procedure

To use obd to run the TPC-C benchmark on an OceanBase cluster, perform the following steps:

  1. Install obtpcc.

    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
    sudo yum install obtpcc java
    
  2. Run the TPC-C benchmark.

    obd test tpcc test --tenant=tpcc_mysql --warehouses=10 --run-mins=1
    

    For more information about the obd test tpcc command, see the obd test tpcc section in Testing commands.

    Take note of the following considerations when you run the test:

    • In this example, test is the name of the cluster under test, and tpcc_mysql is the tenant name. You can modify these values as needed. Run the obd cluster list command to query the cluster name, and the obd cluster tenant show <deploy_name> command to query the tenants in the cluster.

    • After you run the obd test tpcc command, the system lists the test steps and outputs in detail. A larger data amount requires a longer test time.

    • The obd test tpcc command automatically completes all operations, including the generation and transmission of test data, OceanBase Database parameter optimization, data loading, and testing. If an error occurs during the process, you can retry the test by specifying parameters. For example, you can specify the --test-only parameter to directly load data and run the test while skipping data generation and transmission.

    The output is as follows:

    TPC-C Result
    Measured tpmC (NewOrders)   : 3735.65
    Measured tpmTOTAL           : 8402.25
    Session Start               : 2024-05-09 10:35:53
    Session End                 : 2024-05-09 10:36:54
    Transaction Count           : 8492
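
For reference, the headline TPC-C metric tpmC counts NewOrder transactions per minute of the measurement window. The arithmetic can be sketched as follows; the input numbers are illustrative placeholders, not values from the run above:

```shell
# tpmC = NewOrder transactions / measurement window in minutes.
# Placeholder inputs for illustration only.
neworders=3800
window_secs=60
awk -v n="$neworders" -v s="$window_secs" 'BEGIN { printf "tpmC = %.2f\n", n * 60 / s }'
# Prints: tpmC = 3800.00
```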
    

Run the Sysbench benchmark

Overview

Sysbench is a LuaJIT-based multithreaded benchmark tool that lets you write scripts to test CPU, memory, thread, I/O, and database performance. It is often used to evaluate database workloads under various system parameters. By customizing Lua scripts, you can run the Sysbench benchmark in a variety of business scenarios without modifying its source code. The Sysbench benchmark covers the following aspects:

  • CPU performance

  • Disk I/O performance

  • Scheduler performance

  • Memory allocation and transmission speed

  • POSIX thread performance

  • Database performance (OLTP benchmark)

Procedure

To use obd to run the Sysbench benchmark on an OceanBase cluster, perform the following steps:

  1. Install ob-sysbench.

    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
    sudo yum install ob-sysbench
    
  2. Run the Sysbench benchmark.

    obd test sysbench test --tenant=sysbench_mysql --script-name=oltp_read_only.lua,oltp_write_only.lua,oltp_read_write.lua --table-size=1000000 --threads=32,64,128,256,512,1024 --rand-type=uniform
    

    Take note of the following considerations when you run the test:

    • In this example, test is the name of the cluster under test, and sysbench_mysql is the tenant name. You can modify the values as needed. You can run the obd cluster list command to query the cluster name and the obd cluster tenant show <deploy_name> command to query tenants in the cluster.

    • In this example, default values are used for most parameters in the script. You can modify the parameters based on your business needs. For more information about the obd test sysbench command, see the obd test sysbench section in Testing commands.

    Assume that the Sysbench script is named oltp_read_write.lua and the number of initialized threads is 1024. The output is as follows:

    queries performed:
    read:                            458682
    write:                           61850
    other:                           134686
    total:                           655218
    transactions:                        32742  (526.50 per sec.)
    queries:                             655218 (10536.09 per sec.)
    ignored errors:                      21     (0.34 per sec.)
    reconnects:                          0      (0.00 per sec.)
    General statistics:
    total time:                          62.1862s
    total number of events:              32742
    Latency (ms):
    min:                                  986.50
    avg:                                 1922.86
    max:                                 6202.85
    95th percentile:                     2985.89
    sum:                             62958351.24
    Threads fairness:
    events (avg/stddev):           31.9746/0.16
    execution time (avg/stddev):   61.4828/0.41
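
The per-second rates that Sysbench reports are approximately the corresponding totals divided by the total run time. Using the transaction count and total time from the sample output above:

```shell
# transactions / total time ~= the reported per-second rate.
# Sysbench computes the rate over its own timing window, so the last
# digits can differ slightly from the printed report.
awk 'BEGIN { printf "tps = %.2f\n", 32742 / 62.1862 }'
# Prints: tps = 526.52
```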
    
