The io_adapter_benchmark command lets you measure the performance of read/write operations between an OBServer node and the backup medium. This topic describes how to use the io_adapter_benchmark command.
Introduction
You can use the -h option to query the help information of the io_adapter_benchmark command.
./ob_admin io_adapter_benchmark -h
The output is as follows:
Usage: io_adapter_benchmark command [command args] [options]
commands:
-h, --help display this message.
options:
-d Specifies the test directory.
-s Specifies the authentication information for object storage.
-t Specifies the number of concurrent threads.
-r Specifies the number of times that a single thread executes tasks.
-l Specifies the running duration.
-o Specifies the size of objects.
-n Specifies the number of objects.
-f Specifies the size of data to be read for each read operation or the size of data to be positionally written (pwrite) for each append or multipart write operation.
-p Specifies the task type.
-j Specifies the backup medium type. As the directory structures of Amazon Simple Storage Service (S3) and Huawei Cloud Object Storage Service (OBS) have changed, some operations require this option.
-b Specifies whether to clear the directory before the execution of the task. If you specify `-b 'true'`, the directory is cleared. Otherwise, the directory is not cleared.
-c Specifies whether to clear the directory after the task is completed. If you specify `-c 'true'`, the directory is cleared. Otherwise, the directory is not cleared.
The -p option has the following valid values:
write: Writes data to common files. Each thread writes data to files in a separate directory.
append: Appends data to files.
multi: Uploads data in multiple parts.
read: Reads data from files. You can perform a read task only after data is written.
del: Deletes data. Each thread deletes data from a separate directory. The number of threads for deleting data must be the same as that for writing data.
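These task types are typically chained: read and del tasks consume the objects produced by an earlier write task with matching parameters. The following sketch shows one full cycle; the bucket, host, and credentials are placeholders, and the pairing rules are explained in the examples below.
DIR='oss://oss_bucket/test_path'
AUTH='host=xxx.com&access_id=111&access_key=222'
./ob_admin io_adapter_benchmark -d "$DIR" -s "$AUTH" -o 100 -t 4 -r 10 -p 'write'
./ob_admin io_adapter_benchmark -d "$DIR" -s "$AUTH" -o 100 -t 4 -r 10 -p 'read' -f 100 -n 10  # -n matches the write task's -r
./ob_admin io_adapter_benchmark -d "$DIR" -s "$AUTH" -t 4 -r 10 -p 'del'  # -t and -r match the write task's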
Examples
Write data to common files
./ob_admin io_adapter_benchmark -d 'oss://oss_bucket/test_path' \
-s 'host=xxx.com&access_id=111&access_key=222' \
-o 100 \ # Set the object size to 100 bytes.
-t 4 \ # Set the number of concurrent threads to 4.
-r 10 \ # Set the maximum number of objects that a single thread can write to 10.
-l 100 \ # Set the maximum execution duration to 100s.
-p 'write' \
-b 'true' \ # Optional. Clear the directory before the execution of the task.
-c 'true' # Optional. Clear the directory after the task is completed.
The preceding command performs a performance test by writing data to common files under the oss_bucket/test_path directory. As -b 'true' is specified, the command clears the test_path directory before writing data to the files. As -t 4 and -o 100 are specified, 4 concurrent threads write 100-byte objects to the cleared directory. As -r 10 and -l 100 are specified, the data write stops after each thread writes 10 objects or the total execution duration reaches the upper limit of 100s. As -c 'true' is specified, the command clears the directory after completing the test.
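For reference, the upper bounds implied by these options follow directly from the command: at most -t × -r objects of -o bytes each are written. A quick sanity check in plain shell arithmetic (note that the sample output below was captured with a different configuration, as explained after it):
THREADS=4; RUNS=10; OBJ_SIZE=100
echo "max objects written: $((THREADS * RUNS))"            # 40
echo "max bytes written:   $((THREADS * RUNS * OBJ_SIZE))" # 4000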
The output is as follows:
succ to open, filename=/tmp/ob_admin.log, fd=3, wf_fd=2
succ to open, filename=/tmp/ob_admin_rs.log, fd=4, wf_fd=2
| Task Config|{thread_num:4, max_task_runs:100, time_limit_s:-1, obj_size:2097152, obj_num:-1, fragment_size:-1, is_adaptive:false, type:0}
------------------------------{Testing}------------------------------
| Status|SUCCESS
| Total operation num|400
| Total execution time|16.373251 s
| Total user time|6.110431 s
| Total system time|0.410841 s
| CPU usage for 100MB/s BW|81.515900% per 100MB/s
| Total throughput bytes|838860800
| Total QPS|24.430090
| Per Thread QPS|6.107523
| Total BW|48.860181 MB/s
| Per Thread BW|12.215045 MB/s
| Total Op Time Map|total_entry=400, min_ms=99, th_50_ms=113, th_90_ms=253, th_99_ms=400, th_999_ms=452, max_ms=928
| Open Time Map|Empty Time Map
| Close Op Time Map|Empty Time Map
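This sample output was captured with a different configuration, as shown in its Task Config line (thread_num:4, max_task_runs:100, obj_size:2097152). The summary figures follow from those values: Total QPS is the operation count divided by the execution time, and Total BW is the throughput bytes divided by the execution time. A quick awk cross-check:
awk 'BEGIN {
  ops   = 4 * 100                 # thread_num * max_task_runs = 400
  bytes = ops * 2097152           # = 838860800
  t     = 16.373251               # Total execution time in seconds
  printf "QPS = %.6f\n", ops / t                    # 24.430090
  printf "BW  = %.6f MB/s\n", bytes / (t * 1048576) # 48.860181
}'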
Append data to files
./ob_admin io_adapter_benchmark -d 'oss://home/admin/backup_info' \
-s 'host=xxx.com&access_id=111&access_key=222' \
-o 100 \ # Set the object size to 100 bytes.
-t 4 \ # Set the number of concurrent threads to 4.
-r 10 \ # Set the maximum number of objects that a single thread can write to 10.
-p 'append' \
-f 10 # Append 10 bytes of data to an object each time. In this example, a 100-byte object requires 10 append operations.
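As the comment on -f notes, the number of append operations per object equals -o divided by -f:
OBJ_SIZE=100; FRAGMENT=10
echo "appends per object: $((OBJ_SIZE / FRAGMENT))"  # 10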
The output is as follows:
succ to open, filename=/tmp/ob_admin.log, fd=3, wf_fd=2
succ to open, filename=/tmp/ob_admin_rs.log, fd=4, wf_fd=2
| Task Config|{thread_num:4, max_task_runs:10, time_limit_s:-1, obj_size:2097152, obj_num:-1, fragment_size:1048576, is_adaptive:false, type:1}
------------------------------{Testing}------------------------------
| Status|SUCCESS
| Total operation num|40
| Total execution time|5.685847 s
| Total user time|0.776306 s
| Total system time|0.073172 s
| CPU usage for 100MB/s BW|106.184750% per 100MB/s
| Total throughput bytes|83886080
| Total QPS|7.035012
| Per Thread QPS|1.758753
| Total BW|14.070023 MB/s
| Per Thread BW|3.517506 MB/s
| Total Op Time Map|total_entry=40, min_ms=364, th_50_ms=419, th_90_ms=704, th_99_ms=754, th_999_ms=754, max_ms=963
| Open Time Map|total_entry=40, min_ms=0, th_50_ms=3, th_90_ms=3, th_99_ms=3, th_999_ms=3, max_ms=3
| Close Op Time Map|total_entry=40, min_ms=103, th_50_ms=112, th_90_ms=123, th_99_ms=124, th_999_ms=124, max_ms=125
Upload data in multiple parts
./ob_admin io_adapter_benchmark -d 'oss://home/admin/backup_info' \
-s 'host=xxx.com&access_id=111&access_key=222' \
-o 100 \ # Set the object size to 100 bytes.
-t 4 \ # Set the number of concurrent threads to 4.
-r 10 \ # Set the maximum number of objects that a single thread can write to 10.
-p 'multi' \
-f 10 # Set the data size of each pwrite operation to 10 bytes. In this example, a 100-byte object requires 10 pwrite operations.
The multipart upload implementation in object storage requires each part to be at least 8 MB in size. Therefore, the number of pwrite operations is not necessarily equal to the number of parts.
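Under that constraint, a rough part-count estimate is the object size divided by the 8 MB minimum, rounded up. The sketch below assumes pwrite fragments are buffered until a part reaches the minimum size; this is an assumption for illustration, not documented behavior.
# Hypothetical estimate: fragments are assumed to be buffered into parts
# of at least 8 MB each.
OBJ_SIZE=100                   # bytes, from -o
PART_MIN=$((8 * 1024 * 1024))  # 8 MB minimum part size
echo "estimated parts: $(( (OBJ_SIZE + PART_MIN - 1) / PART_MIN ))"  # 1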
The output is as follows:
succ to open, filename=/tmp/ob_admin.log, fd=3, wf_fd=2
succ to open, filename=/tmp/ob_admin_rs.log, fd=4, wf_fd=2
| Task Config|{thread_num:4, max_task_runs:10, time_limit_s:-1, obj_size:2097152, obj_num:-1, fragment_size:1048576, is_adaptive:false, type:2}
------------------------------{Testing}------------------------------
| Status|SUCCESS
| Total operation num|40
| Total execution time|7.135682 s
| Total user time|0.765570 s
| Total system time|0.095029 s
| CPU usage for 100MB/s BW|107.574875% per 100MB/s
| Total throughput bytes|83886080
| Total QPS|5.605631
| Per Thread QPS|1.401408
| Total BW|11.211262 MB/s
| Per Thread BW|2.802815 MB/s
| Total Op Time Map|total_entry=40, min_ms=314, th_50_ms=411, th_90_ms=762, th_99_ms=833, th_999_ms=833, max_ms=1002
| Open Time Map|total_entry=40, min_ms=43, th_50_ms=50, th_90_ms=57, th_99_ms=99, th_999_ms=99, max_ms=102
| Close Op Time Map|total_entry=40, min_ms=269, th_50_ms=357, th_90_ms=708, th_99_ms=781, th_999_ms=781, max_ms=900
Read data from files
./ob_admin io_adapter_benchmark -d 'oss://home/admin/backup_info' \
-s 'host=xxx.com&access_id=111&access_key=222' \
-o 100 \ # Set the object size to 100 bytes.
-t 4 \ # Set the number of concurrent threads to 4.
-r 10 \ # Set the maximum number of read operations that a single thread can perform to 10.
-p 'read' \
-f 10 \ # Set the data size of each random read operation to 10 bytes. To read full objects, set `-f` to the object size (`-o`).
-j 1 \ # Specify `-j 1` if the backup directory is in the Amazon S3 or Huawei Cloud OBS format.
-n 10 # Set the number of objects that a single thread has written to 10.
Note that you can perform a read task only after performing a write task. Set the -n option of the read task to the same value as the -r option of the write task, and keep the value of the -t option of the read task no greater than that of the write task.
Note
If you have run multiple write commands, set the -n and -r options for the read command based on the latest write command that has the same -d value as the read command.
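One way to keep these options in sync is to record the write task's values in shell variables and reuse them for the read task, as in this sketch (placeholder endpoint and credentials):
WRITE_T=4; WRITE_R=10  # -t and -r of the latest write task on this -d
./ob_admin io_adapter_benchmark -d 'oss://home/admin/backup_info' \
    -s 'host=xxx.com&access_id=111&access_key=222' \
    -o 100 -t "$WRITE_T" -r 10 -p 'read' -f 100 -j 1 -n "$WRITE_R"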
The output is as follows:
succ to open, filename=/tmp/ob_admin.log, fd=3, wf_fd=2
succ to open, filename=/tmp/ob_admin_rs.log, fd=4, wf_fd=2
| Task Config|{thread_num:4, max_task_runs:20, time_limit_s:-1, obj_size:2097152, obj_num:10, fragment_size:1048576, is_adaptive:true, type:3}
------------------------------{Testing}------------------------------
| Status|SUCCESS
| Total operation num|80
| Total execution time|2.970364 s
| Total user time|0.289431 s
| Total system time|0.217354 s
| CPU usage for 100MB/s BW|63.348125% per 100MB/s
| Total throughput bytes|83886080
| Total QPS|26.932726
| Per Thread QPS|6.733182
| Total BW|26.932726 MB/s
| Per Thread BW|6.733182 MB/s
| Total Op Time Map|total_entry=80, min_ms=114, th_50_ms=132, th_90_ms=164, th_99_ms=356, th_999_ms=356, max_ms=389
| Open Time Map|Empty Time Map
| Close Op Time Map|Empty Time Map
Delete data
./ob_admin io_adapter_benchmark -d 'oss://home/admin/backup_info' \
-s 'host=xxx.com&access_id=111&access_key=222' \
-t 4 \ # Set the number of concurrent threads to 4.
-p 'del' \
-j 1 \ # Specify `-j 1` if the backup directory is in the Amazon S3 or Huawei Cloud OBS format.
-r 10 # Keep the value of this option the same as that of the `-r` option of the write task.
After you run the preceding command, each thread deletes data from a separate directory. Therefore, the -t option of the delete task must have the same value as the -t option of the write task.
Note
If you have run multiple write commands, configure the -t and -r options for the delete command based on the latest write command that has the same -d value as the delete command.
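The same variable-driven pattern works for cleanup, with -t and -r mirroring the matching write task:
WRITE_T=4; WRITE_R=10  # -t and -r of the latest write task on this -d
./ob_admin io_adapter_benchmark -d 'oss://home/admin/backup_info' \
    -s 'host=xxx.com&access_id=111&access_key=222' \
    -t "$WRITE_T" -r "$WRITE_R" -p 'del' -j 1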
The output is as follows:
succ to open, filename=/tmp/ob_admin.log, fd=3, wf_fd=2
succ to open, filename=/tmp/ob_admin_rs.log, fd=4, wf_fd=2
| Task Config|{thread_num:4, max_task_runs:20, time_limit_s:-1, obj_size:-1, obj_num:-1, fragment_size:-1, is_adaptive:true, type:4}
------------------------------{Testing}------------------------------
| Status|SUCCESS
| Total operation num|80
| Total execution time|2.136603 s
| Total user time|0.155563 s
| Total system time|0.014127 s
| Total CPU usage|7.942046%
| Total throughput bytes|0
| Total QPS|37.442613
| Per Thread QPS|9.360653
| Total BW|0.000000 MB/s
| Per Thread BW|0.000000 MB/s
| Total Op Time Map|total_entry=80, min_ms=77, th_50_ms=94, th_90_ms=105, th_99_ms=159, th_999_ms=159, max_ms=166
| Open Time Map|Empty Time Map
| Close Op Time Map|Empty Time Map