In the current version of OceanBase Database, worker threads and most background threads are tenant-specific, and network threads are shared. You can configure control groups (cgroups) to control the CPU utilization of tenants.
Background information
Before you configure cgroups, we recommend that you familiarize yourself with the basic concepts of cgroups. For more information, see Overview.
Limitations and considerations
Resource isolation comes at a considerable performance cost. We recommend that you do not use the cgroup feature to isolate tenant resources in the following scenarios:
- Single-tenant scenarios, where the cluster contains only one tenant.
- Scenarios where tenants are interdependent. For example, multiple tenants serve different microservices that have an upstream-downstream relationship with each other.
- Small-scale tenant scenarios, where each tenant has only two or four CPU cores.
If the OBServer node runs Alibaba Cloud Linux, the kernel version must be 4.19 or later for the cgroup feature to be available.
Enabling the cgroup feature degrades the performance of OceanBase Database. Weigh the isolation benefits against the performance loss before you enable it.
Procedure
Step 1: Configure the cgroup system directory
Notice
- You must configure the cgroup system directory before you install OceanBase Database.
- You must obtain the privileges of the root user before you configure the cgroup system directory.
This section describes how to configure the cgroup system directory on one OBServer node. If the OceanBase cluster consists of multiple OBServer nodes, you must configure the cgroup system directory on each OBServer node.
1. Log in to the OBServer node as the admin user.
2. Run the following command to mount the /sys/fs/cgroup directory:
Note: If the /sys/fs/cgroup directory already exists, skip this step.
```shell
[admin@xxx /]$ sudo mount -t tmpfs cgroups /sys/fs/cgroup
```
Here, cgroups is a user-defined name that identifies this mount when you view the mount information.
The mounting result is as follows:
```shell
$ df
Filesystem      1K-blocks       Used   Available Use% Mounted on
/               293601280   28055472   265545808  10% /
/dev/v01d      2348810240 2113955876   234854364  91% /data/1
/dev/v02d      1300234240 1170211208   130023032  91% /data/log1
shm              33554432          0    33554432   0% /dev/shm
/dev/v04d       293601280   28055472   265545808  10% /home/admin/logs
cgroups         395752136          0   395752136   0% /sys/fs/cgroup
```
3. Create a directory named /sys/fs/cgroup/cpu and change its owner. This directory is used for mounting the cpu subsystem later.
Note: If the /sys/fs/cgroup/cpu directory already exists, skip this step.
```shell
[admin@xxx /]$ sudo mkdir /sys/fs/cgroup/cpu
[admin@xxx /]$ sudo chown admin:admin -R /sys/fs/cgroup/cpu
```
4. Create a directory hierarchy named cpu, attach the cpu subsystem to this hierarchy, and mount this hierarchy to the /sys/fs/cgroup/cpu directory.
```shell
[admin@xxx /]$ sudo mount -t cgroup -o cpu cpu /sys/fs/cgroup/cpu
```
5. Create a subdirectory named oceanbase and change its owner to admin.
```shell
[admin@xxx /]$ sudo mkdir /sys/fs/cgroup/cpu/oceanbase
[admin@xxx /]$ sudo chown admin:admin -R /sys/fs/cgroup/cpu/oceanbase
```
6. Allocate CPU and memory resources for the oceanbase directory.
Run the following command to view the mounting status of the cpu, cpuacct, and cpuset subsystems on your machine:
```shell
[admin@xxx /]$ ll /sys/fs/cgroup
```
Perform the corresponding operations based on the mounting status of the subsystems:
- The cpuset subsystem is mounted alongside the cpu and cpuacct subsystems.
In this case, the three subsystems are often mounted to the same directory, as shown in the following example:
```shell
drwxr-xr-x 3 root root  0 Jul 24 2020 blkio
lrwxrwxrwx 1 root root 33 Jul 24 2020 cpu -> /sys/fs/cgroup/cpuset,cpu,cpuacct
lrwxrwxrwx 1 root root 33 Jul 24 2020 cpuacct -> /sys/fs/cgroup/cpuset,cpu,cpuacct
lrwxrwxrwx 1 root root 33 Jul 24 2020 cpuset -> /sys/fs/cgroup/cpuset,cpu,cpuacct
drwxr-xr-x 4 root root  0 Jul 24 2020 cpuset,cpu,cpuacct
```
Run the following commands to allocate CPU and memory resources for the oceanbase directory:
```shell
[admin@xxx /]$ sudo sh -c "echo `cat /sys/fs/cgroup/cpu/cpuset.cpus` > /sys/fs/cgroup/cpu/oceanbase/cpuset.cpus"
[admin@xxx /]$ sudo sh -c "echo `cat /sys/fs/cgroup/cpu/cpuset.mems` > /sys/fs/cgroup/cpu/oceanbase/cpuset.mems"
```
- The cpuset subsystem is mounted separately from the cpu and cpuacct subsystems.
This layout is common in Elastic Compute Service (ECS) environments, as shown in the following example:
```shell
drwxr-xr-x 2 root root 40 Feb 27 15:27 blkio
lrwxrwxrwx 1 root root 11 Feb 27 15:27 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Feb 27 15:27 cpuacct -> cpu,cpuacct
drwxr-xr-x 2 root root 40 Feb 27 15:27 cpu,cpuacct
drwxr-xr-x 2 root root 40 Feb 27 15:27 cpuset
```
In this case, no additional operation is required; proceed to the next step.
7. Run the following command to set the inheritance attribute for subdirectories of the oceanbase directory:
```shell
[admin@xxx /]$ sudo sh -c "echo 1 > /sys/fs/cgroup/cpu/oceanbase/cgroup.clone_children"
```
After the command succeeds, cgroup subdirectories created under the oceanbase directory inherit the attributes of the parent directory.
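For reference, the manual commands in this step can also be chained into a single script. The following is a minimal sketch, not an official tool: it assumes the conventions used above (the admin user, the /sys/fs/cgroup/cpu/oceanbase directory) and must be run with root privileges. It only replays the commands described in this step, with the cpuset check automated.
```shell
#!/bin/bash
# Minimal sketch that replays the Step 1 commands. Run with root privileges.
set -e

# Mount the tmpfs for cgroups only if /sys/fs/cgroup is not already a mount point.
mountpoint -q /sys/fs/cgroup || mount -t tmpfs cgroups /sys/fs/cgroup

# Create and mount the cpu hierarchy only if it does not already exist.
if [ ! -d /sys/fs/cgroup/cpu ]; then
    mkdir /sys/fs/cgroup/cpu
    chown admin:admin -R /sys/fs/cgroup/cpu
    mount -t cgroup -o cpu cpu /sys/fs/cgroup/cpu
fi

# Create the oceanbase directory and hand it over to the admin user.
mkdir -p /sys/fs/cgroup/cpu/oceanbase
chown admin:admin -R /sys/fs/cgroup/cpu/oceanbase

# If cpuset is co-mounted with cpu, copy cpuset.cpus and cpuset.mems from the
# parent so that tasks can be attached to the new directory.
if [ -f /sys/fs/cgroup/cpu/cpuset.cpus ]; then
    cat /sys/fs/cgroup/cpu/cpuset.cpus > /sys/fs/cgroup/cpu/oceanbase/cpuset.cpus
    cat /sys/fs/cgroup/cpu/cpuset.mems > /sys/fs/cgroup/cpu/oceanbase/cpuset.mems
fi

# Let cgroup subdirectories created later inherit the parent's cpuset settings.
echo 1 > /sys/fs/cgroup/cpu/oceanbase/cgroup.clone_children
```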
Step 2: Deploy OceanBase Database
After the cgroup system directory is configured, you can deploy OceanBase Database. For the deployment procedure, see Deploy OceanBase Database.
Step 3: Establish a soft link to OceanBase Database
After OceanBase Database is installed, establish a soft link between the installation directory of OceanBase Database and the cgroup system directory.
1. Log in to the OBServer node as the admin user.
2. Manually establish a soft link between the installation directory of OceanBase Database and the cgroup system directory.
```shell
[admin@xxx /home/admin]$ cd /home/admin/oceanbase/
[admin@xxx /home/admin/oceanbase]$ ln -sf /sys/fs/cgroup/cpu/oceanbase/ cgroup
[admin@xxx /home/admin/oceanbase]$ ln -sf /sys/fs/cgroup/blkio/oceanbase/ cgroup
```
Here, /home/admin/oceanbase/ is the installation directory of OceanBase Database.
The execution result is as follows:
```shell
[admin@xxx /home/admin/oceanbase]$ ll cgroup
lrwxrwxrwx 1 admin admin 29 Dec 8 11:09 cgroup -> /sys/fs/cgroup/cpu/oceanbase/
lrwxrwxrwx 1 admin admin 29 Dec 8 11:09 cgroup -> /sys/fs/cgroup/blkio/oceanbase/
```
3. Restart the observer process.
You must first stop the observer process and then restart it. For more information, see Restart a node.
If the observer process detects that the soft link has been established, it creates a cgroup directory for each tenant under the /sys/fs/cgroup/cpu/oceanbase/ directory.
4. Log in to the sys tenant of the cluster and query the V$OB_CGROUP_CONFIG or GV$OB_CGROUP_CONFIG view for the cgroup configurations on each OBServer node.
```sql
SELECT * FROM oceanbase.GV$OB_CGROUP_CONFIG;
```
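You can also spot-check the result directly on the OBServer node: after the restart, the linked directory should contain one subdirectory per tenant. The following listing is only an optional sanity check, not part of the procedure.
```shell
# List the per-tenant cgroup directories that the observer process created.
[admin@xxx /home/admin/oceanbase]$ ls -l cgroup/
```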
Step 4: Enable the cgroup feature
In OceanBase Database, the cluster-level enable_cgroup parameter specifies whether to enable the cgroup feature for the OBServer nodes in the cluster. The default value is True, which means that the feature is enabled. If the feature has been disabled, perform the following steps to enable it.
1. Log in to the sys tenant of the cluster as the root user.
2. Execute any of the following statements to enable the cgroup feature:
```sql
obclient> ALTER SYSTEM SET enable_cgroup = true;
```
or
```sql
obclient> ALTER SYSTEM SET enable_cgroup = 1;
```
or
```sql
obclient> ALTER SYSTEM SET enable_cgroup = ON;
```
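To confirm that the setting has taken effect, you can query the parameter from the sys tenant, for example:
```sql
obclient> SHOW PARAMETERS LIKE 'enable_cgroup';
```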
What to do next
After you configure the cgroup system directory and enable the cgroup feature, you can, in an emergency, limit the CPU utilization of a tenant by modifying the cpu.cfs_period_us, cpu.cfs_quota_us, and cpu.shares files in the tenant's cgroup directory, as sketched below. In general, we recommend that you do not implement resource isolation in this way.
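As an illustration of the mechanism only: to cap a tenant at roughly two CPU cores in an emergency, you would write to the CFS files in that tenant's directory. The tenant directory name below is a placeholder; use the actual directory that the observer process created under /sys/fs/cgroup/cpu/oceanbase/.
```shell
# Illustrative sketch only. <tenant_dir> is a placeholder for the actual
# per-tenant cgroup directory under /sys/fs/cgroup/cpu/oceanbase/.
TENANT_DIR=/sys/fs/cgroup/cpu/oceanbase/<tenant_dir>

# With a 100 ms scheduling period, a 200 ms quota per period corresponds to
# about 2 cores of CPU time (quota = cores x period).
echo 100000 | sudo tee $TENANT_DIR/cpu.cfs_period_us
echo 200000 | sudo tee $TENANT_DIR/cpu.cfs_quota_us

# cpu.shares is a relative weight that only matters under CPU contention;
# 2048 gives this tenant twice the default weight of 1024, not a hard limit.
echo 2048 | sudo tee $TENANT_DIR/cpu.shares
```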
Instead of modifying the files in the cgroup system directory, we recommend that you call the CREATE_CONSUMER_GROUP subprogram in the DBMS_RESOURCE_MANAGER package to create resource groups for user-level or SQL statement-level resource isolation. For more information, see Configure resource isolation within a tenant.
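For reference, creating a resource group with this package looks roughly like the following in a MySQL-mode tenant. The group name and comment are placeholders, the exact invocation syntax may vary by version and mode, and the follow-up steps (defining a resource plan and mapping users or SQL statements to the group) are covered in Configure resource isolation within a tenant.
```sql
obclient> CALL DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
              CONSUMER_GROUP => 'example_group',
              COMMENT        => 'example group for tenant-internal isolation');
```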