This section describes the hardware setup requirements for the server, including the operating system, BIOS settings, disk mounting, network card settings, and software requirements.
Prepare the operating system
Operating systems supported by OAT/OCP
OAT/OCP can be deployed on the operating systems in the following table.
| Operating system | Supported version | Server type |
|---|---|---|
| Rocky Linux | 9 | x86_64/ARM aarch64 |
| Alibaba Cloud Linux | 2, 3 | x86_64/ARM aarch64 |
| AnolisOS | 8 | x86_64/ARM aarch64 |
| KylinOS | V10, V11 | x86_64/ARM aarch64 |
| UOS | V20 | x86_64/ARM aarch64 |
| NFSChina | 4.0 | x86_64/ARM aarch64 |
| CentOS/RHEL (Red Hat Enterprise Linux) | 7, 8, 9 | x86_64/ARM aarch64 |
| openSUSE | 12SP5 | x86_64/ARM aarch64 |
| Debian | 12 | x86_64/ARM aarch64 |
| openEuler | 20.03 LTS, 22.03 LTS | x86_64/ARM aarch64 |
| NeoKylin OS | V6.0.99, V6.0.100 | x86_64/ARM aarch64 |
| Ubuntu | 22.04 LTS, 24.04 LTS | x86_64/ARM aarch64 |
Operating systems supported by OceanBase Database
OceanBase Database can be installed on the Linux operating systems in the following table.
| Linux operating system | Version | Server architecture |
|---|---|---|
| Rocky Linux | 9 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| Alibaba Cloud Linux | 2, 3 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| AnolisOS | 8 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| KylinOS | V10, V11 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| UOS | V20 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| NFSChina | 4.0 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| Inspur kos | 5.8 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| CentOS / Red Hat Enterprise Linux | 7, 8, 9 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| SUSE Enterprise Linux | 12SP5 | x86_64 (including Hygon) |
| Debian | 12 | x86_64 (including Hygon) |
| openEuler | 20.03 LTS, 22.03 LTS | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| NeoKylin Linux | V6.0.99, V6.0.100 | x86_64 (including Hygon), ARM_64 (Kunpeng, Phytium) |
| Ubuntu | 22.04 LTS, 24.04 LTS | x86_64 (including Hygon) |
Note
- The operating system must be configured with network access and a package manager source (a YUM or Zypper repository).
- The server on which you deploy an OceanBase cluster must be in little-endian mode.
- If you use the Hygon 7490 chip, you must install an operating system that includes patch 4 and is recommended by the Hygon server manufacturer.
BIOS settings for OceanBase servers
Special settings
In an Intel x86 environment:
We recommend that you modify the `/etc/sysctl.conf` configuration file and set the `vm.swappiness` parameter to `0`, as shown in the following code:

```shell
[root@xxx /] $vi /etc/sysctl.conf
vm.swappiness = 0
```

Apply the values in the sysctl.conf configuration file:

```shell
[root@xxx /] $sysctl -p
```
In an AMD or ARM environment, we recommend that you enable NUMA.
In an ARM or Hygon environment, we recommend that you modify the `/etc/sysctl.conf` configuration file and set the `kernel.numa_balancing`, `vm.zone_reclaim_mode`, and `vm.swappiness` parameters to `0`, as shown in the following code:

```shell
[root@xxx /] $vi /etc/sysctl.conf
kernel.numa_balancing = 0
vm.zone_reclaim_mode = 0
vm.swappiness = 0
```

Apply the values in the sysctl.conf configuration file:

```shell
[root@xxx /] $sysctl -p
```
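To confirm that the kernel picked up the values, you can read them back. The following sketch assumes the `sysctl` utility from procps is available; `sysctl -n` prints only the parameter value:

```shell
# Read back the parameters recommended above; each should print 0 once the
# configuration has been applied. The fallback covers kernels where a
# parameter does not exist (for example, kernel.numa_balancing on x86 builds
# without NUMA support).
for p in kernel.numa_balancing vm.zone_reclaim_mode vm.swappiness; do
    printf '%s = %s\n' "$p" "$(sysctl -n "$p" 2>/dev/null || echo 'not available')"
done
```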
Options that must be disabled in BIOS
- Cstate
- Pstate
- EIST
- Power saving
Options that must be enabled in BIOS
- Automatic Power on After Power Loss: Always on
- Intel Virtualization Technology: Enabled
- Hyper-threading: Enabled
- Hardware prefetcher: Enabled
- VT-d: Enabled
- SR-IOV: Enabled
- Turbo Mode: Enabled
- Energy performance: Maximum performance
Note
BIOS settings vary with servers. For more information, see the server user manual.
Disk mounting
Notice
If the capacity of a mount point exceeds 16 TB, only the XFS file system is supported.
The following table describes the disk mounting requirements for the OCP server.
| Mount point | Size | Purpose | File system format |
|---|---|---|---|
| /home | 100 GB~300 GB | Log disk for running components | ext4 or XFS |
| /data/log1 | 3~4 times the size of memory | Log disk for OCP metadata | ext4 or XFS |
| /data/1 | Depends on the size of data to be stored | Data disk for OCP metadata | ext4 or XFS |
| /docker | 200 GB~500 GB | Root directory for Docker | ext4 or XFS |

The following table describes the disk mounting requirements for the OBServer node.

| Mount point | Size | Purpose | File system format |
|---|---|---|---|
| /home | 100 GB~300 GB | Log disk for the observer process | ext4 or XFS |
| /data/log1 | 3~4 times the size of memory | Log disk for the observer process | ext4 or XFS |
| /data/1 | Depends on the size of data to be stored | Data disk for the observer process | ext4 or XFS |

Note
We recommend that the root directory be at least 50 GB in size. If you use LVM, we recommend that you specify the stripe parameters when you create the volume. Example:

```shell
lvcreate -n data -L 3000G obvg --stripes=3 --stripesize=128
```
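The data-disk mount from the table above could then be set up as in the following sketch. The device path `/dev/obvg/data`, the `noatime` option, and the choice of XFS are illustrative placeholders; adapt them to your environment:

```shell
DEV=/dev/obvg/data   # placeholder: the striped LVM volume from the example above
if [ -b "$DEV" ]; then
    # On a provisioned host: format the volume as XFS, mount it at /data/1,
    # and persist the mount across reboots.
    mkfs.xfs "$DEV"
    mkdir -p /data/1
    mount -t xfs "$DEV" /data/1
    echo "$DEV  /data/1  xfs  defaults,noatime  0 0" >> /etc/fstab
else
    # On a host without the volume, only show the fstab entry that would be added.
    echo "$DEV not present; fstab entry would be:"
    echo "$DEV  /data/1  xfs  defaults,noatime  0 0"
fi
```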
Server disk configuration
Notice
Do not separately mount the /var and /opt directories.
If the disk configuration is 480G*2 + 3.84T NVMe*8 and the memory size is 512G:

- Configure two 480G disks in RAID 1 to install the operating system. Mount 50G to the root directory and 350G to `/home`.
- Configure eight 3.84T disks in LVM. Then, mount 2.1T to `/data/log1` as the log disk. Mount the remaining disks to `/data/1` as the data disk. (Select three servers as OCP servers. Mount 800G from `/data/1` to `/docker`.)
If the disk configuration is 480G*2 + 3.84T SATA SSD*8 and the memory size is 512G:

- Configure two 480G disks in RAID 1 to install the operating system. Mount 50G to the root directory and 350G to `/home`.
- Configure two disks in RAID 1 and mount 2.1T to `/data/log1`. Configure the remaining six disks in RAID 5 and mount them to `/data/1` as the data disk. (Select three servers as OCP servers. Mount 800G from `/data/1` to `/docker`.)
If the disk configuration is 480G*2 + 1.92T SATA SSD*6 and the memory size is 768G:

- Configure two 480G disks in RAID 1 to install the operating system. Mount 50G to the root directory and 350G to `/home`.
- Configure three 1.92T disks in RAID 5. Mount 3.1T to `/data/log1` as the log disk. Configure the remaining three 1.92T disks in RAID 5 and mount them to `/data/1` as the data disk. (Select three servers as OCP servers. Mount 800G from `/data/1` to `/docker`.)
If the disk configuration is 480G*2 + 1.6T NVMe*6 and the memory size is 768G:

- Configure two 480G disks in RAID 1 to install the operating system. Mount 50G to the root directory and 350G to `/home`.
- Configure six 1.6T disks in LVM. Mount 3.1T to `/data/log1` as the log disk. Mount the remaining disks to `/data/1` as the data disk. (Select three servers as OCP servers. Mount 800G from `/data/1` to `/docker`.)
If the disk configuration is 960G SATA SSD*10 and the memory size is 384G:

- Configure four 960G disks in RAID 5. Mount 50G to the root directory and 500G to `/home`. Mount the remaining capacity to `/data/log1`.
- Configure six 960G disks in RAID 5 and mount them to `/data/1`. (Select three servers as OCP servers. Mount 800G from `/data/1` to `/docker`.)
If the disk configuration is 480G*2 + 1.6T SATA SSD*6 and the memory size is 384G:

- Configure two 480G disks in RAID 1 to install the operating system. Mount 50G to the root directory and 350G to `/home`.
- Configure three disks in RAID 5 and mount 2.1T to `/data/log1`. Configure the remaining three disks in RAID 5 and mount them to `/data/1` as the data disk. (Select three servers as OCP servers. Mount 800G from `/data/1` to `/docker`.)
NIC settings
We recommend that you configure two 10-GbE NICs:
- Bond the NICs as `bond0`. You can set the bonding mode to `mode1` or `mode4`; we recommend `mode4`. If you set the bonding mode to `mode4`, the switch must support the 802.3ad protocol.
- We recommend that you use `eth0` and `eth1` as the NIC names.
- We recommend that you use the network service instead of NetworkManager.
- We recommend that you do not use team for NIC bonding, because team cannot identify the NIC speed in `/sys/class/net/team0/speed`.
- The NICs in the same bond must be connected to different switches.
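On distributions that use the network service with ifcfg files (for example, CentOS or RHEL 7), a `mode4` bond could look like the sketch below. The file paths, IP address, and `miimon`/`lacp_rate` values are illustrative, not prescriptive:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative values)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast"
BOOTPROTO=static
IPADDR=10.0.0.11   # placeholder address
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat analogously for eth1)
DEVICE=eth0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```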
RAID settings
- Enable cache write back for the RAID card.
- Disable disk cache.
- Enable automatic rebuild.
- For an LSI RAID card, disable the Consistency Check and Patrol Read features. After you replace or upgrade the RAID card or its driver or firmware, disable the Consistency Check and Patrol Read features again.
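With a Broadcom/LSI card managed by the storcli utility, the settings above could be applied roughly as follows. The controller index `/c0` is an assumption, and other cards or tool versions use different commands, so treat this as a sketch and consult your vendor's documentation:

```shell
# Skip gracefully when storcli is not installed, so on machines without the
# tool the commands below serve as reference only.
if ! command -v storcli >/dev/null 2>&1; then
    echo "storcli not found; commands shown for reference only"
else
    storcli /c0/vall set wrcache=wb      # enable cache write back on the RAID card
    storcli /c0/vall set pdcache=off     # disable disk (physical drive) cache
    storcli /c0 set autorebuild=on       # enable automatic rebuild
    storcli /c0 set cc=off               # disable Consistency Check
    storcli /c0 set patrolread=off       # disable Patrol Read
fi
```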
Network ports
Make sure that the following network ports are not occupied by other services:
| Server | Port | Description |
|---|---|---|
| OAT server | 7000 | OAT management console |
| OCP server | 8080 | OCP API/OCP-Server web service port |
| OBServer server | 2881 | OBServer SQL listening port |
| OBServer server | 2882 | OBServer-to-OBServer RPC communication port |
| ODP server | 2883 | ODP listening port |
| ODP server | 2884 | ODP monitoring metrics API listening port |
For more information about the listening ports of other components, see Listening ports of components.
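A quick way to verify that none of these ports is already taken is the following sketch. It assumes the iproute2 `ss` utility; the port list mirrors the table above:

```shell
# Report, for each required port, whether something is already listening on it.
# If `ss` is unavailable the check silently reports the port as free, so this
# is a convenience check, not a guarantee.
for port in 7000 8080 2881 2882 2883 2884; do
    if ss -Hlnt "sport = :$port" 2>/dev/null | grep -q .; then
        echo "port $port is already in use"
    else
        echo "port $port looks free"
    fi
done
```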
Recommendations on I/O schedulers for SSDs
- The default I/O scheduler for NVMe SSDs is `none`, and no adjustment is required.
- We recommend that you set the I/O scheduler of SATA SSDs to `none`.
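The active scheduler can be checked per device under sysfs, and one common way to make the `none` setting persistent for SATA SSDs is a udev rule. The rule file name and the match on the `rotational` attribute are illustrative assumptions:

```shell
# Show the scheduler in use for each block device; the bracketed entry is active.
for q in /sys/block/*/queue/scheduler; do
    [ -r "$q" ] && printf '%s: %s\n' "${q%/queue/scheduler}" "$(cat "$q")"
done

# Illustrative udev rule that pins non-rotational sd* devices to `none`; on a
# real host, write it to e.g. /etc/udev/rules.d/60-io-scheduler.rules.
cat <<'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
EOF
```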
Considerations
OceanBase Database V4.2.5 BP3, V4.2.5 BP4, V4.2.5 BP5, V4.3.5 BP1, V4.3.5 BP2, V4.3.5 BP3, and V4.4.0 require the AVX instruction set in x86 environments. Before an observer process starts, the system checks whether the CPU supports the AVX instruction set. If it does not, the observer process cannot start.
Note
Starting from the following versions, OceanBase Database no longer requires the AVX instruction set to be supported.
- For V4.3.5, starting from V4.3.5 BP4, the AVX instruction set is no longer required.
- For V4.2.5, starting from V4.2.5 BP6, the AVX instruction set is no longer required.
- For V4.4.0, starting from V4.4.1, the AVX instruction set is no longer required.
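Before installing one of the affected versions on x86 hardware, you can check whether the CPU advertises AVX. Linux reports CPU features in `/proc/cpuinfo`; on ARM the x86-style `flags` field is absent, so this check only applies to x86:

```shell
# Report whether the AVX instruction set appears in the CPU feature flags.
if grep -qw avx /proc/cpuinfo 2>/dev/null; then
    echo "AVX supported"
else
    echo "AVX not reported; affected OceanBase versions will fail to start"
fi
```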
