Deployment

Updated on 2025-12-03 09:53:04

What are the system requirements for installing an OceanBase server?

The following table lists the minimum system requirements for installing an OceanBase server.

Server type Quantity Minimum functional configuration Minimum performance configuration
OceanBase Cloud Platform (OCP) server 1 16 cores, 32 GB of memory, and 1.5 TB of storage 32 cores, 128 GB of memory, 1.5 TB of SSD storage, and 10 Gbit/s NICs
OceanBase Database computing server 3 4 cores and 16 GB of memory 32 cores, 256 GB of memory, and 10 Gbit/s NICs

Note

  • For OceanBase Database computing servers, the log disk size must be at least four times the memory size, and the data disk must be large enough to store the target data.
  • For the minimum performance configuration, 10 Gbit/s NICs are required.
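You can check these thresholds on a candidate host before installation. The following is a minimal Python sketch, assuming a Linux host; the mount points /data/log1 and /data/1 are placeholders for your actual log and data disks, not paths mandated by OceanBase.

```python
# Minimal sketch: verify a Linux host against the minimum functional
# configuration for an OceanBase Database computing server (4 cores,
# 16 GB of memory, log disk >= 4x memory). The directories below are
# placeholders for your actual log and data disk mount points.
import os
import shutil

MIN_CORES = 4
MIN_MEM_GB = 16
LOG_DISK_DIR = "/data/log1"   # assumed mount point for the log disk
DATA_DISK_DIR = "/data/1"     # assumed mount point for the data disk

def total_memory_gb() -> float:
    """Read MemTotal from /proc/meminfo (Linux only)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 ** 2)  # kB -> GB
    raise RuntimeError("MemTotal not found in /proc/meminfo")

cores = os.cpu_count() or 0
mem_gb = total_memory_gb()
log_gb = shutil.disk_usage(LOG_DISK_DIR).total / (1024 ** 3)
data_gb = shutil.disk_usage(DATA_DISK_DIR).total / (1024 ** 3)

print(f"CPU cores: {cores} (need >= {MIN_CORES})")
print(f"Memory:    {mem_gb:.1f} GB (need >= {MIN_MEM_GB})")
print(f"Log disk:  {log_gb:.1f} GB (need >= 4 x memory = {4 * mem_gb:.1f} GB)")
print(f"Data disk: {data_gb:.1f} GB (must be large enough for your target data)")
```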

To ensure high availability of the OCP management service, deploy three management servers. Also implement load balancing through hardware or software, such as F5 or Alibaba Cloud SLB, or use the ob_dns software load-balancing component provided by OceanBase Database for a three-node deployment.

The following table lists the Linux operating systems that support OceanBase Database.

Linux operating system Version Server architecture
Alibaba Cloud Linux 2 x86_64 or AArch64
AnolisOS 8.6 and later x86_64 (including Hygon) or AArch64 (Kunpeng and Phytium)
KylinOS V10 x86_64 (including Hygon) or AArch64 (Kunpeng and Phytium)
Unity Operating System (UOS) V20 x86_64 (including Hygon) or AArch64 (Kunpeng and Phytium)
NFSChina V4.0 and later x86_64 (including Hygon) or AArch64 (Kunpeng and Phytium)
Inspur KOS V5.8 x86_64 (including Hygon) or AArch64 (Kunpeng and Phytium)
CentOS/Red Hat Enterprise Linux V7.x and V8.x x86_64 (including Hygon) or AArch64 (Kunpeng and Phytium)
SUSE Enterprise Linux Server 12SP3 and later x86_64 (including Hygon)
Debian V8.3 and later x86_64 (including Hygon)
openEuler 20.03 LTS SP1/SP2 x86_64 (including Hygon) or AArch64 (Kunpeng and Phytium)
LinxOS V6.0.99 and V6.0.100 x86_64 (including Hygon) or AArch64 (Kunpeng and Phytium)

Note

Before you deploy OceanBase Database on an operating system, configure the network and install a package manager such as YUM or Zypper.
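As a quick pre-check, the sketch below reads /etc/os-release and the machine architecture and compares them against the list above. The ID strings in SUPPORTED are assumptions about typical os-release values, not an official mapping; always verify the exact version against the table.

```python
# Minimal sketch: check the host operating system and CPU architecture
# against the support list above. The ID values in SUPPORTED are
# assumptions about common /etc/os-release contents; adjust as needed.
import platform

SUPPORTED = {
    "alinux": "Alibaba Cloud Linux",
    "anolis": "AnolisOS",
    "kylin": "KylinOS",
    "uos": "Unity Operating System (UOS)",
    "centos": "CentOS",
    "rhel": "Red Hat Enterprise Linux",
    "sles": "SUSE Enterprise Linux Server",
    "debian": "Debian",
    "openEuler": "openEuler",
}

def os_release() -> dict:
    """Parse /etc/os-release into a key/value dict."""
    info = {}
    with open("/etc/os-release") as f:
        for line in f:
            if "=" in line:
                key, _, value = line.strip().partition("=")
                info[key] = value.strip('"')
    return info

info = os_release()
os_id, version = info.get("ID", ""), info.get("VERSION_ID", "")
arch = platform.machine()  # e.g. x86_64 or aarch64

print(f"OS: {os_id} {version}, architecture: {arch}")
if os_id in SUPPORTED:
    print(f"Detected {SUPPORTED[os_id]}; confirm the version against the table above.")
else:
    print("OS not recognized from the list above; check the documentation before deploying.")
if arch not in ("x86_64", "aarch64"):
    print("Unsupported server architecture.")
```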

How do I deploy OceanBase Database in the production environment?

The following table describes deployment solutions.

Solution Feature Infrastructure requirement Scenario
Three replicas in one data center RPO=0, low RTO, and automatic failover. Resilient to some hardware failures but not to data center or city-wide disasters. Single data center No requirements for data center or city-wide disaster recovery capabilities.
Three replicas in three data centers in the same region RPO=0, low RTO, and automatic failover. Resilient to some hardware failures and data center-level disasters, but not city-wide disasters. Three data centers in the same region. Low network latency between data centers. Requires data center-level disaster recovery, no city-wide recovery needed.
Five replicas in five data centers across three regions RPO=0, low RTO, and automatic failover. Resilient to some hardware failures, data center-level disasters, and city-wide disasters. Five data centers across three regions. Two regions must be geographically close to provide low network latency. Requires both data center-level and city-wide disaster recovery.
Two data centers in the same region + Inter-cluster data replication RPO>0, high RTO, and manual failover required. Resilient to some hardware failures and data center-level disasters, but not city-wide disasters. Two data centers in the same region. Two data centers in the same region with data center-level disaster recovery requirements.
Five replicas in three data centers across two regions + Inter-cluster data replication For data center-level failures: RPO=0, low RTO, and automatic failover. For city-wide failures: RPO>0, high RTO, and manual failover required. Resilient to some hardware failures, data center-level, and city-wide disasters. Three data centers deployed across two regions. Two cities and three data centers, requiring both data center-level and city-wide disaster recovery.
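To see why the replica counts above tolerate the failures they claim, consider a simple majority-quorum check. The following Python sketch is illustrative only and does not model OceanBase internals; the zone names and the 2+2+1 layout across three regions are hypothetical.

```python
# Illustrative sketch (not OceanBase internals): majority-based replication
# stays available as long as a strict majority of replicas survives. This
# checks whether a replica layout keeps a majority after losing one data
# center or one region. Zone names and the 2+2+1 layout are hypothetical.

def survives(replicas, failed):
    """replicas: list of (region, data_center); failed: set of failed units."""
    alive = [r for r in replicas if r[0] not in failed and r[1] not in failed]
    return len(alive) > len(replicas) // 2  # strict majority required

# Five replicas in five data centers across three regions (2 + 2 + 1).
layout = [
    ("region_A", "idc_1"), ("region_A", "idc_2"),
    ("region_B", "idc_3"), ("region_B", "idc_4"),
    ("region_C", "idc_5"),
]

print("Lose one data center:", survives(layout, {"idc_1"}))                  # True: 4 of 5 remain
print("Lose region A:       ", survives(layout, {"region_A"}))               # True: 3 of 5 remain
print("Lose regions A and B:", survives(layout, {"region_A", "region_B"}))   # False: only 1 remains
```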

What is LSE?

Large System Extensions (LSE) is a feature introduced in ARMv8.1. It provides a set of atomic instructions designed to support synchronization and mutually exclusive access in multiprocessor environments, including compare-and-swap (CAS), atomic swap (SWP), and atomic load/store arithmetic and logical operations. LSE also offers new instructions and memory barriers to maintain data consistency and ordering.

Using LSE instructions enables efficient access to shared memory and provides finer-grained locking mechanisms with lower overhead. Compared with traditional synchronization instructions, LSE can reduce lock contention, enhance concurrency performance, and improve scalability.

From which version does OCP support packages with nolse?

OCP supports OceanBase Database RPM packages marked with nolse starting from V4.3.0. This support mainly addresses the detection of LSE instruction set availability on ARM architectures.
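If you need to decide between regular and nolse packages on an ARM host, a rough check is whether the kernel reports the LSE atomics. The sketch below assumes a Linux AArch64 host where /proc/cpuinfo lists the atomics feature flag when LSE is available; it is only a heuristic, not the detection logic used by OCP or OBD.

```python
# Minimal sketch: detect whether an AArch64 Linux host exposes LSE atomics.
# Assumes the kernel lists the "atomics" feature flag in /proc/cpuinfo when
# ARMv8.1 LSE is available; use it only as a hint when choosing between
# regular and nolse packages.
import platform

def lse_supported() -> bool:
    if platform.machine() != "aarch64":
        return False  # LSE is an ARMv8.1 feature; not relevant on x86_64
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("Features"):
                return "atomics" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    if lse_supported():
        print("LSE atomics detected: a regular (non-nolse) package should work.")
    else:
        print("No LSE atomics detected: use the nolse package on ARM hosts.")
```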

Considerations for OBD deployment

You can upload OceanBase Database RPM packages both with and without nolse at the same time. OBD adaptively deploys the package that the system supports. Alternatively, you can upload only the supported package for installation.

Considerations for OCP deployment

Both nolse and regular packages can be uploaded to OCP, but adaptive installation is not supported. When you deploy the cluster, you must select the package version that matches the servers' LSE support.
