What are the server requirements for installing OceanBase Database?
The following table lists the minimum configuration requirements for servers where OceanBase Database is to be installed.
| Server type | Quantity | Minimum functional configuration | Minimum performance configuration |
|---|---|---|---|
| OceanBase Cloud Platform (OCP) server | 1 | 16 CPU cores, 32 GB of memory, and 1.5 TB of storage space | 32 CPU cores, 128 GB of memory, 1.5 TB SSD, and 10 Gbit/s NICs |
| OceanBase Database computing server | 3 | 4 CPU cores and 16 GB of memory<br>Note: The log disk size must be more than four times the memory size. The data disk must provide sufficient space for storing the required amount of data. | 32 CPU cores, 256 GB of memory, and 10 Gbit/s NICs<br>Note: The log disk size must be more than four times the memory size. The data disk must provide sufficient space for storing the required amount of data. |
To ensure high availability of OCP services, deploy OCP on three nodes and use a hardware- or software-based load balancer such as F5, Alibaba Cloud Server Load Balancer (SLB), or OceanBase Database ob_dns.
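For a quick sanity check before installation, the following minimal C sketch reads the online CPU core count and physical memory size of a Linux server and compares them against the minimum functional configuration for a computing server from the table above (4 CPU cores, 16 GB of memory, and a log disk larger than four times the memory). The thresholds simply restate the table values; adjust them for the OCP server or for the performance configuration.

```c
/* Minimal sketch: check a Linux server against the minimum functional
 * configuration for an OceanBase Database computing server (4 CPU cores,
 * 16 GB of memory, log disk > 4x memory). Thresholds are taken from the
 * table above; adjust them for other server types or configurations. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    long page_size = sysconf(_SC_PAGESIZE);
    long pages = sysconf(_SC_PHYS_PAGES);
    double mem_gb = (double)pages * (double)page_size / (1024.0 * 1024.0 * 1024.0);

    printf("CPU cores: %ld (minimum: 4)\n", cores);
    printf("Memory: %.1f GB (minimum: 16 GB)\n", mem_gb);
    printf("Recommended log disk size: > %.1f GB (4x memory)\n", mem_gb * 4);

    if (cores >= 4 && mem_gb >= 16.0) {
        printf("Meets the minimum functional configuration for a computing server.\n");
    } else {
        printf("Does NOT meet the minimum functional configuration.\n");
    }
    return 0;
}
```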
The following table lists the Linux distributions supported for OceanBase Database.
| Linux distribution | Version | Server architecture |
|---|---|---|
| Alibaba Cloud Linux | 2 | x86_64 or ARM64 |
| Anolis OS | 8.6 and later | x86_64 (including Hygon) or ARM64 (Kunpeng and Phytium) |
| KylinOS | V10 | x86_64 (including Hygon) or ARM64 (Kunpeng and Phytium) |
| Unity Operating System (UOS) | V20 | x86_64 (including Hygon) or ARM64 (Kunpeng and Phytium) |
| NFSChina | 4.0 and later | x86_64 (including Hygon) or ARM64 (Kunpeng and Phytium) |
| Inspur KOS | 5.8 | x86_64 (including Hygon) or ARM64 (Kunpeng and Phytium) |
| CentOS/Red Hat Enterprise Linux | 7.x and 8.x | x86_64 (including Hygon) or ARM64 (Kunpeng and Phytium) |
| SUSE Enterprise Linux | 12 SP3 and later | x86_64 (including Hygon) |
| Debian | 8.3 and later | x86_64 (including Hygon) |
| openEuler | 20.03 LTS SP1/SP2 and 22.10 LTS | x86_64 (including Hygon) or ARM64 (Kunpeng and Phytium) |
| LinxOS | V6.0.99 and V6.0.100 | x86_64 (including Hygon) or ARM64 (Kunpeng and Phytium) |
Note
Before you use a Linux distribution, configure the network and install a package manager such as YUM or Zypper.
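To see which distribution and version a server is running, for comparison against the table above, a minimal C sketch like the following prints the relevant fields of /etc/os-release. It only reports the standard ID, VERSION_ID, and PRETTY_NAME fields and does not decide support by itself.

```c
/* Minimal sketch: print the distribution ID and version from /etc/os-release
 * so they can be compared against the supported Linux distributions above. */
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *fp = fopen("/etc/os-release", "r");
    if (!fp) {
        perror("fopen /etc/os-release");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof(line), fp)) {
        if (strncmp(line, "ID=", 3) == 0 ||
            strncmp(line, "VERSION_ID=", 11) == 0 ||
            strncmp(line, "PRETTY_NAME=", 12) == 0) {
            fputs(line, stdout);
        }
    }
    fclose(fp);
    return 0;
}
```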
How do I deploy OceanBase Database in a production environment?
The following table describes the deployment solutions.
| Deployment solution | Characteristic | Infrastructure requirement | Applicable scenario |
|---|---|---|---|
| Three replicas in one IDC | A recovery point objective (RPO) of 0, a low recovery time objective (RTO), and automatic failover. This solution enables your application to recover from some hardware failures, but not IDC-level or city-level failures. | One IDC | Scenarios where you do not expect your application to recover from IDC-level or city-level failures |
| Three replicas in three IDCs in the same region | An RPO of 0, a low RTO, and automatic failover. This solution enables your application to recover from some hardware and IDC-level failures, but not city-level failures. | Three IDCs in the same region, with a short network latency among the IDCs | Scenarios where you expect your application to recover from IDC-level failures, but not city-level failures |
| Five replicas in five IDCs across three regions | An RPO of 0, a low RTO, and automatic failover. This solution enables your application to recover from some hardware, IDC-level, and city-level failures. | Five IDCs across three regions, of which two regions are geographically close with a low network latency | Scenarios where you expect your application to recover from IDC-level and city-level failures |
| Two IDCs in the same region, with data replication enabled between clusters | An RPO greater than 0, a high RTO, and manual failover. This solution enables your application to recover from some hardware and IDC-level failures, but not city-level failures. | Two IDCs in the same region | Scenarios with two IDCs in the same region that require IDC-level disaster recovery |
| Five replicas in three IDCs across two regions, with data replication enabled between clusters | IDC-level failure: an RPO of 0, a low RTO, and automatic failover. City-level failure: an RPO greater than 0, a high RTO, and manual failover. This solution enables your application to recover from some hardware, IDC-level, and city-level failures. | Three IDCs across two regions | Scenarios with three IDCs across two regions that require IDC-level and city-level disaster recovery |
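The RPO and failover behavior in the table follow from majority-based (Paxos) replication: a cluster remains writable with an RPO of 0 as long as a majority of replicas survives. The following illustrative C sketch, which assumes a hypothetical 2/2/1 replica placement across three regions and is not OceanBase's actual placement logic, shows why the five-replica, three-region solution survives the loss of any single region.

```c
/* Illustrative sketch (not OceanBase internals): a Paxos group stays
 * available as long as a majority of replicas survives. The 2/2/1
 * distribution below is an assumed placement for illustration only. */
#include <stdio.h>

int main(void) {
    int replicas_per_region[] = {2, 2, 1};   /* assumed placement */
    int regions = 3;
    int total = 0;
    for (int i = 0; i < regions; i++) total += replicas_per_region[i];
    int majority = total / 2 + 1;

    for (int failed = 0; failed < regions; failed++) {
        int surviving = total - replicas_per_region[failed];
        printf("Region %d fails: %d of %d replicas survive, majority is %d -> %s\n",
               failed + 1, surviving, total, majority,
               surviving >= majority ? "still available (RPO 0)" : "unavailable");
    }
    return 0;
}
```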
What is LSE?
LSE, short for Large System Extensions, is a feature introduced in ARMv8.1. LSE provides a set of atomic instructions for synchronized, mutually exclusive access to shared memory, including Compare and Swap (CAS), atomic swap (SWP), and atomic memory operations such as LDADD, with acquire and release variants for maintaining data consistency and ordering.
LSE instructions provide efficient access to shared memory with finer-grained locking and lower overhead. Compared with conventional exclusive load/store (LDXR/STXR) sequences, LSE instructions reduce lock contention, improve concurrency performance, and offer better scalability.
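As a rough illustration, the following C11 sketch shows the kind of atomic operations that benefit from LSE. When compiled for ARM with LSE enabled (for example, with `-march=armv8.1-a+lse`), compilers typically lower these calls to single LSE instructions such as LDADD and CAS; without LSE they fall back to load-exclusive/store-exclusive retry loops. The function names are illustrative and not part of OceanBase.

```c
/* Minimal sketch: a lock-free counter using C11 atomics. With LSE enabled,
 * the compiler can emit single LSE instructions (LDADDAL, CASAL); without
 * LSE, the same code compiles to LDXR/STXR retry loops. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_long counter = 0;

/* Atomic fetch-and-add: one LSE LDADD instruction when LSE is enabled. */
long add_one(void) {
    return atomic_fetch_add_explicit(&counter, 1, memory_order_acq_rel);
}

/* Atomic compare-and-swap: one LSE CAS instruction when LSE is enabled. */
int set_if_equals(long expected, long desired) {
    return atomic_compare_exchange_strong_explicit(
        &counter, &expected, desired,
        memory_order_acq_rel, memory_order_acquire);
}

int main(void) {
    add_one();
    printf("counter = %ld, CAS(1 -> 100) %s\n",
           atomic_load(&counter),
           set_if_equals(1, 100) ? "succeeded" : "failed");
    return 0;
}
```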
Which OCP version supports RPM packages with nolse marks?
OCP V4.3.0 and later versions support OceanBase Database RPM packages with the nolse mark. These packages are built without LSE instructions so that they can run on ARM servers whose CPUs do not support LSE.
What are the considerations for deploying OceanBase Database by using obd?
You can upload OceanBase Database RPM packages with or without the nolse mark to OceanBase Deployer (obd). obd automatically deploys the appropriate package based on whether the system supports LSE. Alternatively, you can upload only a package that the system supports to obd for installation.
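If you want to check the system support information yourself on an ARM (aarch64) Linux server, a minimal sketch like the following reads the hardware capability bits exposed by the kernel; `HWCAP_ATOMICS` indicates LSE support. This is only an illustration of how such a check can be done, not how obd performs it.

```c
/* Minimal sketch, assuming an aarch64 Linux server: check whether the CPU
 * reports LSE atomics (HWCAP_ATOMICS), which determines whether a regular
 * package or one with the nolse mark is appropriate. */
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_ATOMICS
#define HWCAP_ATOMICS (1 << 8)   /* aarch64 hwcap bit for LSE atomics */
#endif

int main(void) {
    unsigned long hwcap = getauxval(AT_HWCAP);
    if (hwcap & HWCAP_ATOMICS) {
        printf("LSE atomics supported: a regular (non-nolse) package can be used.\n");
    } else {
        printf("LSE atomics not reported: use an RPM package with the nolse mark.\n");
    }
    return 0;
}
```

On most aarch64 systems, the `atomics` flag in the Features line of /proc/cpuinfo reflects the same capability.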
What are the considerations for deploying OceanBase Database by using OCP?
OCP allows you to upload OceanBase Database RPM packages with or without the nolse mark, but it does not support adaptive installation. When you select the version of the OceanBase cluster to be installed, you must choose a package based on whether the target servers support LSE.