This topic describes the environment configurations required for deploying primary and standby clusters.
Hardware and OS
OceanBase Database can be deployed on x86 or Advanced RISC Machine (ARM) physical servers or on mainstream virtual machines (VMs), and runs on mainstream Linux releases.
The primary and standby clusters can be deployed in a heterogeneous architecture:
Heterogeneous servers and operating systems (OSs)
For example, the primary cluster uses x86 servers, while the standby clusters use ARM servers.
Different numbers of zones
For example, the primary cluster is configured with five zones and has five replicas, while the standby cluster is configured with three zones and has three replicas.
Different numbers of servers in zones
For example, the primary cluster has three servers in each zone, while the standby cluster has one server in each zone.
When the primary and standby clusters have different numbers of servers in their zones, the clusters end up with unequal CPU, memory, and storage resources.
Although OceanBase Database supports heterogeneous deployment, it incurs additional maintenance costs. For example, tenant creation may stall because of the unequal resources. To reduce these costs, OceanBase Database recommends the following two best practices for heterogeneous deployment.
| Heterogeneous deployment plan | Example | Advantage | Disadvantage |
|---|---|---|---|
| Hardware configurations are equal. | The primary cluster has five replicas and five zones, with one server in each zone, and the standby cluster provides five peer servers. | No additional maintenance costs arise. | Deployment costs are high, and the standby cluster occupies a large amount of resources. |
| The primary and standby clusters have the same number of servers in each zone but different numbers of zones. | The primary cluster has five replicas and five zones, with three servers in each zone, and the standby cluster has three peer servers and one replica. | The zones have the same number of servers. This reduces the maintenance costs arising from the heterogeneous tenant resource configurations. In addition, the overall deployment costs are reduced because the number of zones can be flexibly configured. | The disaster recovery capabilities of the primary and standby clusters are different. After a primary/standby switchover, the computing resources of the new primary cluster may be insufficient, affecting the service capabilities. |
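As a hedged illustration of the second plan, the statements below size tenant resource units identically on both clusters: because each zone holds the same number of identical servers, one unit definition fits both sides, and only the zone list differs. All names and sizes here are illustrative, and the resource unit syntax varies across OceanBase Database versions.

```sql
-- Illustrative unit definition; run the same statement on the primary
-- cluster (five zones) and on the standby cluster (one zone) so that
-- tenant units fit on both sides and tenant creation does not stall.
CREATE RESOURCE UNIT unit_8c32g
    MAX_CPU 8, MIN_CPU 8,
    MAX_MEMORY '32G', MIN_MEMORY '32G',
    MAX_IOPS 10000, MIN_IOPS 10000,
    MAX_DISK_SIZE '500G', MAX_SESSION_NUM 100000;

-- Primary cluster: three units per zone across five zones.
-- On the standby cluster, reuse the same unit with its own zone list,
-- for example ZONE_LIST = ('zone1').
CREATE RESOURCE POOL pool_example
    UNIT = 'unit_8c32g', UNIT_NUM = 3,
    ZONE_LIST = ('zone1','zone2','zone3','zone4','zone5');
```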
Requirements on the OceanBase Database system
Before you deploy the primary/standby cluster configuration, make sure that your OceanBase Database system meets the following requirements:
The primary/standby cluster configuration is available only in OceanBase Database Enterprise Edition.
The primary and standby clusters must run the same version of OceanBase Database, except during a rolling upgrade.
Note
A rolling upgrade upgrades the standby clusters first and then the primary cluster.
While the primary cluster is being upgraded, you cannot create a standby cluster. After the upgrade of the primary cluster is complete, a major compaction must be performed before you can create a standby cluster.
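As a hedged sketch, a major compaction can be triggered from the sys tenant of the primary cluster as follows; wait for it to complete (check the merge status view of your version) before creating the standby cluster:

```sql
-- Run in the sys tenant of the upgraded primary cluster to start
-- a major compaction across the cluster.
ALTER SYSTEM MAJOR FREEZE;
```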
The primary/standby concept applies to clusters and is unavailable for tenants. You cannot specify a blacklist or whitelist of tenants either.
OceanBase Database supports up to 31 standby clusters.
Each cluster has a cluster name (`cluster_name`) and a cluster ID (`cluster_id`), which uniquely identify the cluster.
In the primary/standby cluster configuration, the primary and standby clusters must have the same cluster name but different cluster IDs.
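For illustration, you can verify these values on each cluster with the statements below; this sketch assumes MySQL-mode SQL and the standard `cluster` and `cluster_id` parameter names:

```sql
-- Run on both the primary and the standby cluster:
-- the 'cluster' parameter (cluster name) must match,
-- while 'cluster_id' must differ between the two clusters.
SHOW PARAMETERS LIKE 'cluster';
SHOW PARAMETERS LIKE 'cluster_id';
```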
In the primary cluster, the locality and primary zone can be configured only at the tenant level; they cannot be configured for tables, table groups, or databases.
`ob_timestamp_service` must be set to `GTS` for all common tenants in the primary cluster.
`enable_one_phase_commit` must be set to `False` for the primary cluster. That is, one-phase transaction commits are not allowed in the primary cluster.
Physical backup is available only for the primary cluster, not for standby clusters.
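A minimal sketch of applying the two settings above, assuming MySQL-mode SQL; the exact syntax may vary by version:

```sql
-- As the administrator of each common tenant in the primary cluster,
-- switch the tenant's timestamp service to GTS.
SET GLOBAL ob_timestamp_service = 'GTS';

-- As the sys tenant of the primary cluster, disable one-phase commit.
ALTER SYSTEM SET enable_one_phase_commit = False;
```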
Physically restored tenant data cannot be synchronized to standby clusters. Therefore:
After tenants are physically restored in the primary cluster, a major compaction needs to be performed before you can create a standby cluster.
If a standby cluster is being configured, tenants cannot be physically restored in the primary cluster.
A standby cluster cannot be restored from a physical backup.