This topic describes the environment configurations required for deploying primary and standby clusters.
Hardware and OS
OceanBase Database can be deployed on x86 or ARM physical servers or on mainstream virtual machines (VMs), and runs on mainstream Linux distributions.
The primary and standby clusters can be deployed in a heterogeneous architecture in the following scenarios:

- Heterogeneous servers and operating systems (OSs): for example, the primary cluster uses x86 servers, whereas the standby clusters use ARM servers.
- Different numbers of zones: for example, the primary cluster is configured with five zones and has five replicas, whereas the standby cluster is configured with three zones and has three replicas.
- Different numbers of OBServer nodes in zones: for example, the primary cluster has three OBServer nodes in each zone, whereas the standby cluster has one OBServer node in each zone. Different numbers of OBServer nodes in the zones of the primary and standby clusters result in unequal CPU, memory, and storage resources in the clusters.
Although OceanBase Database supports heterogeneous deployment, it introduces additional maintenance costs. For example, tenant creation may hang because the resources of the primary and standby clusters are unequal. To reduce these maintenance costs, OceanBase Database provides the following two best practices for heterogeneous deployment.
| Heterogeneous deployment plan | Example | Benefit | Drawback |
|---|---|---|---|
| Use equal hardware configurations. | The primary cluster has five replicas and five zones, with one OBServer node in each zone, and the standby cluster has five peer OBServer nodes in the same layout. | No additional maintenance costs arise. | Deployment costs are high, and the standby cluster occupies a large amount of resources. |
| Use the same number of OBServer nodes in each zone but different numbers of zones for the primary and standby clusters. | The primary cluster has five replicas and five zones, with three OBServer nodes in each zone, and the standby cluster has one zone and one replica, with three peer OBServer nodes in the zone. | The zones have the same number of OBServer nodes, which reduces the maintenance costs arising from heterogeneous tenant resource configurations. In addition, the overall deployment costs are reduced because the number of zones can be flexibly configured. | The disaster recovery capabilities of the primary and standby clusters differ. After a primary/standby switchover, the computing resources of the new primary cluster may be insufficient, affecting its service capabilities. |
Requirements on the OceanBase Database system
Take note of the following requirements and rules for the OceanBase Database system before you deploy the primary/standby cluster configuration:
You must use OceanBase Database Enterprise Edition. The primary/standby cluster configuration is available only in OceanBase Database Enterprise Edition.
The versions of the primary and standby clusters must be the same, except during a rolling upgrade.
Note
In a rolling upgrade, the standby clusters are upgraded first, and then the primary cluster is upgraded.
During the upgrade of the primary cluster, you cannot create a standby cluster. After the upgrade of the primary cluster is complete, a major compaction needs to be performed before you can create a standby cluster. For more information about major compactions, see Manually initiate a major compaction.
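As a minimal sketch, assuming a MySQL-mode connection to the sys tenant of the primary cluster, the major compaction could be triggered as follows; the exact statement may vary by version:

```sql
-- Assumption: connected to the sys tenant of the primary cluster in MySQL mode.
-- Trigger a cluster-wide major compaction before creating the standby cluster.
ALTER SYSTEM MAJOR FREEZE;
```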
The primary/standby concept is specific to clusters and not applicable to tenants. You cannot specify a blocklist or an allowlist for a tenant.
OceanBase Database supports up to 31 standby clusters.
Each cluster has a cluster name (`cluster_name`) and a cluster ID (`cluster_id`), which uniquely identify the cluster.
In the primary/standby cluster configuration, the primary and standby clusters must have the same cluster name but different cluster IDs.
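For illustration, and assuming the values are exposed through the `cluster` and `cluster_id` parameters (names may differ across versions), you could check them in the sys tenant of each cluster:

```sql
-- Assumption: run in the sys tenant; parameter names may vary by version.
SHOW PARAMETERS LIKE 'cluster';     -- cluster name: must match on primary and standby
SHOW PARAMETERS LIKE 'cluster_id';  -- cluster ID: must differ between primary and standby
```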
In the primary cluster, locality and primary zone can be configured only at the tenant level; they cannot be configured for tables, table groups, or databases.
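A minimal sketch of tenant-level configuration, assuming a hypothetical tenant `tenant1` and placeholder zones `z1`, `z2`, and `z3`:

```sql
-- Hypothetical tenant and zone names; locality and primary zone are set per tenant only.
ALTER TENANT tenant1 LOCALITY = 'F@z1,F@z2,F@z3';
ALTER TENANT tenant1 PRIMARY_ZONE = 'z1';
```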
`ob_timestamp_service` must be set to `GTS` for all normal tenants in the primary cluster.
`enable_one_phase_commit` must be set to `False` for the primary cluster. This means that one-phase transaction commits are not allowed in the primary cluster. A configuration sketch for these two settings follows below.
Physical backup is available only for the primary cluster, not for standby clusters.
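A minimal sketch of how the two settings above might be applied, assuming a hypothetical normal tenant `tenant1`; the exact statements can vary by version:

```sql
-- Assumption: run in the sys tenant of the primary cluster (MySQL mode).
-- Disallow one-phase transaction commits at the cluster level.
ALTER SYSTEM SET enable_one_phase_commit = False;

-- Assumption: run while connected to each normal tenant (for example, tenant1).
-- Use GTS as the timestamp service for the tenant.
SET GLOBAL ob_timestamp_service = 'GTS';
```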
Physically restored tenant data cannot be synchronized to standby clusters. Therefore:
After tenants are physically restored in the primary cluster, a major compaction needs to be performed before you can create a standby cluster, as illustrated in the status-check sketch below.
If a standby cluster is being configured, tenants cannot be physically restored in the primary cluster.
A standby cluster cannot be restored from a physical backup.
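As a rough status check, assuming the OceanBase 2.x/3.x internal table layout, you could confirm in the sys tenant that the major compaction has finished before creating the standby cluster:

```sql
-- Assumption: the internal table oceanbase.__all_zone exposes compaction versions.
SELECT zone, name, value
FROM oceanbase.__all_zone
WHERE name IN ('frozen_version', 'last_merged_version');
-- The compaction is complete when last_merged_version has caught up with frozen_version.
```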