This topic describes the precautions and use constraints for the primary/standby cluster configuration feature.
Precautions
A standby cluster is not allowed to trigger major freezes.
A standby cluster synchronizes major freeze information from the primary cluster; only the primary cluster can initiate major freezes. The standby cluster schedules major compactions based on the major freeze information of the primary cluster. However, a standby cluster can trigger minor freezes independently of the primary cluster.
If the major freeze information of the standby cluster falls out of sync, synchronization for the system tenant of the standby cluster may get stuck.
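For example, assuming an obclient session connected to the sys tenant of each cluster, a major freeze can be initiated only on the primary cluster:
obclient> ALTER SYSTEM MAJOR FREEZE;
whereas a minor freeze can also be triggered locally on the standby cluster:
obclient> ALTER SYSTEM MINOR FREEZE;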
The synchronization latency of a standby cluster must not grow too large.
If the standby cluster falls severely out of sync, it internally performs a rebuild to pull full data from the primary cluster and then starts to synchronize logs from the latest log point. This process leaves gaps in the internal multiversion data, which can cause failovers and reads on the standby cluster to fail. Services recover after the standby cluster catches up with the data of the primary cluster.
To ensure that a lossy failover can succeed after a rebuild operation is performed on the standby cluster, you can set the undo_retention variable for each common tenant of the primary cluster to specify the retention period of multiversion data. For more information about the undo_retention variable, see undo_retention. For example, you can run the following command under a common tenant of the primary cluster to set undo_retention to 1800. In this way, a failover can succeed provided that the synchronization latency of the standby cluster is within 1800 seconds, regardless of whether a rebuild operation is performed.
obclient> SET GLOBAL undo_retention = 1800;
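You can then confirm the new value under the same tenant with a standard MySQL-compatible statement:
obclient> SHOW GLOBAL VARIABLES LIKE 'undo_retention';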
Use constraints
| Item | Description |
|---|---|
| Cluster-specific synchronization | Data is synchronized from the primary cluster to standby clusters by cluster. |
| Tenant-specific synchronization | Data cannot be synchronized from the primary cluster to standby clusters by tenant. |
| Maximum number of standby clusters | 31 |
| DDL statements for the system tenant of a standby cluster | Fewer than 500 per second |
| Synchronization of configuration items | Modifications of configuration items in the primary cluster are not synchronized to standby clusters. |
| Synchronization of system variables of common tenants | Modifications of system variables of common tenants in the primary cluster are synchronized to standby clusters. |
| Table- or database-specific Locality and Primary Zone settings in the primary cluster | Not supported |
| Tenant-specific Locality and Primary Zone settings in the primary cluster | Supported |
| Replica types | * Full-featured, log-only, read-only, and encrypted voting replicas are supported. * Hybrid deployment of replicas cannot be configured in Locality. |
| Upgrade mode | Rolling upgrades are supported starting from V2.2.60. |
| GTS | GTS must be enabled for all common tenants. See the first example after this table. |
| One-phase commit of distributed transactions | enable_one_phase_commit must be set to False for both the primary and standby clusters. |
| Schema objects | Duplicated tables are supported, but Locality cannot be set for individual tables. |
| Adding a standby cluster | * After an upgrade, a standby cluster can be added only after a major compaction is performed. * The standby cluster must be empty. * The primary cluster must not contain tables that are being split. * The primary cluster can contain only tenant-level Locality and Primary Zone configurations. * The primary cluster must not be in the process of an upgrade. * The OceanBase Database version of the standby cluster must be consistent with that of the primary cluster. * The replicas deleted from the schema must all be garbage-collected. * GTS must be enabled for all common tenants. * enable_one_phase_commit must be set to False for the primary cluster. |
| Switchover | * All servers in the primary cluster must be active. * No major compaction error has occurred. * While writes are disabled during a switchover, the original primary cluster returns error 4688 or a timeout error. * If the switchover duration exceeds the value of max_stale_time_for_weak_consistency, which is 5s by default, the standby cluster stops the weak-consistency read service. * enable_rereplication must be set to True for the standby cluster; otherwise, the switchover cannot be performed. * When one primary cluster and multiple standby clusters are configured, all standby clusters must be active, or the inactive standby clusters must be in the DISABLED state. * The primary and standby clusters must be of the same version. * The primary cluster must not be under physical backup. * No replica is in the Restore state. * In maximum protection mode, the log transfer mode of the primary cluster must be SYNC. * The primary cluster must not be in the process of an upgrade. See the second and third examples after this table. |
| Failover | * Failovers are supported in maximum performance, maximum protection, and maximum availability modes. * A lossy failover requires that all servers be active. * After a failover, a standby cluster can be added only after a major compaction is performed. |
| Protection mode | * The maximum performance, maximum protection, and maximum availability modes are supported. * The configurations of standby clusters in maximum protection or maximum availability mode are subject to the following constraints: * Only one standby cluster is in SYNC mode. * The standby cluster in SYNC mode cannot be modified, deleted, or disabled. * The log transfer mode of the standby cluster in SYNC mode cannot be changed to ASYNC. * Switchovers in maximum protection or maximum availability mode are subject to the following constraints: * Only the standby cluster in SYNC mode can be switched to the primary role. * The log transfer mode of the new primary cluster must be set to SYNC so that the cluster remains in maximum protection mode after a primary/standby switchover. |
| Physical backup and restoration | * When standby clusters exist, the primary cluster cannot be restored. * Standby clusters do not support backup and log archiving. * Standby clusters cannot be physically restored. * After a tenant is physically restored in the primary cluster, a standby cluster can be added only after a major compaction is performed. |
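The GTS and one-phase-commit requirements above can be verified before you add a standby cluster. The following is a minimal sketch, assuming an obclient session and that, as in OceanBase Database V2.x, the ob_timestamp_service tenant variable controls the tenant timestamp service. Under each common tenant of the primary cluster, enable GTS:
obclient> SET GLOBAL ob_timestamp_service = 'GTS';
Under the sys tenant of both the primary and standby clusters, disable one-phase commit and confirm the setting:
obclient> ALTER SYSTEM SET enable_one_phase_commit = False;
obclient> SHOW PARAMETERS LIKE 'enable_one_phase_commit';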
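Before a switchover, you can check that all servers in the primary cluster are active and that rereplication is enabled for the standby cluster. This sketch assumes the sys tenant and the __all_server internal table of OceanBase Database V2.x:
obclient> SELECT svr_ip, svr_port, status FROM oceanbase.__all_server;
obclient> ALTER SYSTEM SET enable_rereplication = True;
obclient> SHOW PARAMETERS LIKE 'max_stale_time_for_weak_consistency';
Every server should report the active status before you proceed.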
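The switchover itself is then driven from the sys tenant. The statements below follow the physical standby syntax of OceanBase Database V2.x; treat them as a sketch and verify them against the documentation for your exact version. On the original primary cluster:
obclient> ALTER SYSTEM COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
Then, on the standby cluster that is to take over the primary role:
obclient> ALTER SYSTEM COMMIT TO SWITCHOVER TO PRIMARY;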