This topic describes the considerations and limitations of the primary/standby cluster configuration.
Considerations
A standby cluster is not allowed to trigger major freezes.
A standby cluster synchronizes major freeze information from the primary cluster; only the primary cluster can initiate major freezes. The standby cluster schedules major compactions based on the major freeze information of the primary cluster. However, a standby cluster can trigger minor freezes independently of the primary cluster.
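As an illustration, the freeze commands look like the following. The `ALTER SYSTEM ... FREEZE` syntax is per OceanBase's cluster management statements; verify it against the statement reference for your version.

```sql
-- On the PRIMARY cluster only: initiate a major freeze.
-- Standby clusters receive the major freeze information through synchronization
-- and schedule their major compactions accordingly.
obclient> ALTER SYSTEM MAJOR FREEZE;

-- On a STANDBY cluster: a minor freeze can be triggered locally,
-- independently of the primary cluster.
obclient> ALTER SYSTEM MINOR FREEZE;
```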
If the major freeze information of the standby cluster falls out of sync, system tenant synchronization on the standby cluster may stall.
The synchronization latency of a standby cluster must be kept low.
If the standby cluster is severely out of sync, it internally performs a rebuild operation to pull full data from the primary cluster and then synchronizes logs from the latest log point. This process leaves voids in the internal multiversion data, which can in turn cause failover failures and read failures on the standby cluster. Services recover after data is synchronized from the primary cluster to the standby cluster.
To ensure that a lossy failover can succeed after a rebuild operation is performed on the standby cluster, you can set the `undo_retention` variable for each normal tenant of the primary cluster to specify the retention period of the multiversion data. For more information about the `undo_retention` variable, see undo_retention. For example, you can execute the following statement under a normal tenant of the primary cluster to set `undo_retention` to 1800. This way, a failover can succeed as long as the synchronization latency of the standby cluster is within 1800 seconds, regardless of whether a rebuild operation is performed.
obclient> SET GLOBAL undo_retention = 1800;
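To confirm that the setting took effect in the current tenant, you can query the variable back with standard MySQL-compatible syntax, which obclient supports:

```sql
-- Verify the global retention period (in seconds) of multiversion data.
obclient> SHOW GLOBAL VARIABLES LIKE 'undo_retention';
```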
Limitations
| Item | Limitation |
|---|---|
| Cluster-level synchronization | Data is synchronized from the primary cluster to standby clusters by cluster. |
| Tenant-level synchronization | Data cannot be synchronized from the primary cluster to standby clusters by tenant. |
| Maximum number of standby clusters | 31 |
| DDL statement rate for the sys tenant of a standby cluster | Fewer than 500 DDL statements per second |
| Synchronization of parameters | Modifications to parameters in the primary cluster are not synchronized to standby clusters. |
| Synchronization of system variables of user tenants | Modifications to regular system variables in the primary cluster are synchronized to standby clusters. |
| Table-level or database-level locality and primary zone settings in the primary cluster | Not supported |
| Tenant-level locality and primary zone settings in the primary cluster | Supported |
| Replica types | |
| Upgrade mode | Rolling upgrades are supported starting from OceanBase Database V2.2.60. |
| GTS | GTS must be enabled for all user tenants. |
| One-phase commits of distributed transactions | `enable_one_phase_commit` must be set to `False` for the primary and standby clusters. |
| Schema objects | Tables can be copied, but Locality cannot be set for tables. |
| Adding a standby cluster | |
| Standby cluster operations | DML operations, explicit activation of transactions, SELECT FOR UPDATE, and DDL operations on the standby cluster are not allowed. |
| Switchover | |
| Failover | |
| Protection mode | |
| Physical backup and restore | |
| Disconnecting a standby cluster | |
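As a sketch of the one-phase-commit limitation above, and assuming `enable_one_phase_commit` is a cluster-level parameter modifiable via `ALTER SYSTEM SET` (verify this against the parameter reference for your OceanBase Database version), disabling it would look like:

```sql
-- Run under the sys tenant of BOTH the primary cluster and every standby cluster,
-- so that distributed transactions never use the one-phase commit optimization.
obclient> ALTER SYSTEM SET enable_one_phase_commit = False;
```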