The multi-replica mechanism of OceanBase clusters provides superb disaster recovery capabilities. This mechanism supports automatic failover upon a server-, IDC-, or city-level failure without data loss; that is, the recovery point objective (RPO) is 0.
The primary/standby cluster architecture is an important supplement to the high availability (HA) capabilities of OceanBase Database. If the primary cluster becomes unavailable, for example, when the majority of its replicas fail, a standby cluster takes over the services. Both lossless switchover (RPO = 0) and lossy failover (RPO > 0) are supported to minimize service downtime.
OceanBase Database allows you to create, maintain, manage, and monitor one or more standby clusters. A standby cluster is a hot backup of the production cluster, namely the primary cluster. The administrator can offload resource-intensive table operations to standby clusters to improve system performance and resource utilization.
Primary/standby cluster configuration
The primary/standby cluster configuration of OceanBase Database supports one primary cluster and up to 31 standby clusters. You can manage the primary and standby clusters by using SQL statements or in OceanBase Cloud Platform (OCP).
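As an illustration, the Python sketch below drives a role switchover through a MySQL-protocol connection. The host, credentials, and the exact ALTER SYSTEM statements are assumptions for this example; the supported syntax differs across OceanBase Database versions, so verify it against the SQL reference for your release.

```python
# A minimal sketch of issuing a role change against a standby cluster
# from a MySQL-protocol client. The address, credentials, and ALTER
# SYSTEM syntax below are illustrative assumptions, not a verified
# reference for any particular OceanBase Database version.
import mysql.connector

conn = mysql.connector.connect(
    host="standby.example.com",  # hypothetical standby cluster address
    port=2881,
    user="root@sys",             # sys tenant administrator
    password="******",
)
cur = conn.cursor()

# Planned, lossless switchover (RPO = 0): swap the PRIMARY and
# PHYSICAL STANDBY roles after all redo logs have been synchronized.
cur.execute("ALTER SYSTEM SWITCHOVER TO PRIMARY")

# Disaster failover (possibly RPO > 0): promote the standby even if
# some redo logs from the failed primary never arrived.
# cur.execute("ALTER SYSTEM FAILOVER TO PRIMARY")

cur.close()
conn.close()
```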
The primary cluster, namely the production cluster, is the only cluster that accepts business writes and strong-consistency reads. It plays the PRIMARY role.
A standby cluster is a transactionally consistent hot backup of the primary cluster. It plays the PHYSICAL STANDBY role. The primary cluster automatically transmits redo logs (clogs) to the standby cluster. Upon receiving the redo logs, the standby cluster first persists them locally in write-ahead logging (WAL) fashion and then replays them, redoing the changes to the user data and database object metadata of the primary cluster so as to remain physically consistent with it.
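The toy sketch below illustrates the apply order described above: persist first, replay second, so that an acknowledged log entry survives a crash. All names are invented for illustration and do not reflect OceanBase Database's actual implementation.

```python
# A toy standby-side applier: every received redo log entry is made
# durable (write-ahead) before it is replayed against local state.
import os

class StandbyApplier:
    def __init__(self, wal_path: str):
        self.wal = open(wal_path, "ab")
        self.state = {}  # in-memory stand-in for user data and metadata

    def on_redo_log(self, entry: bytes) -> None:
        # 1. Write-ahead: persist the entry with a length prefix and
        #    fsync, so a crash never loses an acknowledged entry.
        self.wal.write(len(entry).to_bytes(4, "big") + entry)
        self.wal.flush()
        os.fsync(self.wal.fileno())
        # 2. Replay: apply the change to mirror the primary's state.
        self.apply(entry)

    def apply(self, entry: bytes) -> None:
        key, _, value = entry.partition(b"=")
        self.state[key] = value  # toy "redo" of a key-value change
```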
OceanBase Database also provides multiple data verification mechanisms to ensure the consistency between the data restored by standby clusters and the original data in the primary cluster.
Data restore verification mechanism
- The log entry checksum verification ensures the accuracy of each log entry.
- The log accumulation checksum verification ensures that all log entries are recorded (the entry and accumulation checks are sketched below).
- The transaction checksum verification ensures the integrity of each transaction.
- The row checksum verification ensures the integrity of each row.
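To make the first two checks concrete, the toy sketch below validates a per-entry checksum and an accumulated (chained) checksum over a log stream. CRC32 and the chaining rule are assumptions chosen for the sketch, not OceanBase Database's actual log format.

```python
import zlib

def verify_stream(entries):
    """entries yields (payload, entry_crc, acc_crc) triples."""
    acc = 0
    for payload, entry_crc, acc_crc in entries:
        # Log entry checksum: the payload must match its own checksum,
        # catching corruption inside a single entry.
        if zlib.crc32(payload) != entry_crc:
            raise ValueError("corrupted log entry")
        # Accumulated checksum: chain the running value through each
        # entry, so a dropped or reordered entry breaks every later check.
        acc = zlib.crc32(payload, acc)
        if acc != acc_crc:
            raise ValueError("log entry missing or out of order")
    return acc
```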
Multi-replica data verification mechanism
- The column checksum verification ensures the consistency between columns in the replicas.
- The partition data checksum and row count verification ensures the consistency between partition data in the replicas (see the sketch below).
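The sketch below shows the idea behind these two checks: each replica computes order-independent per-column checksums and a row count for a partition, and the results must match across all replicas. The XOR-of-CRC aggregation is an assumption for illustration; the real aggregation may differ.

```python
import zlib

def partition_summary(rows):
    """Per-column checksums plus a row count for one replica's partition.
    XOR makes the result independent of row order within the partition."""
    n_cols = len(rows[0]) if rows else 0
    col_sums = [0] * n_cols
    for row in rows:
        for i, cell in enumerate(row):
            col_sums[i] ^= zlib.crc32(repr(cell).encode())
    return col_sums, len(rows)

def replicas_consistent(replicas):
    """True if every replica reports the same summary for the partition."""
    summaries = [partition_summary(r) for r in replicas]
    return all(s == summaries[0] for s in summaries)
```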
Index table and primary table data verification mechanism
- The column checksum verification ensures data consistency between the columns in the index table and the columns in the primary table.
- The row count verification ensures that the index table and the primary table contain the same number of rows (see the sketch below).
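The same pattern applies to index verification, as sketched below: recompute per-column checksums and a row count over the indexed columns of the primary table and compare them with those of the index table. The row layout and helper names here are assumptions for illustration.

```python
import zlib

def column_checksums(rows):
    """Order-independent per-column checksums plus a row count.
    Each row is a dict mapping column name to cell value."""
    sums = {}
    for row in rows:
        for col, cell in row.items():
            sums[col] = sums.get(col, 0) ^ zlib.crc32(repr(cell).encode())
    return sums, len(rows)

def index_consistent(primary_rows, index_rows, indexed_cols):
    # Project the indexed columns out of the primary table, then compare
    # both the per-column checksums and the row counts with the index.
    projected = [{c: row[c] for c in indexed_cols} for row in primary_rows]
    return column_checksums(projected) == column_checksums(index_rows)
```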
Cross-cluster data verification mechanism
- The partition data checksum and row count verification ensures the consistency between partition data in the primary cluster and its standby clusters (see the sketch below).
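A hypothetical driver for this check might compare per-partition summaries collected from both clusters at the same restore point, as sketched below. How the summaries themselves are produced is internal to the clusters; the map shape here is an assumption.

```python
def cross_cluster_diff(primary_summaries, standby_summaries):
    """Each argument maps partition_id -> (checksum, row_count), taken
    from the primary and a standby at the same restore point. Returns
    the partitions whose data checksum or row count disagrees."""
    mismatched = []
    for pid, summary in primary_summaries.items():
        if standby_summaries.get(pid) != summary:
            mismatched.append(pid)
    return mismatched
```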