The multi-replica mechanism of OceanBase clusters provides strong disaster recovery capabilities: when a server-, IDC-, or city-level failure occurs, the cluster fails over automatically without data loss, that is, with a recovery point objective (RPO) of 0.
The primary/standby cluster architecture is an important supplement to this high-availability capability. When the primary cluster becomes unavailable, whether expectedly or unexpectedly, for example because the majority of its replicas are faulty, a standby cluster can take over services. To minimize service downtime, OceanBase Database provides two failover capabilities: lossless failover with an RPO of 0 and lossy failover with an RPO greater than 0.
OceanBase Database allows you to create, maintain, manage, and monitor one or more standby clusters. A standby cluster maintains a hot backup of the production cluster, that is, the primary cluster. The administrator can offload resource-intensive operations on tables to standby clusters to improve system performance and resource utilization.
Primary/standby cluster configuration
The primary/standby cluster configuration of OceanBase Database supports one primary cluster and up to 31 standby clusters. You can manage the primary and standby clusters by using SQL statements or through OceanBase Cloud Platform (OCP).
The primary cluster, namely the production cluster, is the only cluster that supports business writes and strong-consistency reads. It plays the PRIMARY role.
A standby cluster is a data backup of the primary cluster that preserves transaction consistency. It plays the PHYSICAL STANDBY role. The primary cluster automatically transfers REDO logs to the standby clusters; each standby cluster stores the REDO logs and replays them to restore user data and schemas, so that its data stays physically consistent with the primary cluster.
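The shape of this pipeline can be pictured with a minimal Python sketch. Everything in it (RedoRecord, PrimaryLog, StandbyReplayer, the key=value payloads) is a made-up stand-in rather than OceanBase internals; it only illustrates the flow described above: the primary assigns log sequence numbers to redo records and ships them, and the standby replays them in order against its own copy of the data.

```python
from dataclasses import dataclass

@dataclass
class RedoRecord:
    lsn: int        # log sequence number assigned by the primary
    payload: bytes  # serialized mutation (user data or schema change)

class PrimaryLog:
    """Append-only redo log on the primary; records are shipped to standbys."""
    def __init__(self):
        self.next_lsn = 0

    def append(self, payload: bytes) -> RedoRecord:
        rec = RedoRecord(self.next_lsn, payload)
        self.next_lsn += 1
        return rec

class StandbyReplayer:
    """Stores shipped redo records and replays them in strict LSN order."""
    def __init__(self):
        self.applied_lsn = -1
        self.state = {}  # replayed key-value state, standing in for user data

    def receive_and_replay(self, rec: RedoRecord) -> None:
        # A gap in LSNs would mean a lost record, so replay is strictly ordered.
        assert rec.lsn == self.applied_lsn + 1, "redo gap detected"
        key, _, value = rec.payload.partition(b"=")
        self.state[key] = value  # "replay" the mutation against local state
        self.applied_lsn = rec.lsn

primary, standby = PrimaryLog(), StandbyReplayer()
for mutation in (b"k1=v1", b"k2=v2", b"k1=v3"):
    standby.receive_and_replay(primary.append(mutation))
print(standby.state)  # {b'k1': b'v3', b'k2': b'v2'}
```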
OceanBase Database also provides multiple data verification mechanisms, described below, to ensure that the data restored by a standby cluster is consistent with the original data in the primary cluster.
Data restoration verification mechanism
- The log entry checksum verification ensures the accuracy of each log entry.
- The log accumulation checksum verification ensures that all log entries are recorded (see the sketch after this list).
- The transaction checksum verification ensures the integrity of each transaction.
- The row checksum verification ensures the integrity of each row.
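To make the first two checks concrete, here is a minimal sketch, with CRC32 standing in for whatever checksum function the engine actually uses: each entry carries its own checksum, and a running accumulated checksum is chained over all entries so that a corrupted, missing, or reordered entry is detectable at replay time.

```python
import zlib

def entry_checksum(payload: bytes) -> int:
    # Per-entry check: detects corruption inside a single log entry.
    return zlib.crc32(payload)

def accumulate(prev_acc: int, entry_cksum: int) -> int:
    # Accumulation check: chains each entry's checksum onto the running value,
    # so a missing or reordered entry changes every later accumulated checksum.
    return zlib.crc32(entry_cksum.to_bytes(4, "big"), prev_acc)

entries = [b"txn1:insert row", b"txn1:commit", b"txn2:update row"]
acc = 0
for payload in entries:
    cksum = entry_checksum(payload)
    # On replay, the standby recomputes both values and compares them with
    # the values recorded in the log; a mismatch aborts the restore.
    assert entry_checksum(payload) == cksum
    acc = accumulate(acc, cksum)
print(f"accumulated checksum over {len(entries)} entries: {acc:#010x}")
```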
Multi-replica data verification mechanism
- The column checksum verification ensures the consistency between columns in the replicas.
- The partition data checksum and row count verification ensures the consistency between partition data in the replicas (see the sketch after this list).
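A toy version of this cross-replica comparison follows; the table layout, the use of CRC32, and the XOR combination are all illustrative assumptions rather than OceanBase's actual algorithm. The idea it shows is that each replica independently reduces its partition to per-column checksums plus a row count, and only those small summaries are compared.

```python
import zlib

def partition_summary(rows):
    """Reduce a replica's partition to per-column checksums and a row count."""
    n_cols = len(rows[0]) if rows else 0
    col_cksums = [0] * n_cols
    for row in rows:
        for i, cell in enumerate(row):
            # XOR-combining per-cell checksums makes the summary independent
            # of physical row order, which may differ between replicas.
            col_cksums[i] ^= zlib.crc32(repr(cell).encode())
    return col_cksums, len(rows)

replica_a = [(1, "alice"), (2, "bob"), (3, "carol")]
replica_b = [(3, "carol"), (1, "alice"), (2, "bob")]  # same data, reordered

assert partition_summary(replica_a) == partition_summary(replica_b)
print("replica column checksums and row counts match")
```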
Index table and primary table data verification mechanism
- The column checksum verification ensures the data consistency between the columns in the index table and the columns in the primary table.
- The row count verification ensures the consistency between the data sizes of the index table and the primary table (see the sketch after this list).
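The index check can be sketched the same way; the tables and column positions below are hypothetical. Because an index stores a subset (and reordering) of the primary table's columns, the comparison checksums only the indexed columns on both sides and also compares row counts.

```python
import zlib

def column_subset_summary(rows, col_indexes):
    """Checksum only the listed columns of each row, plus the row count."""
    cksum = 0
    for row in rows:
        key = tuple(row[i] for i in col_indexes)
        cksum ^= zlib.crc32(repr(key).encode())  # order-independent XOR combine
    return cksum, len(rows)

# Hypothetical primary table (id, name, city) with an index on (city, id).
primary_rows = [(1, "alice", "sf"), (2, "bob", "nyc"), (3, "carol", "sf")]
index_rows = [("sf", 1), ("nyc", 2), ("sf", 3)]

# Both sides contribute the same (city, id) values, so the summaries match.
assert column_subset_summary(index_rows, [0, 1]) == \
       column_subset_summary(primary_rows, [2, 0])
print("index table consistent with primary table")
```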
Cross-cluster data verification mechanism
- The partition data checksum and row count verification ensures the consistency between partition data in the primary cluster and the standby cluster, as shown below.
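Operationally this is the same kind of summary comparison, just across clusters. The sketch below assumes (and invents) a mapping from (partition_id, data_version) to a (checksum, row_count) pair, computed independently on each side at a common replay point.

```python
# Hypothetical per-partition summaries: (partition_id, data_version) ->
# (partition checksum, row count). In practice each cluster computes its own
# summaries, and the comparison runs at a replay point both clusters share.
primary_summary = {(1001, 7): (0x5AC391EF, 1_000_000),
                   (1002, 7): (0x0E4277B1, 250_000)}
standby_summary = {(1001, 7): (0x5AC391EF, 1_000_000),
                   (1002, 7): (0x0E4277B1, 250_000)}

for key, expected in primary_summary.items():
    got = standby_summary.get(key)
    if got != expected:
        raise RuntimeError(f"partition {key} diverged: "
                           f"primary={expected}, standby={got}")
print("primary and standby partition data consistent")
```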