The high availability of OBKV-HBase relies on the high availability of the entire link, namely the high availability of the OBServer nodes, OceanBase Database Proxy (ODP), and the client on the link. OBKV-HBase inherits the high availability capabilities of OceanBase Database on the server side, and supports deployment in both direct connection mode and ODP-based connection mode.
High availability of OBServer nodes
OceanBase Database provides a variety of technologies for ensuring high availability of services, including intra-cluster multi-replica disaster recovery, arbitration-based disaster recovery, and inter-cluster Physical Standby Database disaster recovery. For more information, see topics under High availability.
High availability of the link
High availability in proxy mode
OceanBase Database Proxy (ODP) is a dedicated proxy server for OceanBase Database.
The high availability feature of ODP covers the following two aspects:
- High availability of ODP services. ODP services can be recovered in a timely manner after a failure occurs.
- High availability of OceanBase Database.
For example, ODP can detect and identify faulty OBServer nodes and add them to the blocklist. It can also shield applications from database faults through proper routing. When an OBServer node is recovered, ODP adds the OBServer node back to the allowlist to ensure that resources of each server are fully utilized.
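The following sketch illustrates the general idea of blocklist-based routing described above: requests are sent only to nodes that are not currently marked as faulty, and a recovered node becomes routable again once it is removed from the blocklist. The class and method names are hypothetical and do not reflect ODP's internal implementation.

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a simplified blocklist-based node selector, not ODP's actual implementation.
public class BlocklistRouter {
    private final List<String> allServers;                          // all known OBServer nodes
    private final Set<String> blocklist = ConcurrentHashMap.newKeySet();

    public BlocklistRouter(List<String> allServers) {
        this.allServers = allServers;
    }

    // Called when a health check or request failure identifies a faulty node.
    public void markFaulty(String server) {
        blocklist.add(server);
    }

    // Called when the node is detected as recovered; it becomes routable again.
    public void markRecovered(String server) {
        blocklist.remove(server);
    }

    // Route a request to any node that is not currently blocklisted.
    public String pickServer() {
        return allServers.stream()
                .filter(s -> !blocklist.contains(s))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no available OBServer node"));
    }
}
```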
High availability in direct connection mode
In the direct connection mode of OBKV-HBase, client requests are delivered directly to the corresponding OBServer node without being forwarded by ODP. In a distributed system, message addressing is the key to ensuring that requests are delivered to the correct processing node. A common solution is to cache the routing table and query it before delivering messages.
In proxy mode, ODP is deployed as an independent component, and the client does not need to handle routing. Messages are delivered to ODP by a load balancing algorithm or component, and ODP performs addressing based on the routing table. The benefit is that all messages complying with the message protocol can be addressed by ODP, with little dependence on the client. The drawback is that ODP must be deployed separately and usually does not reside on the same physical node as the client or database server, leading to message forwarding and network overheads.
In direct connection mode, the routing capability is built into the client, and message routing is completed within the client. Messages are delivered directly to the corresponding processing node, avoiding message forwarding and network overheads. However, this solution depends heavily on the client.
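As an illustration of this pattern, the following sketch shows how a direct-connection client might cache a routing table, deliver requests directly to the cached node, and refresh the route and retry when the cached entry turns out to be stale. All names and methods here are hypothetical placeholders, not the actual OBKV-HBase client API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a direct-connection client that caches routes, dispatches requests
// directly to OBServer nodes, and refreshes the route on failure. Names are hypothetical.
public class RoutingClient {
    private final Map<String, String> routeCache = new ConcurrentHashMap<>(); // partition key -> OBServer address

    public byte[] get(String partitionKey) throws Exception {
        String server = routeCache.computeIfAbsent(partitionKey, this::lookupRoute);
        try {
            return sendDirect(server, partitionKey);     // deliver the request directly, no proxy hop
        } catch (Exception e) {
            // The cached route may be stale (failover, load balancing, scaling):
            // refresh the routing table and retry once.
            routeCache.remove(partitionKey);
            String refreshed = lookupRoute(partitionKey);
            routeCache.put(partitionKey, refreshed);
            return sendDirect(refreshed, partitionKey);
        }
    }

    // Placeholder: query the routing table for the node that serves the partition.
    private String lookupRoute(String partitionKey) { return "observer-1:2881"; }

    // Placeholder: send the request over the network to the given OBServer node.
    private byte[] sendDirect(String server, String partitionKey) throws Exception { return new byte[0]; }
}
```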
The OBKV-HBase client also provides high availability capabilities similar to those of ODP:
- The OBKV-HBase client caches routing information. When the routing information changes due to scaling, failure, or load balancing of OBServer nodes, the client refreshes its routing table and retransmits the affected requests, masking database server changes from the business and ensuring business continuity.
- The OBKV-HBase client is configured with the URL of the Config Server, from which it pulls the RegionServer list. The client caches the list and pulls routing information from the target RegionServer node on the list. When the RegionServer list of an OceanBase cluster changes, the change is pushed to the Config Server, as illustrated in the sketch after this list.
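The following sketch shows, in simplified form, how a client configured with a Config Server URL might pull, cache, and refresh the RegionServer list. The class, URL, and helper methods are hypothetical and are not part of the actual OBKV-HBase client API.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative only: pulling and caching the RegionServer list from a Config Server URL.
public class ConfigServerAwareClient {
    private final String configServerUrl;                                   // hypothetical, e.g. "http://config-server:8080/services?..."
    private final AtomicReference<List<String>> regionServers = new AtomicReference<>();

    public ConfigServerAwareClient(String configServerUrl) {
        this.configServerUrl = configServerUrl;
        this.regionServers.set(fetchRegionServerList());                    // initial pull and cache
    }

    // Re-pull the list when the cluster topology changes (the change reaches the Config Server first).
    public void refresh() {
        regionServers.set(fetchRegionServerList());
    }

    // Routing information is then pulled from one of the cached RegionServer nodes.
    public String pickRegionServer() {
        List<String> servers = regionServers.get();
        return servers.get(ThreadLocalRandom.current().nextInt(servers.size()));
    }

    // Placeholder: HTTP call to the Config Server that returns the RegionServer list.
    private List<String> fetchRegionServerList() {
        return List.of("regionserver-1:2881", "regionserver-2:2881");
    }
}
```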