OceanBase Database Proxy (ODP) is a proxy service dedicated to OceanBase Database. ODP features a high availability (HA) design.
ODP deployment modes
Merged deployment

- Boot in OCP
  - ODP can access multiple clusters.
  - When OceanBase Cloud Platform (OCP) is down, ODP can still access the clusters that it has already accessed, but cannot access clusters that it has not accessed before.
- Boot based on the RootService list (only a single cluster can be accessed, through the IP address 127.0.0.1)
  - ODP accesses a fixed cluster and does not depend on OCP. If the local server fails, the server load balancer (SLB) detects the fault and switches traffic over, without requiring any additional operations on ODP.
  - Server faults can be handled in specific scenarios.
Independent deployment

- Boot in OCP
  - ODP can access multiple clusters.
  - When OCP is down, ODP can still access the clusters that it has already accessed, but cannot access clusters that it has not accessed before.
- Boot based on the RootService list
  - ODP accesses a fixed cluster and does not depend on OCP. If an error occurs in the RootService list, the cluster may become inaccessible.
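To make the difference between the two boot modes concrete, the following is a minimal Go sketch of how RootService addresses might be resolved in each mode. The names (`BootConfig`, `resolveRootServers`) and the fallback logic are hypothetical illustrations, not ODP's actual implementation.

```go
// Hypothetical sketch of the two ODP boot modes; not ODP's actual implementation.
package main

import (
	"errors"
	"fmt"
)

// BootConfig captures where ODP learns the RootService addresses from.
// Exactly one of the two fields would be set in practice.
type BootConfig struct {
	ConfigServerURL string   // Boot in OCP: cluster metadata comes from OCP.
	RootServiceList []string // Boot based on the RootService list: one fixed cluster.
}

// resolveRootServers mimics the behavior described above: a static list pins
// ODP to a fixed cluster regardless of OCP; otherwise OCP is consulted, and
// previously fetched (cached) clusters stay reachable during an OCP outage.
func resolveRootServers(cfg BootConfig, cache map[string][]string, cluster string, ocpUp bool) ([]string, error) {
	if len(cfg.RootServiceList) > 0 {
		return cfg.RootServiceList, nil // fixed cluster, OCP not involved
	}
	if ocpUp {
		// A real ODP would fetch the list from cfg.ConfigServerURL here.
		cache[cluster] = []string{"10.0.0.1:2882"} // placeholder result
	}
	if rs, ok := cache[cluster]; ok {
		return rs, nil // an already-accessed cluster survives an OCP outage
	}
	return nil, errors.New("OCP is down and the cluster was never accessed")
}

func main() {
	cache := map[string][]string{}
	cfg := BootConfig{ConfigServerURL: "http://ocp.example.com/services"}
	_, err := resolveRootServers(cfg, cache, "cluster_a", false)
	fmt.Println(err) // OCP is down and the cluster was never accessed
}
```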
ODP disaster recovery
Fault detection
- Schedules tasks to refresh the status of OBServers, zones, and the primary/standby clusters (see the refresh sketch after this list).
  - Limits: this mechanism depends on status updates of the OBServers. When an OBServer is down, for example, because its disk is unresponsive, it may not be marked inactive or stopped, so its status does not change.
- Implements server-side persistent connection and keepalive mechanisms to check the connections between ODP and backend OBServers. This way, ODP detects at the earliest opportunity that an idle connection has been closed abnormally, releases the connection, and establishes a new one (see the keepalive sketch after this list).
- Implements a client-side keepalive mechanism to check the connections between ODP and clients, which prevents an SLB from timing out and dropping idle connections.
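The scheduled status refresh can be pictured as a periodic task, as in this minimal Go sketch. The 20-second default matches the interval mentioned later in this article; the `fetchServerStatus` helper and `ServerStatus` type are hypothetical stand-ins for ODP's internal query of cluster state.

```go
// Hypothetical sketch of a scheduled OBServer status refresh.
package main

import (
	"fmt"
	"time"
)

type ServerStatus struct {
	Addr   string
	Active bool
}

// fetchServerStatus stands in for ODP's internal query of OBServer, zone,
// and primary/standby cluster state; a real implementation would ask the
// cluster's RootService.
func fetchServerStatus() []ServerStatus {
	return []ServerStatus{{Addr: "10.0.0.1:2882", Active: true}}
}

func main() {
	const refreshInterval = 20 * time.Second // default interval; configurable

	ticker := time.NewTicker(refreshInterval)
	defer ticker.Stop()

	for range ticker.C {
		// Note the limitation described above: a server whose disk hangs may
		// still report Active here, so keepalive and blocklist checks are
		// needed in addition to this refresh.
		for _, s := range fetchServerStatus() {
			fmt.Printf("server %s active=%v\n", s.Addr, s.Active)
		}
	}
}
```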
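The keepalive mechanisms can be illustrated with the operating system's TCP keepalive facility, as in the Go sketch below. The probe period is illustrative, not an ODP default; the sketch assumes an ODP listening on its default port 2883.

```go
// Illustrative sketch of enabling TCP keepalive on a proxied connection.
package main

import (
	"log"
	"net"
	"time"
)

// enableKeepalive turns on OS-level TCP keepalive probes so that a peer that
// dies silently (for example, an OBServer with a hung disk) is detected and
// the idle connection can be closed and re-established.
func enableKeepalive(c net.Conn, period time.Duration) error {
	tcp, ok := c.(*net.TCPConn)
	if !ok {
		return nil // not TCP; nothing to do
	}
	if err := tcp.SetKeepAlive(true); err != nil {
		return err
	}
	return tcp.SetKeepAlivePeriod(period)
}

func main() {
	// Client side: regular probes also keep an SLB from dropping the
	// connection as idle, avoiding the timeout errors mentioned above.
	conn, err := net.Dial("tcp", "127.0.0.1:2883") // assumes a local ODP
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	if err := enableKeepalive(conn, 30*time.Second); err != nil {
		log.Fatal(err)
	}
}
```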
Fault handling
Processing of ongoing requests

- Implements an asynchronous termination mechanism. When an OBServer fault is detected, the connection is terminated promptly to avoid a long wait and to prevent the service connection pool from being exhausted, as shown in the sketch below.
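A minimal Go sketch of asynchronous termination using context cancellation; the `faultDetected` channel stands in for ODP's internal fault detection and is purely illustrative.

```go
// Hypothetical sketch of asynchronously terminating an in-flight request.
package main

import (
	"context"
	"fmt"
	"time"
)

// forwardRequest blocks on a slow backend call but is cut short as soon as
// the fault is known, instead of holding the connection until its own timeout.
func forwardRequest(ctx context.Context) error {
	select {
	case <-time.After(10 * time.Second): // slow backend call
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	faultDetected := make(chan struct{})
	go func() {
		<-faultDetected
		cancel() // terminate the connection promptly
	}()

	go func() {
		time.Sleep(100 * time.Millisecond)
		close(faultDetected) // simulate an OBServer fault being detected
	}()

	fmt.Println(forwardRequest(ctx)) // prints "context canceled" quickly
}
```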
Processing of new requests

- Uses a blocklist to avoid sending new requests to a faulty OBServer (see the blocklist sketch after this list).
  - An OBServer that fails to be accessed multiple times is blocklisted. After the OBServer has been blocklisted for a period of time, a detection request is sent to it to check whether it has become accessible again.
    - Limits: if a client times out and disconnects from an OBServer, the OBServer is not blocklisted.
  - If the status of an OBServer changes to inactive, the OBServer is blocklisted.
    - Limits: the recovery time objective (RTO) depends on the OBServer detection time and the interval at which the scheduled task refreshes the OBServer status. The default interval is 20s, and a custom interval can be specified.
- Asynchronously refreshes the table location cache to detect replica switchovers. If a replica is switched over because of a server fault, requests are routed to the new replica (see the cache refresh sketch after this list).
  - Limits: the OBServer must notify ODP of routing errors. If the OBServer is down and returns no routing feedback, the table location cache is not refreshed.
- Adopts the primary/standby cluster architecture across two IDCs. If the primary cluster changes, ODP can quickly switch over to the new primary cluster.
  - Limits: the RTO depends on the OBServer switchover time and the interval at which the scheduled task refreshes the OBServer status. The default interval is 20s, and a custom interval can be specified.
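The blocklist behavior described above can be sketched as follows. The failure threshold and retry interval are illustrative values, not ODP's actual configuration items.

```go
// Hypothetical sketch of a blocklist with periodic detection probes.
package main

import (
	"fmt"
	"time"
)

type entry struct {
	failures  int
	blockedAt time.Time
}

type Blocklist struct {
	failureThreshold int
	retryInterval    time.Duration
	entries          map[string]*entry
}

func NewBlocklist() *Blocklist {
	return &Blocklist{
		failureThreshold: 5,                // illustrative threshold
		retryInterval:    20 * time.Second, // illustrative retry interval
		entries:          map[string]*entry{},
	}
}

// ReportFailure records an access failure; enough failures blocklist the server.
func (b *Blocklist) ReportFailure(addr string) {
	e := b.entries[addr]
	if e == nil {
		e = &entry{}
		b.entries[addr] = e
	}
	e.failures++
	if e.failures == b.failureThreshold {
		e.blockedAt = time.Now()
	}
}

// Allow reports whether new requests may be routed to the server. Once the
// retry interval has elapsed, a single detection request is let through to
// probe whether the server has become accessible again.
func (b *Blocklist) Allow(addr string) bool {
	e := b.entries[addr]
	if e == nil || e.failures < b.failureThreshold {
		return true
	}
	if time.Since(e.blockedAt) >= b.retryInterval {
		e.blockedAt = time.Now() // allow one probe, then block again
		return true
	}
	return false
}

func main() {
	b := NewBlocklist()
	for i := 0; i < 5; i++ {
		b.ReportFailure("10.0.0.1:2882")
	}
	fmt.Println(b.Allow("10.0.0.1:2882")) // false: the server is blocklisted
}
```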
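And a minimal Go sketch of the asynchronous table location cache refresh, assuming a hypothetical `fetchLocation` helper in place of a real RootService query.

```go
// Hypothetical sketch of refreshing a table location cache on routing feedback.
package main

import (
	"fmt"
	"sync"
)

type LocationCache struct {
	mu  sync.RWMutex
	loc map[string]string // table -> server address of the leader replica
}

func (c *LocationCache) Lookup(table string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	addr, ok := c.loc[table]
	return addr, ok
}

// OnRoutingFeedback is called when an OBServer replies that it no longer
// serves the replica. The stale entry is refreshed in the background so the
// caller is not blocked. Note the limitation above: if the server is down
// and sends no feedback, this path is never triggered.
func (c *LocationCache) OnRoutingFeedback(table string, done chan<- struct{}) {
	go func() {
		addr := fetchLocation(table)
		c.mu.Lock()
		c.loc[table] = addr
		c.mu.Unlock()
		close(done)
	}()
}

// fetchLocation stands in for asking RootService where the replica now lives.
func fetchLocation(table string) string { return "10.0.0.2:2882" }

func main() {
	c := &LocationCache{loc: map[string]string{"t1": "10.0.0.1:2882"}}
	done := make(chan struct{})
	c.OnRoutingFeedback("t1", done)
	<-done
	addr, _ := c.Lookup("t1")
	fmt.Println(addr) // 10.0.0.2:2882 after the refresh
}
```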