To route each user request to the optimal OBServer node and keep OceanBase Database running with high performance, ODP considers the locations of the replicas involved in the request, the read/write splitting routing strategy configured by the user, the optimal network path in a multi-region deployment of OceanBase Database, and the status and load of the OBServer nodes.
Before you read this section, we recommend that you familiarize yourself with the following routing concepts:
Zone
Region
Server list
RS list
Location cache
Replica
Major compaction
Strong-consistency read/weak-consistency read
Read/write zone
Partitioned table
Partitioning key
Execution plans of OceanBase Database Proxy
OceanBase Database Proxy supports three types of execution plans: local, remote, and distributed. Because remote plans are inefficient, ODP tries its best to avoid them and to obtain local plans instead.
Role of ODP routing
After you understand the basic concepts and physical meanings of these terms, you can understand the routing logic of ODP. Because partitions are physically distributed across servers and local execution plans are the most efficient, ODP must route each SQL statement accurately. The process includes SQL statement parsing, partition calculation, partition information acquisition, and replica selection.
Routing for non-partitioned tables
For non-partitioned tables, ODP can route requests directly based on the replica information in its Location Cache. ODP maintains a mapping between partitions and OBServer nodes in the Location Cache. ODP parses the table name in an SQL statement and queries the cache for the IP address of the server that hosts the partition to which the table belongs. Depending on the validity of the cached information, one of the following three cases occurs (see the example after this list):
The cache does not contain the required information. In this case, ODP queries the latest mapping from an OBServer node and stores the mapping in the cache.
The cache contains the required information, but the information is invalid. In this case, ODP requeries the OBServer node and updates the cache.
The cache contains valid information. In this case, ODP uses the information in the cache.
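As an illustration, consider a hypothetical non-partitioned table, which consists of a single partition. ODP only needs to parse the table name from the statement and look up, in its Location Cache, the OBServer node that hosts that partition:

    -- Hypothetical non-partitioned table: it has exactly one partition.
    CREATE TABLE account (id BIGINT PRIMARY KEY, balance DECIMAL(16,2));

    -- ODP parses the table name "account", looks up the partition's location in its
    -- Location Cache (refreshing the entry from an OBServer node if it is missing or
    -- invalid), and forwards the statement to that server.
    SELECT balance FROM account WHERE id = 1001;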
Routing for partitioned tables
Compared with non-partitioned tables, partitioned tables require additional steps to obtain the partition ID and related information. After ODP obtains the information from the Location Cache, it determines whether the table uses one level of partitioning or also has subpartitions. Based on the type of the partitioning key and the calculation method, ODP calculates the partition ID and obtains the information about the leader and follower replicas.
During partition calculation, ODP obtains the partitioning key and its type from the table schema, parses the SQL statement to extract the value of the partitioning key, and then calculates the partition that the statement targets, as shown in the following example.
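The following sketch uses a hypothetical hash-partitioned table to show what ODP needs for partition calculation: the partitioning key and its type come from the table schema, and the key value comes from the SQL statement itself.

    -- Hypothetical table with 4 hash partitions on the partitioning key user_id.
    CREATE TABLE orders (
      order_id BIGINT,
      user_id  BIGINT,
      amount   DECIMAL(16,2),
      PRIMARY KEY (order_id, user_id)
    ) PARTITION BY HASH(user_id) PARTITIONS 4;

    -- The statement carries the partitioning key value (user_id = 42), so ODP can
    -- calculate the target partition and route to the server that hosts it.
    SELECT * FROM orders WHERE user_id = 42;

    -- Without the partitioning key in the condition, ODP cannot calculate the
    -- partition and falls back to the behavior described below.
    SELECT * FROM orders WHERE amount > 100;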
In normal cases, after ODP calculates the target partition, it routes the SQL statement to the server that hosts that partition, which avoids remote execution and improves efficiency. ODP V3.2.0 optimizes routing for partitioned tables whose target partition cannot be calculated: earlier versions selected a random server of the tenant in this scenario, whereas ODP V3.2.0 selects a random server from among the servers that host partitions of the table. This improves the hit rate and minimizes remote execution.
Replica routing (standard deployment)
For strong-consistency read requests in which the SQL statement specifies a table name, ODP routes the request to the OBServer node that hosts the leader replica of the target partition. For weak-consistency read requests, login authentication requests, and strong-consistency read requests that do not specify a table name, ODP uses one of the following three routing strategies, depending on the deployment mode and the read consistency level: primary/standby balancing (default), prefer standby, or prefer non-merged standby.
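The strategies below apply to weak-consistency read requests, among other cases. As a hedged example, in OceanBase's MySQL mode a client can mark a query as a weak-consistency read either per statement with a hint or per session with the ob_read_consistency variable (the table name is hypothetical):

    -- Per-statement weak-consistency read, eligible for the routing strategies below.
    SELECT /*+ READ_CONSISTENCY(WEAK) */ * FROM orders WHERE user_id = 42;

    -- Per-session setting: subsequent reads in this session are weak-consistency reads.
    SET @@ob_read_consistency = 'WEAK';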
Primary/Standby balancing (default)
The system selects an OBServer node for routing based on the following priority:
Nodes in the same region and IDC that are not involved in a major compaction.
Nodes in the same region but different IDCs that are not involved in a major compaction.
Nodes in the same region and IDC that are involved in a major compaction.
Nodes in the same region but different IDCs that are involved in a major compaction.
Nodes in different regions that are not involved in a major compaction.
Nodes in different regions that are involved in a major compaction.
Prefer standby
For weak-consistency read requests in a standard deployment, the system supports the prefer standby (follower-first) routing strategy. You can enable it by setting the proxy_route_policy variable at the user level. Compared with primary/standby balancing, this strategy prioritizes follower nodes.
To enable this strategy, execute the set @proxy_route_policy='follower_first'; statement. In this mode, the system routes requests to follower nodes even if they are involved in a major compaction. The system selects an OBServer node for routing based on the following priority (see the example after this list):
Follower nodes in the same region and IDC that are not involved in a major compaction.
Follower nodes in the same region but different IDCs that are not involved in a major compaction.
Follower nodes in the same region and IDC that are involved in a major compaction.
Follower nodes in the same region but different IDCs that are involved in a major compaction.
Leader nodes in the same region and IDC that are not involved in a major compaction.
Leader nodes in the same region but different IDCs that are not involved in a major compaction.
Follower nodes in different regions that are not involved in a major compaction.
Follower nodes in different regions that are involved in a major compaction.
Leader nodes in different regions that are not involved in a major compaction.
Leader nodes in different regions that are involved in a major compaction.
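A minimal example of enabling the prefer standby strategy for a session connected through ODP; the subsequent weak-consistency read is then routed according to the priority list above (the table name is hypothetical):

    -- Enable follower-first routing for the current user session through ODP.
    SET @proxy_route_policy = 'follower_first';

    -- Weak-consistency reads issued after this point prefer follower replicas,
    -- even replicas that are involved in a major compaction.
    SELECT /*+ READ_CONSISTENCY(WEAK) */ * FROM orders WHERE user_id = 42;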
Prefer non-merged standby
In a standard deployment and for weak-consistency read requests, if you execute the set @proxy_route_policy='unmerge_follower_first'; statement, the system preferentially routes requests to follower nodes that are not involved in a major compaction. The system selects an OBServer node for routing based on the following priority (see the example after this list):
Follower nodes in the same region and IDC that are not involved in a major compaction.
Follower nodes in the same region but different IDCs that are not involved in a major compaction.
Follower nodes in the same region and IDC that are involved in a major compaction.
Follower nodes in the same region but different IDCs that are involved in a major compaction.
Leader nodes in the same region and IDC that are not involved in a major compaction.
Leader nodes in the same region but different IDCs that are not involved in a major compaction.
Follower nodes in different regions that are not involved in a major compaction.
Follower nodes in different regions that are involved in a major compaction.
Leader nodes in different regions that are not involved in a major compaction.
Leader nodes in different regions that are involved in a major compaction.
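Similarly, a minimal example of enabling the prefer non-merged standby strategy for a session (the table name is hypothetical):

    -- Prefer follower replicas that are not involved in a major compaction.
    SET @proxy_route_policy = 'unmerge_follower_first';

    -- This weak-consistency read follows the priority list above.
    SELECT /*+ READ_CONSISTENCY(WEAK) */ * FROM orders WHERE user_id = 42;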
Other cases
In a standard deployment and for weak-consistency read requests, if the proxy_route_policy variable has a value other than those specified earlier, the system uses the primary/standby balancing strategy as the fallback option.