Best practices for read/write splitting in OceanBase Cloud

Last Updated: 2025-08-12 09:47:55

To alleviate the read and write pressure on enterprise databases and reduce the mutual impact between read and write operations, OceanBase Cloud provides read/write splitting capabilities. This allows read and write operations to be processed separately, enhancing system response speed and helping enterprises build efficient and stable database systems. This topic introduces the implementation methods and configuration procedures for read/write splitting in OceanBase Cloud, based on the weak-consistency read principle (eventual consistency) of OceanBase Database.

Principles

Weak-consistency read

In a distributed system, the multi-replica feature of databases allows for various types of read consistency. Strong-consistency reads ensure linearizability, meaning that the latest version of the data is always read. In contrast, weak-consistency reads do not guarantee that the latest data will be read in every operation.

OceanBase Database is a distributed database based on Multi-Paxos. It stores data in multiple replicas across different zones or nodes. When data is updated, the leader replica executes the modification statements and synchronizes the commit log (clog) to the other replicas. A transaction is considered committed once its logs are persisted on a majority of the replicas, including the leader. During this process, the follower replicas persist the logs, which provides disaster recovery capability, but they do not guarantee that persisted logs are replayed immediately. As a result, a follower's data state may lag behind the leader's.
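The majority-commit rule described above can be sketched in a few lines. This is an illustrative Python snippet with hypothetical names, not OceanBase source code:

```python
def is_committed(persisted_on, paxos_members):
    """A transaction is committed once its clog has been persisted on a
    majority of the Paxos members (the leader is one of them)."""
    majority = len(paxos_members) // 2 + 1
    return len(set(persisted_on) & set(paxos_members)) >= majority

# With three Paxos replicas, persistence on any two of them suffices.
print(is_committed(["zone1", "zone2"], ["zone1", "zone2", "zone3"]))  # True
print(is_committed(["zone1"], ["zone1", "zone2", "zone3"]))           # False
```

Because only a majority must persist the log before commit, a minority of replicas can fail or lag without endangering durability, which is exactly why follower data may temporarily trail the leader.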

OceanBase Database supports two consistency levels: STRONG and WEAK.

  • STRONG indicates strong consistency: the latest data is always read, and requests are routed to the leader replica.

  • WEAK indicates weak consistency: the latest data is not guaranteed to be read, and requests are preferentially routed to follower replicas.

Write operations in OceanBase Database are always strongly consistent and are therefore always handled by the leader replica. Read operations are strongly consistent by default and are also handled by the leader, but you can explicitly request weak-consistency reads so that follower replicas serve them.

For more information, see Weak-consistency read.
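For per-statement control, OceanBase's MySQL-compatible mode accepts a `READ_CONSISTENCY(WEAK)` optimizer hint (verify availability in your version's documentation). The sketch below, with a hypothetical helper name, adds the hint to SELECT statements only, since writes are always handled by the leader:

```python
WEAK_HINT = "/*+ READ_CONSISTENCY(WEAK) */"

def with_weak_read(sql: str) -> str:
    """Prepend the weak-consistency hint to a SELECT so that a follower
    or read-only replica may serve it; leave writes untouched."""
    head, _, rest = sql.lstrip().partition(" ")
    if head.upper() != "SELECT":
        return sql  # writes are always strongly consistent
    return f"SELECT {WEAK_HINT} {rest}"

print(with_weak_read("SELECT c1 FROM t1"))
# SELECT /*+ READ_CONSISTENCY(WEAK) */ c1 FROM t1
```

Alternatively, weak reads can usually be enabled for a whole session (for example via a system variable such as `ob_read_consistency`, if your version supports it) rather than per statement.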

Eventual consistency

Eventual consistency is a special form of weak consistency: after a write operation, the system guarantees that all replicas converge to a consistent state once the change has propagated. As long as no new update operations occur, every replica will eventually return the same data.

Replicas

OceanBase Database supports the following types of replicas, with read and write capabilities determined by the replica type. For more information, see Log streams and replicas.

  • Full-featured replica (FULL/F): A full-featured replica is a Paxos replica. It can form a Paxos group and participate in election voting; both the leader and its followers are roles of full-featured replicas.

  • Read-only replica (READONLY/R): A read-only replica is a non-Paxos replica. It cannot form a Paxos group and does not participate in election voting. The read-only replica feature of an OceanBase cluster instance is implemented by the read-only replicas of the OceanBase kernel: instead of voting as Paxos members, they act as observers that catch up with the logs of the Paxos members in real time and replay them locally. When the business can tolerate slightly stale data, they can provide read-only services. For more information about Paxos, see the Paxos protocol. At the proxy level, after you create a read-only access endpoint, you can configure the proxy service to direct business traffic to the read-only replica's server.

  • Columnstore replica (COLUMNSTORE/C): A columnstore replica is similar to a read-only replica. It is a non-Paxos replica and cannot form a Paxos group or participate in election voting.

Access endpoints

An access endpoint is a network address used to connect to and access cloud database services. Access endpoints include the primary access endpoint, read/write access endpoint, and read-only access endpoint:

  • The primary access endpoint and read/write access endpoint determine the routing strategy for requests based on whether a weak-consistency read is required. Weak-consistency read requests are prioritized to follower replicas or read-only replicas.

  • The read-only access endpoint only accepts weak-consistency read requests and routes them to follower replicas or read-only replicas.

  • To add an access endpoint:

    • You can set an endpoint to read-only to separate it from the primary access endpoint for read/write splitting.

    • You can set an endpoint for read/write splitting, automatically forwarding write requests to the leader replica and read requests to the standby instance or read-only replica based on the settings.
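The routing performed by these endpoint types is handled by the cloud service and ODP, not by application code, but it can be illustrated with a simplified sketch (the endpoint addresses are hypothetical):

```python
# Hypothetical endpoint addresses, for illustration only.
READ_WRITE_ENDPOINT = "rw.example.oceanbase.cloud"
READ_ONLY_ENDPOINT = "ro.example.oceanbase.cloud"

def pick_endpoint(sql: str, weak_read_ok: bool) -> str:
    """Send weakly consistent reads to the read-only endpoint; route
    writes and strong-consistency reads to the read/write endpoint."""
    is_read = sql.lstrip().upper().startswith("SELECT")
    return READ_ONLY_ENDPOINT if is_read and weak_read_ok else READ_WRITE_ENDPOINT
```

A read-only endpoint accepts only weak-consistency read traffic, while a read/write endpoint accepts both kinds and decides per request whether a follower or read-only replica may serve it.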

OceanBase Cloud provides three ways to implement read/write splitting:

  • Add an endpoint to implement read/write splitting
  • Add a read-only replica to implement read/write splitting
  • Achieve read/write splitting through primary and standby instances

Implement read/write splitting by adding an access endpoint

Prerequisites

  • The cloud service provider of the current cluster is Huawei Cloud or Alibaba Cloud.

  • You have registered a cloud account.

  • A private connection or VPC peering connection is created for the cluster. For more information, see Obtain a connection string.

Procedure

  1. Log in to the OceanBase Cloud console.

  2. In the left-side navigation pane, click Instances.

  3. On the Instances page, click the name of the target cluster instance to go to the instance overview page.

  4. In the left-side navigation pane, click Tenants.

  5. On the Tenants page, click the name of the target tenant to go to the tenant overview page.

  6. On the right side of the deployment topology, click Add Access Endpoint.

  7. Set the access endpoint parameters as needed.

      Parameter Description
      Network Type Select a private connection or VPC peering connection as needed.
      Endpoint Type The type of endpoint to add.
      • Read-Only: implements read/write splitting by isolating access to a read-only replica from the primary endpoint.
      • Read/Write Splitting: supports automatic read/write splitting, where write requests are automatically forwarded to the leader replica, and read requests are forwarded to standby instances or read-only replicas based on the settings.
      Zone The access zone of the endpoint. This zone serves as the access zone for the read-only replica and the data access zone for read requests.
      Read Traffic Distribution
      • By Replica Type: traffic is sent to replicas of the selected type. If there are multiple replicas of this type, traffic is distributed among them based on the configured traffic balancing strategy.
      • By Replica: traffic is sent to the selected replica.

        Note

        This option is supported only on Alibaba Cloud, and only when the ODP version is V4.3.1 or later.

      Failover
      • When replicas are selected by type: if replicas of the selected type are unavailable, read traffic is automatically routed to the primary replica.
      • When replicas are selected by replica: if a disaster recovery replica is specified and all selected replicas are unavailable, traffic is routed to the disaster recovery replica, with automatic traffic balancing among replicas.
      Consistency Level Eventual consistency. There is data replication latency between the read-only access zone and the primary access zone, so query results may lag behind the primary access zone (depending on the replication latency), but the data will eventually be consistent.

What to do next

You can view the read/write mode on the deployment topology page of the tenant overview page.

Implement read/write splitting for a single instance by using read-only replicas

You can use read-only replicas to implement read/write splitting in HTAP scenarios. When analytical processing (AP) is complex, read/write splitting by using read-only replicas can isolate transaction processing (TP) and AP workloads and ensure the stable operation of TP services.

  • Hybrid transactional and analytical processing (HTAP) is a database architecture or system that supports online transaction processing (OLTP) and online analytical processing (OLAP). An HTAP system can process TP and AP workloads in real time in a single system, without the data synchronization latency and additional costs caused by the traditional architecture that separates TP and AP workloads. OceanBase Database implements HTAP based on the concept of "the same data and the same engine," enabling a system to efficiently process online real-time transactions and complex analysis tasks, thereby providing better business response capabilities.

  • TP (transaction processing) refers to a workload type that involves high-concurrency, short-duration read and write operations on a database. It emphasizes data consistency and real-time processing. Each TP operation typically touches a small amount of data, such as inserting, updating, or deleting a few rows. TP corresponds to online transaction processing (OLTP) scenarios, such as bank transactions and order management. For example, when a user places an order, the system needs to simultaneously update the inventory table and the order table.

  • AP (analytical processing) refers to a workload type that involves complex queries and aggregate analysis on large-scale datasets to support decision-making. AP mainly involves read-only operations and typically scans a large amount of data. AP queries are complex and may contain multiple JOIN operations or aggregate operations. AP is suitable for online analytical processing (OLAP) scenarios, such as report generation and trend analysis. For example, a company can query all sales data of the previous month to analyze regional sales performance.

To implement read/write splitting for a single instance in an HTAP scenario, you need to create read-only replicas for the cluster instance, enable read-only replicas for the tenant in the cluster instance, and add the access endpoint of the read-only replica for the tenant.

  • Create read-only replicas for the cluster instance:

    First, configure read-only replicas for the cluster instance to ensure that the database system can serve read operations independently of write operations. Read-only replicas run on nodes in the cluster that mainly handle query (read) requests, while the primary node continues to handle write operations.

    The database system keeps the data of the read-only replicas consistent with that of the primary node through primary/standby synchronization (logical or physical replication based on transaction logs). OceanBase Cloud relies on its own distributed storage design to synchronize the primary node's data to multiple read-only replicas in an eventually consistent or strongly consistent manner.

  • Enable read-only replicas for the tenant in the cluster instance:

    In OceanBase Cloud, each tenant is a logically isolated database space, and you can enable the read-only replica feature for a specific tenant. The tenant's data is then synchronized to the read-only nodes, supporting read/write splitting: read-only replicas provide additional resources for read operations, so reads are no longer limited by the throughput of the primary node.

    After read-only replicas are enabled, the primary node continues to serve write operations, and read operations can be distributed to the read-only replicas based on the configured strategy (such as a load balancing strategy or a fine-tuned routing strategy).

  • Add the access endpoint of the read-only replica for the tenant:

    The system adds a dedicated access endpoint for the read-only replica to the tenant (usually in the form of a domain name or IP address), allowing the client to distinguish between read and write requests.

    After the configuration is complete, the application can direct write operations (such as INSERT, UPDATE, and DELETE) to the primary node and read operations (such as SELECT queries) to the access endpoint of the read-only replica. This operation is independent for multiple tenants in the multi-tenant architecture of OceanBase Cloud.

Limitations

  • Instance type: The read-only replica feature is supported only for transactional cluster instances.

  • Deployment solution: For cluster instances deployed in two or more IDCs, OceanBase Database allows you to create read-only replicas. Each read-only replica allows you to add one more proxy endpoint.

  • Number of read-only replicas per zone: The number of read-only replica nodes in one zone cannot exceed the number of nodes in the corresponding zone of the cluster instance. For example, if the cluster instance has three zones with two nodes each, each zone can have at most two read-only replica nodes.

  • Specifications:

    • The node specifications of all read-only replicas must be the same.

    • The node specifications of all read-only replicas must be smaller than that of a single node of the cluster instance.

    • The minimum node specification of a read-only replica depends on the version of the cluster instance to which it belongs. If the cluster instance is of OceanBase Database V4.x, the minimum node specification of the read-only replica can be 4 cores. If the cluster instance is of OceanBase Database V3.x, the minimum node specification of the read-only replica can be 8 cores.

  • Number of read-only replicas:

    • For versions earlier than V4.3.5, you can create a maximum of three read-only replicas, one or two read-only columnstore replicas, or a combination of the two. For V4.3.5 and later, you can create a maximum of 10 read-only replicas.

    • If you plan to purchase more read-only replicas later, each new read-only replica instance you purchase allows you to add one more proxy endpoint.

  • Database proxy version: If you want to use the read-only columnstore replica feature, contact OceanBase Technical Support to upgrade the database proxy to ODP V4.3.2 or later.
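Among these limitations, the per-zone node-count rule is easy to check programmatically before purchase. The sketch below uses hypothetical data structures and is not an OceanBase API:

```python
def overloaded_zones(cluster_nodes, readonly_nodes):
    """Return the zones whose planned read-only node count exceeds the
    cluster instance's node count in the same zone."""
    return [z for z, n in readonly_nodes.items() if n > cluster_nodes.get(z, 0)]

# A three-zone cluster with two nodes per zone allows at most two
# read-only nodes per zone; planning three in zone2 violates the limit.
print(overloaded_zones({"zone1": 2, "zone2": 2, "zone3": 2},
                       {"zone1": 2, "zone2": 3}))  # ['zone2']
```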

Prerequisites

  • You have the Project Admin or Instance Admin role. For more information, see Role permissions.

  • You have created an instance and a tenant. For more information, see Create an instance and Create a tenant.

  • The private endpoint service and endpoints have been created for the cluster instance. For more information, see Connect to an instance.

Procedure

  1. Add a read-only replica.

    1. Log in to the OceanBase Cloud console.

    2. In the left-side navigation pane, click Instances.

    3. In the instance list, find the target cluster instance, click ··· in the Actions column, and then select Manage Read-Only Replicas.

    4. In the Manage Read-Only Replicas dialog box, select the type of the replica to be added.

      • Add a read-only replica: A read-only replica is suitable for scenarios that require read isolation. A read-only replica is created only in one zone. We recommend that you set the specifications of the read-only replica to be the same as those of the original instance. After the read-only replica is created, you can set up data synchronization in the tenant settings to use the read-only replica normally.

      • Add a read-only columnstore replica: A read-only columnstore replica stores data tables in columnar format. The OB kernel automatically converts the data storage format. This improves query performance in HTAP and OLAP scenarios. After the read-only columnstore replica is created, you need to create a read-only columnstore replica for the tenant on the tenant management page. Then, you can query the read-only columnstore replica for data analysis by using an SQL statement.

      Note

      If you want to use the read-only columnstore replica feature, contact OceanBase Technical Support to upgrade the database proxy to ODP V4.3.2 or later.

    5. After you click OK, the system redirects you to the payment page. Set Node and Read-Only Replica Storage as needed, and complete the payment.

      Parameter Description
      Zone The zone where the read-only replica is located.
      Read-Only Replica Specification The specifications of the read-only replica. The unit price is the same as that of the cluster instance.
      Number of Read-Only Replica Nodes The number of nodes used by the read-only replica. Before you set the node count and node specifications, note the following items:
      • In a single zone, the node specifications of all nodes of a read-only replica must be the same. The node specifications of read-only replicas in different zones can be different. For management convenience, we recommend that you set the node specifications and cluster instance specifications of all read-only replicas to be the same.
      • The number of read-only replica nodes in a single zone cannot exceed the number of nodes of the cluster instance in that zone. For example, if the cluster instance has three zones with two nodes in each zone, the maximum number of read-only replica nodes in each zone is two.
      • The minimum number of CPU cores supported for a read-only replica node depends on the version of the cluster instance to which the node belongs. If the cluster instance is OceanBase Database V4.x, the minimum number of CPU cores supported is four. If the cluster instance is OceanBase Database V3.x, the minimum number of CPU cores supported is eight.
      Storage Specification The storage size of the read-only replica, in GiB.

      Note

      The storage specifications of a read-only replica are constrained in the same way as those of the cluster instance.

  2. Add a read-only replica for a tenant.

    1. In the left-side navigation pane, click Tenants.

    2. In the tenant list, find the target tenant, click ··· in the Actions column, and then select Manage Read-Only Replicas.

    3. In the Manage Read-Only Replicas dialog box, you can manage read-only replicas for the tenant. To add a read-only replica for the tenant, turn on the switch in the Tenant Read-Only Replica column.

      Note

      • When the number of nodes of a read-only replica is greater than or equal to the number of units of the tenant and the remaining resources of all nodes of the read-only replica are greater than or equal to the resources of the tenant, you can enable a read-only replica for the tenant. For example, if the cluster instance is 3(1-1-1) or 2(1-1), the number of nodes of the read-only replica must be 1. If the cluster instance is 9(3-3-3), the number of nodes of the read-only replica can be 1 to 3. If the resources are insufficient, we recommend that you upgrade the node specifications of the read-only replica. For more information, see Manage read-only replica specifications.
      • We recommend that you set the specifications of the read-only replica for the tenant to be the same as those of the cluster instance. If the specifications are inconsistent, the read-only replica may fail to be added.
    4. In the dialog box that appears, click OK. On the topology map of the instance overview page, you can view the read/write splitting status.

  3. Add a read-only replica access address for the tenant.

    1. Click the name of the target tenant in the tenant list to go to the tenant overview page.

    2. On the right side of the deployment topology, click Add Access Endpoint.

    3. Set the access endpoint parameters as needed.

      Parameter Description
      Endpoint Type The type of the endpoint to be added. Read-Only: specifies a read-only replica. The read-only replica is isolated from the primary endpoint for read-only operations. This implements read/write splitting.
      Zone The zone of the endpoint. This zone serves as the zone of the read-only replica and the zone of the data available for read requests.
      Read Traffic Distribution
      • By Replica Type: traffic is sent to the replica of the selected type. If there are multiple replicas of the selected type, traffic is distributed to these replicas based on the configured traffic balancing strategy.
      • By Replica: traffic is sent to the selected replica.

        Note

        This option is supported only on Alibaba Cloud, and only when the ODP version is V4.3.1 or later.

      Failover
      • If replicas of the selected type are unavailable, read traffic is automatically routed to the primary replica. This option is available only when the replicas are set by type.
      • If the specified replicas are unavailable, traffic is routed to the disaster recovery replica. If multiple replicas are specified, traffic is automatically balanced among them. This option is available only when the replicas are set by name.
      Consistency Level Eventual consistency. There is data replication latency between the read-only zone and the primary zone, so query results may lag behind the primary zone; however, the data will eventually be consistent.

Subsequent actions

You can view the read/write splitting mode in the deployment relationship diagram of the tenant overview page.

Achieve read/write splitting through primary and standby instances

In traditional databases, read-only reporting applications that are not sensitive to data timeliness are often moved to standby databases. This approach meets statistical query requirements while reducing pressure on the primary database. OceanBase Cloud supports read/write splitting between primary and standby instances within the same cloud: you create a standby instance, create private endpoints for the tenants in both the primary and standby instances, and configure routing so that read requests are automatically directed to the standby instance.

Prerequisites

Before you create a standby instance, make sure that the following conditions are met:

  • You have created an instance and a tenant. For more information, see Create an instance and Create a tenant.

  • The billing mode of the current cluster instance is pay-as-you-go.

  • The version of the current cluster instance is 4.2.1.7 or later.

  • The current cluster instance is running.

  • You have the Project Admin role. For more information, see Role permissions.

Procedure

  1. Create a standby instance.

    1. Log in to the OceanBase Cloud console.

    2. In the left-side navigation pane, click Instances.

    3. In the instance list, find the target cluster instance and click Create Standby Instance in the Actions column.

      Note

      You can create at most two standby instances for a primary cluster instance.

    4. On the order page, set the purchase parameters for the standby instance.

      Parameter Description
      Instance Type The instance type must be the same as that of the primary instance. This feature is available only for transactional cluster instances.
      Payment Method The payment method must be the same as that of the primary instance. This feature is available only for pay-as-you-go cluster instances.
      Cloud Vendor The cloud vendor must be the same as that of the primary instance.
      Region The region where the standby instance is located. This can be different from the region of the primary instance.
      Version The version must be the same as that of the primary instance.
      Deployment Mode The deployment mode must be the same as that of the primary instance.
      Zone The zone where the standby instance is located. This can be different from the zone of the primary instance.
      Compute The compute specification of the standby instance can be different from that of the primary instance. The minimum node specification of the standby instance cannot be less than 0.33 times the node specification of the primary instance.
      Storage The storage specification of the standby instance cannot be less than that of the primary instance.
      Quantity You can create only one standby instance at a time.
    5. In the Summary section, verify that the parameters and quantity are set correctly. Then read the service agreement and place the order.

      Note

      • Backup costs are charged based on the backup space used. For more information, see Backup and Restore Billing.
      • During the feature promotion period, cross-region traffic is temporarily free of charge.

    6. After the payment succeeds, you can view the basic information of the standby instance, such as the instance type, series, zone, storage space, payment method, tags, and status, on the instance list page.

  2. Create a private endpoint for the tenant in both the primary and standby instances. The required operations vary by cloud provider. For more information, see:

    • Connect to a database by using AWS PrivateLink

    • Connect to a database by using Huawei Cloud VPC endpoint

    • Connect to a database by using Alibaba Cloud VPC

  3. Set automatic routing for write requests to the standby instance.

    1. In the instance list, find the standby instance of the target cluster, click the cluster name, and go to the instance overview page.

    2. In the left-side navigation pane, click Tenants.

    3. In the tenant list, select the target tenant, click the tenant name, and go to the tenant overview page.

    4. On the right side of the deployment relationship diagram on the tenant overview page, find the Automatic Write Forwarding switch and turn the feature on or off.

      Note

      This feature must be enabled or disabled on the standby instance. On the tenant overview page of the primary instance, you can view the feature status, but you cannot change it there. After the feature is enabled, write requests sent to the standby instance are automatically routed to the primary instance, while read requests continue to be served by the standby instance.

    5. In the dialog box that appears, click OK to enable automatic write forwarding.
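The forwarding behavior described in the note above, where the standby serves reads itself and passes writes through to the primary, can be sketched as a toy model. The classes below are illustrative stand-ins, not OceanBase internals.

```python
# Toy model of automatic write forwarding: the standby serves reads
# locally and forwards writes to the primary. Illustrative only.

class Primary:
    def __init__(self):
        self.rows = {}

    def execute(self, op, key, value=None):
        if op == "write":
            self.rows[key] = value
            return "ok"
        return self.rows.get(key)


class Standby:
    def __init__(self, primary):
        self.primary = primary
        self.rows = {}          # replicated copy (may lag behind)
        self.forwarding = True  # the Automatic Write Forwarding switch

    def execute(self, op, key, value=None):
        if op == "write":
            if not self.forwarding:
                raise PermissionError("standby is read-only")
            return self.primary.execute("write", key, value)  # forward
        return self.rows.get(key)  # reads stay on the standby


p = Primary()
s = Standby(p)
s.execute("write", "k", "v")  # transparently forwarded to the primary
print(p.rows)                 # {'k': 'v'}
```

With the switch off, a write sent to the standby fails instead of being forwarded, which matches the behavior applications should expect when the feature is disabled.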

Subsequent actions

You can view the read/write mode in the deployment relationship diagram of the tenant workspace.

Summary

You can select a read/write splitting solution based on your business needs.

Read/write splitting solution Advantages Scenarios
Read/write splitting by adding access endpoints
  Advantages:
  • Easy to use: you configure different access endpoints to implement read/write splitting.
  • Supports request routing strategies, such as routing based on replica type or availability-zone preference.
  • Read requests are routed preferentially to read-only or standby replicas, reducing the read load on the primary database.
  • Low cost, with no additional charges.
  Scenarios:
  • Small and medium-sized systems.
  • Scenarios where strict data consistency is not required (such as log queries and report generation).
  • Read-heavy workloads, such as e-commerce systems where reads of product information far exceed writes of orders.
Read/write splitting by using read-only replicas in a single instance
  Advantages:
  • Implements read/write splitting within a single instance, isolating analytical processing (AP) and transaction processing (TP) workloads to avoid interference.
  • Supports HTAP scenarios, where transaction processing and analytical processing coexist.
  • Significantly improves query performance for complex analytical tasks without affecting the high-concurrency transaction performance of the primary database.
  Scenarios:
  • Scenarios that require both transactional operations (TP, such as order updates) and analytical operations (AP, such as sales statistics).
  • Scenarios with complex queries and relaxed read-consistency requirements for analytical workloads.
Read/write splitting by using primary and standby instances
  Advantages:
  • Implements read/write splitting between primary and standby instances.
  • Supports cross-region deployment, enhancing system disaster recovery capabilities.
  • Automatic write-request routing can be enabled, so write requests are forwarded to the primary instance while read requests are served by the standby instance, simplifying application configuration.
  Scenarios:
  • Cross-region disaster recovery and multi-region, multi-backup scenarios.
  • Scenarios that require disaster recovery backup plus local read optimization, such as global business systems spanning regions.
