OceanBase Database

SQL - V4.4.2


    Overview of data load balancing

    Last Updated: 2026-05-11 09:03:52

    What is on this page
    Horizontal scaling
    Partition balancing
    Table group weight-based balancing
    Partition weight-based balancing
    Parameters related to data load balancing

    OceanBase Database provides horizontal scaling and dynamic data balancing capabilities for load balancing.

    Horizontal scaling refers to the ability to adjust the number of service nodes to scale the service capacity. For example, expanding from one service node to two increases the service capacity. Horizontal scaling also requires data redistribution: data is evenly redistributed across two service nodes when expanding from one to two, and moved back to a single service node when scaling down from two to one.

    Dynamic data balancing refers to the ability to adjust data distribution without changing the number of service nodes, so as to dynamically balance the load across service nodes. For example, as tables and partitions are created and deleted, the number of partitions on different service nodes can diverge significantly, leading to load imbalance. Dynamic partition balancing redistributes partitions evenly across service nodes to restore load balance.

    Horizontal scaling

    The storage capacity and read/write service capability of a tenant in OceanBase Database are mainly affected by the following two factors:

    • Unit Number, which refers to the number of units providing services in each zone.

      You can scale the number of service nodes by increasing or decreasing the unit number, thereby achieving horizontal scaling of the read/write service and storage capacity.

      For more information about how to scale a tenant by adjusting the unit number, see Scale a tenant by adjusting the unit number.

    • Primary Zone, which refers to the list of zones providing read/write services.

      You can scale the number of zones providing read/write services by increasing or decreasing the number of first-priority primary zones, thereby achieving horizontal scaling of the read/write service across zones.

      For more information about how to scale a tenant by adjusting the primary zone, see Scale a tenant by adjusting the primary zone.

    By dynamically adjusting the unit number and primary zone, you can achieve horizontal scaling of the read/write service capability of a tenant within and across zones. The load balancing feature will automatically adjust the log streams and partition distribution based on the configured service capabilities.
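    As a sketch of the two knobs above, the following statements, run from the sys tenant, adjust the unit number and the primary zone of a hypothetical tenant tenant1 (tenant and zone names are placeholders; see the linked scaling guides for the authoritative procedures):

```sql
-- Run from the sys tenant; tenant and zone names below are placeholders.

-- Horizontal scaling within zones: grow the tenant to 2 units per zone.
ALTER RESOURCE TENANT tenant1 UNIT_NUM = 2;

-- Horizontal scaling across zones: make zone1 and zone2 both
-- first-priority primary zones, with zone3 at a lower priority.
ALTER TENANT tenant1 PRIMARY_ZONE = 'zone1,zone2;zone3';
```

    After either change, the load balancing feature adjusts log streams and partition distribution to match the new service capacity.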

    Partition balancing

    Partition balancing refers to dynamically adjusting partition distribution when tables and partitions change, so as to balance the number of partitions, partition weights, table group weights, and storage space across service nodes.

    OceanBase Database supports various table types, including non-partitioned tables, partitioned tables, and subpartitioned tables. Each type of table has its own balancing strategy. To describe the balancing effect more conveniently, OceanBase Database divides different table partitions into balancing groups. Within each balancing group, the number of partitions and storage space should be balanced. Balancing groups are independent of each other, and the system automatically adjusts the distribution between balancing groups. By default, OceanBase Database uses the following partition balancing strategy:

    • For partitioned tables: Each partitioned table is an independent balancing group, and all partitions of the table are distributed across service nodes.

    • For subpartitioned tables: All subpartitions under each partition form an independent balancing group, and all subpartitions under each partition are distributed across service nodes.

    • For non-partitioned tables: All non-partitioned tables are considered as a single balancing group, and all non-partitioned tables are distributed across service nodes.
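    For illustration, the following MySQL-mode DDL sketches the three table types listed above (table and column names are hypothetical). Under the default strategy, t_part forms one balancing group, each partition of t_subpart contributes its own balancing group of subpartitions, and t_plain joins the shared balancing group of all non-partitioned tables:

```sql
-- Non-partitioned table: pooled with all other non-partitioned tables
-- into a single balancing group.
CREATE TABLE t_plain (id INT PRIMARY KEY, val VARCHAR(32));

-- Partitioned table: its 4 partitions form one balancing group and are
-- spread across service nodes.
CREATE TABLE t_part (id INT PRIMARY KEY, val VARCHAR(32))
    PARTITION BY HASH(id) PARTITIONS 4;

-- Subpartitioned table: the 8 subpartitions under each RANGE partition
-- form an independent balancing group.
CREATE TABLE t_subpart (id INT, ts BIGINT)
    PARTITION BY RANGE (ts)
    SUBPARTITION BY HASH (id) SUBPARTITIONS 8
    (PARTITION p0 VALUES LESS THAN (1000),
     PARTITION p1 VALUES LESS THAN (2000),
     PARTITION p2 VALUES LESS THAN MAXVALUE);
```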

    To more flexibly describe the clustering and distribution relationships between different table data, OceanBase Database introduces the concept of table groups.

    A table group is a logical concept that represents a collection of tables. Tables within a table group are physically stored close to each other. Related tables often share the same partitioning rules; by grouping tables with the same partitioning rules together, you can achieve partition-wise joins, which significantly optimizes read and write performance.

    In previous versions, OceanBase Database introduced the SHARDING attribute of a table group. Starting from OceanBase Database V4.4.2 BP1, the SCOPE attribute is also supported for table groups. The distribution of partitions in a table group is determined by both the SHARDING and SCOPE attributes. The SHARDING attribute determines the clustering method of corresponding partitions among tables in the table group. A group of clustered partitions is called a partition group, which is the smallest unit for distribution and dispersion. The SCOPE attribute determines the distribution range of all aggregated partition groups in the table group.

    The SHARDING attribute of a table group can be set to NONE, PARTITION, or ADAPTIVE. Specifically:

    • SHARDING = NONE: There are no restrictions on the partitioning method of tables in the table group, and partitions are not clustered.

      Notice

      In OceanBase Database V4.4.2 versions earlier than V4.4.2 BP1, partitions of tables in a table group with SHARDING = NONE are not dispersed and are all stored in the same log stream. Starting from V4.4.2 BP1, whether the partitions of tables in a table group with SHARDING = NONE are dispersed depends on the SCOPE attribute of the table group.

      • For a partitioned table, each partition is an independent partition group.

      • For a subpartitioned table, each partition–subpartition combination is an independent partition group.

      • For a non-partitioned table, the entire table is a partition group.

      • For a global index table in a table group with SHARDING = NONE: The global index table is automatically bound to the table group. A global non-partitioned index table forms a partition group by itself; for a global partitioned index table, each partition is an independent partition group.

    • SHARDING = PARTITION: The entire table group is a balancing group. Partitions that share the same partition index are clustered in the same partition group.

    • SHARDING = ADAPTIVE:

      • If all tables are partitioned tables, the entire table group is a balancing group. Partitions that share the same partition index are clustered in the same partition group.

      • If all tables are subpartitioned tables, partitions that share the same partition index form a balancing group. Within that balancing group, subpartitions that share the same subpartition index are clustered in the same partition group.

    The SCOPE attribute of a table group can be set to SERVER, ZONE, or CLUSTER. Specifically:

    • SCOPE = SERVER: The leaders of all partition groups in the table group are located on the same node.

    • SCOPE = ZONE: The leaders of all partition groups are located in the same zone and are dispersed across the nodes in that zone.

    • SCOPE = CLUSTER: The leaders of all partition groups are dispersed across the nodes in the cluster.

    The following list describes the partition distribution methods for each combination of the SHARDING and SCOPE attributes.

    • SHARDING = NONE:

      • SCOPE = SERVER: All partitions of tables in the table group are clustered on the same node. (This is equivalent to the distribution in previous versions when the SHARDING attribute is set to NONE.)

      • SCOPE = ZONE: All partitions of tables in the table group are evenly dispersed across nodes in the same zone.

      • SCOPE = CLUSTER: All partitions of tables in the table group are evenly dispersed across nodes in the cluster.

    • SHARDING = PARTITION:

      • SCOPE = SERVER or SCOPE = ZONE: Not supported.

      • SCOPE = CLUSTER: Partitions are clustered at partition granularity, and partition groups are randomly scattered. (This is equivalent to the distribution in previous versions when the SHARDING attribute is set to PARTITION.)

    • SHARDING = ADAPTIVE:

      • SCOPE = SERVER or SCOPE = ZONE: Not supported.

      • SCOPE = CLUSTER: For a partitioned table, partitions are clustered at partition granularity, and partition groups are scattered across the entire cluster. For a subpartitioned table, the subpartitions under each partition are clustered at subpartition granularity, and each partition group is scattered across the entire cluster. (This is equivalent to the distribution in previous versions when the SHARDING attribute is set to ADAPTIVE.)

    For more information about table groups and how to manage them, see Create and manage table groups (MySQL-compatible mode) and Create and manage table groups (Oracle-compatible mode).
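    As a sketch of how a table group drives clustering, the following MySQL-mode DDL (all names hypothetical; the SCOPE clause introduced in V4.4.2 BP1 is omitted, so defaults apply) creates a SHARDING = 'ADAPTIVE' table group and binds two identically partitioned tables to it, enabling partition-wise joins on order_id:

```sql
-- Hypothetical ADAPTIVE table group; corresponding partitions of member
-- tables are clustered into partition groups.
CREATE TABLEGROUP tg_orders SHARDING = 'ADAPTIVE';

-- Both tables use the same partitioning rule on order_id, so joins on
-- order_id can be executed partition-wise.
CREATE TABLE orders (
    order_id BIGINT PRIMARY KEY,
    user_id  BIGINT
) TABLEGROUP = tg_orders
  PARTITION BY HASH(order_id) PARTITIONS 8;

CREATE TABLE order_items (
    item_id  BIGINT,
    order_id BIGINT,
    PRIMARY KEY (item_id, order_id)
) TABLEGROUP = tg_orders
  PARTITION BY HASH(order_id) PARTITIONS 8;
```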

    Table group weight-based balancing

    In MySQL-compatible mode of OceanBase Database, user tables can be aggregated by binding a database to a table group with SHARDING = NONE. This feature supports automatic aggregation of user tables within the same database and across multiple databases.

    By setting weights for table groups with SHARDING = NONE, you enable table group weight balancing, so that tables in those table groups are distributed according to their weights. Note that in OceanBase Database V4.4.2 BP1 and later, you can set weights only for table groups with SCOPE = ZONE or SCOPE = SERVER.

    Table group weight balancing is a process within the partition balance (PARTITION_BALANCE) task. You can manually trigger it or wait for the scheduled partition balance task to trigger it.

    Partition weight-based balancing

    To further address partition hotspots within and across tables, OceanBase Database provides a partition weight-based balancing strategy. Partition weight refers to the relative proportion of resources such as CPU, memory, and disk occupied by different partitions. Partition weight balancing is a process within the partition balance (PARTITION_BALANCE) task. You can manually trigger it or wait for the scheduled partition balance task to trigger it.

    The partition weight balancing algorithm uses partition movement and partition exchange to minimize the variance of the sum of partition weights across different user log streams.

    Only partitions with weights set will participate in partition weight balancing. Unweighted partitions will still be balanced based on their count.
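    Both weight-based processes run inside the PARTITION_BALANCE task, which can be triggered manually with the DBMS_BALANCE package described later on this page. A minimal MySQL-mode invocation (assuming sufficient privileges in the user tenant; the exact calling convention differs in Oracle-compatible mode) might look like:

```sql
-- Trigger one round of partition balancing (including table group and
-- partition weight balancing) in the current user tenant.
CALL DBMS_BALANCE.TRIGGER_PARTITION_BALANCE();
```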

    Parameters related to data load balancing

    • enable_rebalance

      The tenant-level parameter enable_rebalance controls load balancing. When set in the sys tenant, it controls load balancing between tenants; when set in a user tenant, it controls load balancing within that tenant. The default value is true.

    • enable_transfer

      The tenant-level parameter enable_transfer controls whether Transfer is performed within a tenant. The default value is true. Specifically:

      • If enable_rebalance is false, automatic load balancing is not performed, regardless of the value of enable_transfer.

      • If enable_rebalance is true and enable_transfer is true, during tenant scaling the system automatically adjusts the number of log streams within the tenant through operations such as log stream splitting, merging, and Transfer, so as to balance the leaders and partitions within the tenant.

      • If enable_rebalance is true and enable_transfer is false, during tenant scaling the system neither performs Transfer nor changes the number of log streams; it only balances the existing log streams as much as possible.

    • partition_balance_schedule_interval

      The tenant-level parameter partition_balance_schedule_interval is used to control the time interval for generating partition load balancing tasks. When enable_rebalance is set to true, the system will automatically trigger partition load balancing tasks using partition_balance_schedule_interval as the time interval. The default value is 2h, and the valid range is [0s, +∞). A value of 0s indicates that partition balancing is disabled.

      Notice

      For V4.4.x versions, the default value of this parameter was changed to 0s starting from V4.4.1. For V4.4.1 and later versions, it is not recommended to use this parameter to control partition balancing. Instead, users can call the DBMS_BALANCE.TRIGGER_PARTITION_BALANCE procedure to trigger partition balancing manually or on a schedule.

    • enable_database_sharding_none

      The tenant-level parameter enable_database_sharding_none controls whether automatic aggregation of user tables is enabled when a database is created. The default value is false, indicating that automatic aggregation of user tables is disabled by default.

    • zone_disk_balance_tolerance_percentage

      The tenant-level parameter zone_disk_balance_tolerance_percentage is used to control the tolerance level of the disk balancing algorithm for table groups between zones. If the difference between the maximum and minimum disk usage across zones exceeds the maximum zone disk usage multiplied by this percentage, disk balancing between zones will be triggered. The default value is 10.
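    As a sketch, the parameters above can be inspected and set with standard parameter statements (run in the relevant tenant; the values shown are illustrative, not recommendations):

```sql
-- Check the current balancing-related settings.
SHOW PARAMETERS LIKE '%balance%';
SHOW PARAMETERS LIKE 'enable_transfer';

-- Enable automatic balancing and Transfer within the tenant.
ALTER SYSTEM SET enable_rebalance = true;
ALTER SYSTEM SET enable_transfer  = true;

-- Generate partition balancing tasks every 2 hours (0s disables them).
ALTER SYSTEM SET partition_balance_schedule_interval = '2h';

-- Allow 10% zone disk-usage skew before cross-zone disk balancing.
ALTER SYSTEM SET zone_disk_balance_tolerance_percentage = 10;
```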
