V4.4.2_CE
Version information
- Release date: March 5, 2026
- Version: V4.4.2_CE
- RPM version: oceanbase-ce-4.4.2.0-100000032026021017
Version overview
OceanBase Database V4.4.2 is a long-term support (LTS) release designed for hybrid workloads, combining the transactional processing strengths of V4.2.5 LTS with the analytical capabilities of V4.3.5 LTS. This version delivers comprehensive kernel improvements in data management, compatibility, security, and operational diagnostics, offering robust and stable support for both mission-critical transactional and complex analytical workloads.
For transactional processing (TP) scenarios, V4.4.2 introduces partition exchange between subpartitioned and regular partitioned tables, greatly simplifying large-scale data partition management and reorganization. It adds session-level private temporary tables, completing the temporary table system. OBCDC now supports incremental data synchronization after table-level restores and can synchronize virtual generated columns, better serving downstream systems that rely on table-level incremental data and generated columns.
For key-value (KV) scenarios, V4.4.2 continues to improve OBKV with new features such as weak reads, TTL support for index scans, and hot key query optimizations. OBKV-HBase metrics governance is now complete, delivering lower latency and enhanced observability for high-concurrency KV and time-series workloads.
In terms of performance, V4.4.2 brings targeted optimizations: serial table creation is about 60% faster. On large-scale domestic CPU platforms, PDML performance is up by roughly 25%, and follower query performance for replicated tables has improved 14-fold, nearly matching leader query speeds and significantly boosting horizontal scalability. Adaptive task splitting for PX has been introduced, automatically identifying and splitting long-tail tasks for better load balancing in parallel queries and faster completion times.
For security and diagnostics, this release enhances deadlock detection mechanisms, improves Active Session History (ASH) data integrity, and restructures SQLSTAT statistics, giving database administrators more precise and reliable diagnostic tools for pinpointing performance bottlenecks and problematic SQL statements.
V4.4.2 is an LTS release recommended for transactional, analytical, and hybrid transactional/analytical (HTAP) workloads.
Key features
Kernel enhancements
Partition exchange between partitioned and subpartitioned tables
The partition exchange feature now supports exchanging partitions between partitioned and subpartitioned tables.
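OceanBase's MySQL mode already provides `ALTER TABLE ... EXCHANGE PARTITION` for exchanging between tables at the same partition level. Assuming the extended feature reuses that syntax (table and partition names below are illustrative), swapping a first-level partition of a subpartitioned table with a matching partitioned table might look like:

```sql
-- Hypothetical sketch:
-- t_sub is RANGE-partitioned by month and LIST-subpartitioned by region;
-- t_part is a LIST-partitioned table whose definition matches one
-- first-level partition of t_sub.
ALTER TABLE t_sub EXCHANGE PARTITION p202603 WITH TABLE t_part;
```

After the exchange, the data previously in partition `p202603` resides in `t_part` and vice versa, without copying rows.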
OBCDC incremental data synchronization after table-level restore
OBCDC now supports allowlist and blocklist functionality. During data synchronization, only tables on the allowlist that are not on the blocklist are synchronized. Previously, the table-level restore process (an offline DDL operation) prevented OBCDC from obtaining table names, making incremental data synchronization impossible for restored tables. In this version, when table-level restore completes on an OBServer node, the restored table name is recorded in the __all_ddl_operation table. This enables OBCDC to identify restored tables and match them against the allowlist/blocklist, supporting incremental data synchronization.
Enhancements to materialized views
V4.4.2 continues to enhance the capabilities of materialized views, including:
- Enhanced DDL capabilities, supporting `RENAME` and adding columns to materialized view logs.
- Enhanced incremental refresh capabilities, supporting outer joins, `UNION ALL`, single-table non-aggregate mode, and aggregate materialized views with `LEFT JOIN`. Repeated `SELECT ITEM` and `GROUP BY` columns are also supported, and `MIN`/`MAX` aggregate materialized view incremental refresh capabilities are enhanced.
- Enhanced nested materialized view capabilities, supporting cascading refreshes.
- During materialized view refreshes, dimension tables are not refreshed. When creating a materialized view, you can use the new `AS OF PROCTIME()` syntax to specify tables that do not need to be refreshed; during incremental refreshes, the incremental data of these tables is not refreshed.
- Single-table aggregate incremental refresh materialized views now support the `MIN()`/`MAX()` aggregate functions.
- Automated management of materialized view log tables. When creating an incremental refresh materialized view, the system automatically creates or replaces the required materialized view log tables for the base table and periodically cleans up redundant materialized view log tables in the background.
- Support for the `md5_concat_ws` function.
- Optimized view content and error information for materialized view creation.
- Support for incremental materialized views without a primary key.
- Support for referencing views declared with `AS OF PROCTIME()`.
- Support for user-defined types (UDTs) and user-defined functions (UDFs), including the `minimal` mode.
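As an illustrative sketch of the new `AS OF PROCTIME()` clause (table and column names are invented, and the exact clause placement may differ), a fact table joined with a dimension table that incremental refreshes should skip might be declared as:

```sql
-- Hypothetical sketch: orders is the fact table; dim_region is a
-- dimension table whose incremental data should NOT be refreshed,
-- so it is declared with AS OF PROCTIME().
CREATE MATERIALIZED VIEW mv_order_summary
AS
SELECT o.region_id, r.region_name, SUM(o.amount) AS total_amount
FROM orders o
JOIN dim_region AS OF PROCTIME() r ON o.region_id = r.region_id
GROUP BY o.region_id, r.region_name;
```

During an incremental refresh, only changes from `orders` would be applied; `dim_region` is read as-is at refresh time.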
Enhancements to SQL capabilities
V4.4.2 enhances several SQL capabilities, including the ability to specify `WITH CHECK OPTION` when the view filter condition contains a subquery.

Session-level private temporary tables

The temporary table feature was disabled starting from V4.1.0 BP4. This version re-enables temporary table functionality, supporting table creation, modification, deletion, and indexing operations.
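A minimal MySQL-mode sketch of the re-enabled workflow (table and column names are illustrative):

```sql
-- Session-level temporary table: visible only to the current
-- session and dropped automatically when the session ends.
CREATE TEMPORARY TABLE tmp_stage (
  id BIGINT PRIMARY KEY,
  payload VARCHAR(256)
);
ALTER TABLE tmp_stage ADD INDEX idx_payload (payload);  -- indexing
ALTER TABLE tmp_stage ADD COLUMN created_at DATETIME;   -- modification
DROP TEMPORARY TABLE tmp_stage;                         -- deletion
```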
Syntax extensions
This release supports additional syntax, including `CAST AS INTEGER`, `CAST AS ARRAY`, `SPLIT`, `CONTAINS`, and `NEXTVAL(SEQ)`/`CURRVAL(SEQ)`. It also supports `ARRAY<>`-format array definitions, conversion of JSON columns to `TEXT`/`MEDIUMTEXT`/`LONGTEXT`, and the `INTO` keyword with `INSERT OVERWRITE`. Additional improvements include allowing SQL statements to end with `--` without errors and changing `RANGE` and `MAXVALUE` from reserved to non-reserved keywords.

Standby database read optimization and stability enhancements
V4.4.2 optimizes standby database read operations by improving transaction state inference. Instead of each replica independently collecting participant states, the leader coordinator now centrally collects and caches these states, significantly reducing message exchanges. A new retry mechanism automatically forwards requests to other replicas when one falls behind, preventing blocking and incorrect transaction state inference, thereby improving standby read stability.
Heterogeneous zone support
OceanBase Database 4.0 introduced homogeneous zone architecture, which simplified cluster deployment but removed some 3.x features. A key limitation was that single node failures required scaling down all zones, causing resource waste. This version introduces heterogeneous zone support to enhance scalability and reduce resource waste.
With heterogeneous zone deployment, tenants can configure different UNIT counts for each zone. When a single node fails, it can be removed directly without affecting other zones, reducing operational overhead. Single node failures also don't impact load balancing in other zones, minimizing business traffic disruption and enabling smoother scaling operations.
OBCDC virtual generated column synchronization
Starting from OceanBase Database 4.0, virtual generated columns are no longer recorded in clogs, preventing OBCDC synchronization. This caused data loss for downstream systems that depend on virtual generated columns. This version of OBCDC now supports virtual generated column synchronization, meeting downstream system data requirements.
SQL compatibility
Union Distinct RCTE
CTE now supports both Recursive Union All and Recursive Union Distinct. The new version adds support for Recursive Union Distinct to ensure the uniqueness of output data. At the same time, Recursive Union All has been enhanced to allow data spilling to disk when memory is insufficient.
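The difference between the two forms can be sketched with an illustrative edge-list table; `UNION DISTINCT` deduplicates rows across iterations, which also lets the recursion terminate on cyclic graphs:

```sql
-- Reachability over an edge list (edges(src, dst) is illustrative).
-- With UNION DISTINCT, already-seen nodes are not re-emitted, so the
-- result is unique and the recursion stops even if the graph has cycles.
WITH RECURSIVE reachable (node) AS (
  SELECT 1
  UNION DISTINCT
  SELECT e.dst
  FROM edges e
  JOIN reachable r ON e.src = r.node
)
SELECT * FROM reachable;
```

Replacing `UNION DISTINCT` with `UNION ALL` keeps duplicates; in that mode, the enhanced implementation can spill intermediate data to disk when memory is insufficient.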
OBKV/Multimodel enhancements
V4.4.2 enhances the capabilities of the OBKV/multimodel scenarios, mainly in the following aspects:
OBKV supports weak reads and reads from nearby IDCs
- Weak reads allow applications to read data from non-primary replicas (followers). These replicas may not have the latest data, so the data read may be slightly older than that on the primary replica.
- Reads from nearby IDCs allow clients to prioritize reading data from the nearest IDC based on their current geographical location or network topology, reducing network latency and improving response speed.
Time To Live (TTL) index scan support
Users can now specify indexes to accelerate TTL data cleanup. By default, TTL tasks perform full table scans and read complete rows to check expiration status. Creating local indexes on expiration columns reduces I/O overhead from full table scans and accelerates cleanup operations.
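Assuming the OBKV `TTL(...)` table option from earlier 4.x releases and an illustrative schema, pairing the expiration column with a local index that the TTL task can scan might look like:

```sql
-- Hypothetical sketch: rows expire 30 days after expire_ts.
-- The local index on expire_ts lets the TTL cleanup task locate
-- expired rows without a full table scan.
CREATE TABLE kv_cache (
  k VARBINARY(1024),
  v VARBINARY(4096),
  expire_ts TIMESTAMP,
  PRIMARY KEY (k),
  KEY idx_expire (expire_ts) LOCAL
) TTL (expire_ts + INTERVAL 30 DAY);
```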
Enhancements to OBKV-HBase monitoring
The monitoring capabilities of OBKV-HBase are enhanced, mainly in the following aspects:
- Distinguishing between HBase get and scan operations in QPS, RT, and P99 monitoring statistics, using independent monitoring items for tracking.
- In batch scenarios, distinguishing between single put and single delete operations to provide more granular monitoring statistics.
- In non-distributed environments, supporting audit tracking for HBase batch get operations.
- In the OBKV-HBase client, returning corresponding OPS, RT, and P99 data for operations.
Support for put operations in the time series model
Because the `V` column in the time series model is of the JSON type, and large object (LOB) types do not support the put operation, the time series put operation was previously implemented using the insert interface. The insert interface checks for primary key conflicts during execution, and stress testing has shown that these conflict checks consume a significant amount of read bandwidth: each check requires a get operation on the SSTable, resulting in high read latency. Furthermore, in random write scenarios, which represent most user use cases, the I/O bandwidth consumed by minor and major compactions is also considerable. Together, these factors quickly push cloud disk bandwidth to its limit, triggering write throttling. To resolve these issues, V4.4.2 introduces support for the put operation in the time series model.

Query optimization in OBKV-HBase

OBKV-HBase queries use a full-range scan strategy, scanning all qualifiers' data first and then filtering. This approach generates a large amount of redundant scanning and processing overhead on the server side, especially when a query on a wide row or a multi-version row involves only a few qualifiers, making the latency problem particularly evident. In V4.4.2, a new parameter, `HBASE_HTABLE_HOTKEY_GET_OPTIMIZE_ENABLE`, is introduced to enable quickly skipping the irrelevant parts of wide rows or multi-version rows to reduce query latency.
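Assuming this parameter is a standard OceanBase configuration item settable with `ALTER SYSTEM` (its exact scope and casing may differ), enabling the optimization might look like:

```sql
-- Hypothetical sketch: turn on the hot-key get optimization.
ALTER SYSTEM SET hbase_htable_hotkey_get_optimize_enable = TRUE;
```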
Performance improvements
Serial table creation performance optimization
While V4.2.x introduced parallel execution for DDL operations like CREATE and DROP, unsupported DDL operations cannot utilize parallel capabilities, necessitating serial DDL performance optimization. This feature targets serial table creation by optimizing schema generation, writing, and refreshing processes to significantly improve table creation speed and moderately enhance other DDL operation performance. Testing demonstrates that V4.4.2 performs 60% better than V4.3.5 when creating 1,000 simple sysbench tables.
Library function memory allocation optimization for high-frequency small objects
V4.4.2 optimizes library function memory allocation in high-frequency small-object scenarios using mimalloc memory allocator concepts. By implementing sharded memory block organization, delayed idle memory recycling, and lock-free mechanisms, the version significantly improves memory allocation performance, reduces allocation latency and resource contention, and provides efficient, stable memory management for large-scale, high-concurrency workloads.
Performance optimization for large-scale domestic CPU environments
V4.4.2 optimizes CPU hotspots in domestic CPU environments, focusing on PDML and replicated table query performance. In 256-core environments, high-concurrency PDML operations see 25% performance improvements while reducing memory consumption in non-PDML scenarios. Replicated table queries from followers improve 14-fold, increasing from 20,000 QPS to over 270,000 QPS—within 10% of the 300,000 QPS achieved when querying leaders directly. These optimizations also benefit 32-core environments.
Adaptive task splitting for PX
To address the issue of long-tail tasks in PX scans, V4.4.2 adaptively splits long-tail tasks to achieve load rebalancing and accelerate query speed.
Improvements in usability
Optimized deadlock detection diagnostics
To enhance the usability of deadlock detection, the new version introduces several diagnostic optimizations. When a deadlock is detected, the internal table now reports significantly richer information, and the mapping logic from locks to transactions has been improved. The system also supports deadlock detection in scenarios involving leader switchover. In addition, an issue with inaccurate maintenance of diagnostic information in dump scenarios has been fixed, improving the stability of diagnostics.
Real-time statistics collection
GV$OB_SQL_AUDIT and GV$SQL_PLAN_MONITOR are commonly used SQL monitoring views that record key information for each SQL request, including source, execution status, resource consumption, wait time, SQL text, execution plan, operator execution time, and output rows. However, in scenarios involving storage-layer information collection, such as reverse scanning, there were still gaps in the information. To further improve the usability of SQL diagnostics, V4.4.2 introduces real-time statistics collection from the storage layer and records the statistics in the monitoring views, including:
- the number of microblocks and rows accessed;
- the number of microblocks and rows pushed down;
- the time spent on the push-down path and the time spent without it;
- the number of microblocks and rows skipped by the index;
- the number of rows filtered and the time spent on filtering.

The collection of statistical information in monitoring views such as SQL audit has also been optimized to address issues of incompleteness and inaccuracy.
Enhanced ASH data integrity
In V4.2.5, ASH was restructured to track the status of tenant queue requests: each request writes a record to the ASH buffer every second. Currently, the ASH buffer is 30 MB, can store approximately 50,000 records, and checks every two minutes whether to persist data to disk. In scenarios with a large number of queued requests, however, the ASH buffer may be overwhelmed before it can persist the data, leading to the loss of important diagnostic data. In V4.4.2, the `__all_virtual_ash` table has been modified to include a weight column with a default value of 1. When the ASH buffer is at risk of filling up quickly (usually due to queue buildup), the system no longer writes an ASH record for every queued wait event; instead, it uses the weight to indicate the importance of the record, thereby preserving the integrity of diagnostic data.
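Assuming the weight column is also exposed through the ASH views (the column name here follows the `__all_virtual_ash` change and may differ in the views), wait-event analysis should sum weights rather than count rows, so that down-sampled queue events still contribute their full share. An illustrative query:

```sql
-- Hypothetical sketch: rank wait events by weighted sample count.
SELECT event, SUM(weight) AS weighted_samples
FROM GV$ACTIVE_SESSION_HISTORY
GROUP BY event
ORDER BY weighted_samples DESC;
```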
SQLSTAT restructuring
SQLSTAT records statistical information based on the SQL ID and plan hash. This information can be used to directly determine whether an SQL statement is problematic and to identify RT anomalies. However, the current version of SQLSTAT has some known issues:
- Plan cache shares the library cache, and excessive usage by SQLSTAT can prevent plans from being added to the plan cache.
- When updating SQLSTAT snapshots, only the leader node's data is updated, while the follower nodes' data snapshots remain unchanged, leading to data errors on the follower nodes.
- SQLSTAT persists data every hour, which cannot promptly reflect the execution status of SQL statements.

To address these issues, version 4.4.2 restructures SQLSTAT. The main changes include:
- Removing SQLSTAT from the library cache and maintaining a tenant hashmap, which is periodically snapshotted and purged.
- Accumulating data to the physical plan after each SQL execution and then to the tenant hashmap after reaching the cumulative threshold.
- Accumulating data from the physical plan to the tenant hashmap before querying the virtual table and then outputting the data.
Compatibility changes
View changes
The following views have been added or modified:
| View | Change type | Description |
|---|---|---|
| DBA_OB_LOB_CHECK_TASKS | New | Records the progress of LOB consistency checks and recovery task scans. You can query this view to obtain information about tenant-level tasks and LS-level tasks. |
| CDB_OB_LOB_CHECK_TASKS | New | Records the progress of LOB consistency checks and recovery task scans. |
| DBA_OB_LOB_CHECK_EXCEPTION_RESULT | New | Displays tables and tablets with exceptions in LOB consistency checks and the exception types. |
| CDB_OB_LOB_CHECK_EXCEPTION_RESULT | New | Displays tables and tablets with exceptions in LOB consistency checks and the exception types. |
| GV$ACTIVE_SESSION_HISTORY / V$ACTIVE_SESSION_HISTORY / GV$OB_ACTIVE_SESSION_HISTORY / V$OB_ACTIVE_SESSION_HISTORY | New columns | - |
| DBA_WR_SQLSTAT/CDB_WR_SQLSTAT | Implementation change | SAMPLE_TIME indicates the time when the SQLSTAT snapshot is written to the disk. In a single WR snapshot interval, the SQLSTAT snapshot may be written to the disk multiple times. |
| GV$OB_SQLSTAT | New columns | latest_active_time indicates the last time when the SQLSTAT was active. |
| DBA_OB_KV_TTL_TASKS | New columns | scan_index records the indexes scanned by TTL tasks. |
| DBA_OB_KV_TTL_TASK_HISTORY | New columns | scan_index records the indexes scanned by TTL tasks. |
| CDB_OB_KV_TTL_TASKS | New columns | scan_index records the indexes scanned by TTL tasks. |
| CDB_OB_KV_TTL_TASK_HISTORY | New columns | scan_index records the indexes scanned by TTL tasks. |
System variable changes
| System variable | Change type | Description |
|---|---|---|
| json_float_full_precision | New | - |
Function changes
| Function | Change type | Description |
|---|---|---|
| DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS() | New parameter | - |
Syntax changes
| Syntax | Description |
|---|---|
| WITH CHECK OPTION with subqueries | Compatible with WITH CHECK OPTION when the view filter condition contains a subquery. |
Recommended versions of components and tools
The following table lists the recommended versions of components and tools for OceanBase Database V4.4.2.
| Component | Version | Remarks |
|---|---|---|
| ODP | V4.3.6 | - |
| OCP | V4.4.1 | - |
| OBD | V4.2.0 | - |
| ODC | V4.4.1 BP1 | - |
| OBCDC | V4.4.2 | - |
| OMS | V4.2.12 | - |
| OBClient | V2.2.10 | - |
| LibobClient | V2.2.10 | - |
Upgrade notes
- Upgrade paths
- Upgrade from V4.2.5 BP7 or V4.4.1 to V4.4.2 is supported.
- Upgrade to V4.4.2 is not supported for clusters that use vector data.
- Upgrade from V4.3.0, V4.3.1, V4.3.2, V4.3.3, V4.3.4, V4.3.5.0, V4.3.5 BP1, or V4.3.5 BP2 to V4.4.2 is supported. However, this is supported only in POC and test environments and carries certain risks.
- Upgrade considerations
If you encounter the following error during pre-upgrade checks when upgrading to V4.4.2, the upgrade is currently not supported.
tenant has sys table with progressive_merge_round=1: tenant_id xxx table_ids 'xxx'
