OceanBase V4.4.2 is an LTS release for hybrid TP/AP workloads. It combines the transaction processing strength of V4.2.5 LTS with the analytical processing strength of V4.3.5 LTS and improves the kernel across data management, compatibility, security, and operations, giving you stronger, more stable support for both critical transaction and complex analytical workloads.
V4.4.2 is recommended for TP, AP, and HTAP workloads.
Key features
Kernel enhancements
Partition exchange between primary and secondary partition tables
You can exchange a primary partition of a secondary partition table with a primary partition table. See Partition exchange (MySQL mode) and Partition exchange (Oracle mode).
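As a sketch of what this looks like in practice (table, column, and partition names here are made up, and the statement follows the MySQL-style `EXCHANGE PARTITION` syntax; see the linked partition exchange docs for the exact OceanBase form):

```sql
-- Hypothetical example: orders is a RANGE-HASH subpartitioned table;
-- orders_2024 is a HASH-partitioned table with a matching shape.
CREATE TABLE orders (
  order_id BIGINT,
  created  DATE,
  cust_id  BIGINT
)
PARTITION BY RANGE COLUMNS (created)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 4
(PARTITION p2024 VALUES LESS THAN ('2025-01-01'),
 PARTITION p2025 VALUES LESS THAN ('2026-01-01'));

CREATE TABLE orders_2024 (
  order_id BIGINT,
  created  DATE,
  cust_id  BIGINT
)
PARTITION BY HASH (cust_id) PARTITIONS 4;

-- Swap the primary partition p2024 (with its subpartitions)
-- against the standalone partitioned table.
ALTER TABLE orders EXCHANGE PARTITION p2024 WITH TABLE orders_2024;
```

This is useful for bulk-loading or archiving a whole time range: you prepare the data in the standalone table, then swap it in as a metadata-only operation.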
OBCDC: incremental sync after table-level restore
OBCDC supports whitelist/blacklist filtering so that only tables on the whitelist and not on the blacklist are synced. In the new version, at the end of the OBServer table-level restore flow, tables are matched against the whitelist and blacklist by name, so incremental sync is supported after restore.
Materialized view improvements
V4.4.2 continues to enhance materialized views, including:
- Enhanced DDL capabilities, supporting `RENAME` and adding columns to materialized view logs.
- Enhanced incremental refresh capabilities, supporting outer joins, `UNION ALL`, non-aggregated single-table mode, and aggregated materialized views with `LEFT JOIN`. Repeated `SELECT` items and `GROUP BY` columns are also supported, and `MIN`/`MAX` aggregated materialized view incremental refresh is enhanced.
- Enhanced nested materialized view capabilities, supporting cascading refreshes.
- Dimension tables are not refreshed during materialized view refreshes. When creating a materialized view, you can use the new `AS OF PROCTIME()` syntax to specify tables that do not need to be refreshed; their incremental data is skipped during incremental refreshes.
- Support for `MIN()`/`MAX()` aggregate functions in single-table aggregated incremental refresh materialized views.
- Automated management of MLOG tables: when creating an incremental refresh materialized view, the system automatically creates or replaces the required MLOG tables for the base tables and periodically cleans up redundant MLOG tables in the background.
- Support for the `md5_concat_ws` function.
- Optimized view content and error messages for materialized view creation.
- Support for incremental materialized views without a primary key.
- Support for referencing views declared with `AS OF PROCTIME()`.
- Support for user-defined types (UDTs) and user-defined functions (UDFs) in materialized views, including `minimal` mode.
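To make the `AS OF PROCTIME()` behavior concrete, a hedged sketch (table names are invented, and the exact placement of the refresh options and the `AS OF PROCTIME()` clause may differ by version; check the materialized view reference):

```sql
-- Hypothetical schema: a fact table joined to a slowly-changing
-- dimension table that should not participate in incremental refresh.
CREATE MATERIALIZED VIEW mv_sales
REFRESH FAST
AS
SELECT s.sale_id, s.amount, d.region
FROM sales s
JOIN dim_store AS OF PROCTIME() d   -- dim_store's changes are not tracked
  ON s.store_id = d.store_id;
```

During an incremental refresh, only `sales` deltas are applied; rows are joined against the current contents of `dim_store` rather than replaying its changes.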
Session-level private temporary tables
Temporary tables were disabled in MySQL-compatible mode from V4.1.0 BP4. V4.4.2 re-enables them in MySQL-compatible mode: you can create, alter, and drop temporary tables and create indexes on them.
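A minimal sketch of the re-enabled MySQL-mode workflow (names are illustrative):

```sql
-- Session temporary table: visible only to the current session
-- and dropped automatically when the session ends.
CREATE TEMPORARY TABLE tmp_order_ids (
  order_id BIGINT,
  note     VARCHAR(64)
);

INSERT INTO tmp_order_ids (order_id) VALUES (1001), (1002);

-- V4.4.2 also allows index creation and ALTER on temporary tables.
CREATE INDEX idx_tmp_order ON tmp_order_ids (order_id);
ALTER TABLE tmp_order_ids ADD COLUMN created DATETIME;

DROP TEMPORARY TABLE tmp_order_ids;
```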
Heterogeneous zones
Zones can now be deployed with different unit counts, so a single tenant can span zones whose unit counts differ. When a node fails, you can remove just that node without resizing other zones, which simplifies operations and reduces the impact on traffic during scale-in and scale-out. See Smooth scaling example.
OBCDC: sync virtual generated columns
From OceanBase 4.0, virtual generated columns are no longer stored in the CLOG, so OBCDC could not sync them, and downstream systems that depend on those columns would see missing data. V4.4.2 adds support for syncing virtual generated columns so downstream systems can consume virtual column data.
Oracle compatibility
INTERVAL partitioning
Oracle-compatible mode now supports INTERVAL partitioning: when inserted data falls outside existing partition ranges, new partitions are created automatically, reducing partition management overhead. See Partition overview.
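A brief sketch in Oracle-compatible mode (table and partition names are made up; the syntax follows the standard Oracle `INTERVAL` clause):

```sql
-- Monthly INTERVAL partitioning: only the first partition is declared.
CREATE TABLE orders (
  order_id NUMBER,
  created  DATE
)
PARTITION BY RANGE (created)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p0 VALUES LESS THAN (DATE '2025-01-01'));

-- A row beyond the highest existing range triggers automatic
-- creation of the covering partition.
INSERT INTO orders VALUES (1, DATE '2025-03-15');
```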
OBKV and multi-model
OBKV: weak read and read-from-nearby-IDC
- Weak read: Read from follower replicas (data may be slightly behind the leader).
- Read from nearby IDC: Clients prefer the nearest IDC by location or network topology to reduce latency and improve response time.
Performance
Faster serial table creation
To improve DDL performance, V4.2.x introduced parallel execution for DDL operations such as CREATE and DROP. However, statements that parallel DDL does not support still run serially, so serial DDL performance remains important. This feature targets serial table creation, optimizing schema generation, writing, and refresh. It significantly speeds up serial table creation and slightly improves other DDL operations. In testing, creating 1,000 sysbench tables was 60% faster in V4.4.2 than in V4.3.5.
Large-scale domestic hardware
Optimizations target CPU hotspots in typical workloads on domestic CPU environments, including PDML and replication-table queries. On 256-core systems, high-concurrency PDML is about 25% faster and memory use in non-PDML scenarios is reduced; replication-table queries from followers go from ~20k QPS to 270k+ QPS (within 10% of leader query throughput at ~300k QPS). Benefits also apply on smaller (e.g. 32-core) systems.
PX adaptive task splitting
Long-tail tasks in PX scans are adaptively split for better load balance and faster queries. See Distributed execution and parallel queries.
Security
Application-transparent column encryption
V4.4.2 supports application-transparent column-level data protection. DBAs can define protection rules on table columns and use DDL and privileges to create, alter, or drop these rules. A SELECT on a protected column without the PLAINACCESS privilege returns the encrypted value; DML without it fails with a permission error.
Usability improvements
Enhanced ASH data integrity
V4.2.5 reworked ASH so it can track tenant queue request state: each request writes one record per second to a 30 MB ASH buffer (about 50,000 records), and every two minutes the system decides whether to flush to disk. Under a heavy queue backlog, the buffer could fill and wrap before the flush, overwriting data and losing important diagnostic events. V4.4.2 adds a `weight` column (default 1) to `__all_virtual_ash`. When the buffer is at risk of filling quickly (for example, due to backlog), instead of writing one ASH record per backlog wait event, a single record uses `weight` to represent multiple events, improving diagnostic completeness.
SQLSTAT rework
SQLSTAT collects SQL execution statistics to help find RT anomalies. Previous issues included high lib cache usage limiting plan cache, follower data being wrong because only the leader snapshot was updated, and delayed persistence. V4.4.2 reworks SQLSTAT: statistics are kept in a tenant-level hashmap with periodic snapshots and eviction; they are accumulated at the physical plan layer and then aggregated into the hashmap, and virtual table queries trigger sync, improving accuracy and timeliness.
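As an indicative diagnostic query (column names follow the `GV$OB_SQLSTAT` pattern in 4.x releases but should be checked against your version's reference; querying the virtual table triggers a sync of the in-memory tenant-level statistics):

```sql
-- Top 10 SQL_IDs by average response time, computed from
-- cumulative elapsed time and execution counts.
SELECT sql_id,
       executions_total,
       elapsed_time_total / NULLIF(executions_total, 0) AS avg_rt_us
FROM GV$OB_SQLSTAT
ORDER BY avg_rt_us DESC
LIMIT 10;
```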