V4.2.0 BP1
Version information
- Release date: September 14, 2023
- Version: V4.2.0 BP1
- RPM version: oceanbase-4.2.0.0-101000042023091319
Bug fixes
- Fixed the issue where, in Oracle mode, error 4013 may occur during SQL execution when the RS tenant and a regular tenant are not on the same server and memory allocation using the tenant ID is attempted on the RS server.
- Fixed a correctness issue caused by the parallel execution of an index skip scan plan when the full index suffix condition is not specified. For example, the index key is `c1, c2, c3`, but only `c2` is specified and `c3` is not.
- Fixed the issue where explicitly setting `_optimizer_better_inlist_costing=true` for a specific tenant may cause a core dump or other abnormal behavior after a restart.
- Fixed the issue where, after an upgrade from V4.1.x to V4.2.x, incorrect maintenance of `max_ls_id` leads to prolonged execution times for business operations.
V4.2.0 Beta
Version information
- Release date: August 2, 2023
- Version: V4.2.0 Beta
- Description: The Beta version resolves most known issues and is increasingly stable. However, minor issues or errors may remain to be addressed in the final stable release, so we recommend that you use this version only in a testing environment.
Overview
OceanBase Database V4.2.0 further enhanced the core features based on V4.1.0 and included all major functionalities of the V3.x series. It further improved product scalability, resource utilization, performance, compatibility, and ease of use. V4.2.x will serve as a long-term support (LTS) version.
Core features of V4.2.0 include:
MySQL compatibility
- Function index: OceanBase Database already supports function-based indexes in earlier versions in Oracle mode. The MySQL mode of V4.2.0 now supports MySQL 8.0-compatible function-based indexes, further enhancing feature stability.
- OBCDC: The new version now supports the output of row data according to the column definition sequence, meeting the compatibility requirements of MySQL Binlog Service. OBCDC no longer depends on the system tenant, but supports data synchronization with user tenants. This means that user tenants can use OBCDC on their own, without relying on the system tenant. V4.2.0 also supports data synchronization for tables without primary keys.
Oracle compatibility
- LOCK TABLE: OceanBase Database provides LOCK TABLE syntaxes for Oracle tenants in V4.1.0 and earlier versions. However, many limitations on using the syntaxes exist. OceanBase Database V4.2.0 provides higher compatibility with LOCK TABLE syntaxes and enhances the features that are unavailable in earlier versions. Specifically, you can lock multiple tables, partitions, and subpartitions, and make the WAIT N and NOWAIT keywords take effect.
Multi-model support
- Oracle XML: The XMLType data type is supported in Oracle mode. Nine XML expressions and basic XML creation, query, and modification capabilities are provided.
Performance improvements
- Enhancement of optimizer/executor/procedural language (PL) capabilities: Enhanced the query rewriting and optimization capabilities, with a focus on improving the performance of executor operators. Optimized the plan cache and parallel execution logic for PL and supported SQL statements that return complex data types.
- Statistics functionality enhancement: Enhanced the stability and availability of the statistics functionality, added the monitoring and diagnostic functionality for statistics collection, and optimized the performance and memory overhead of statistics collection.
- Dynamic sampling: Supports dynamic sampling, allowing optimizers to collect statistical information during SQL execution and generate optimal execution plans. This helps to improve query performance.
- Query plan evolution: Supports query plan evolution to avoid performance regression of execution plans.
- Runtime Filter: Provided full support for Runtime Filter, improving the analytical processing (AP) performance.
- Performance optimization under 4 CPU cores (4C): Optimized the performance in various OLTP scenarios by over 20% in 4C environments with default parameter settings.
- Performance optimization of reading transaction data: Optimized the performance of querying transaction tables during the minor compaction of uncommitted data.
- Parallel minor compaction of transaction data tables: OceanBase Database V4.1.0 already supports parallel MINOR_MERGE (combining multiple mini SSTables into a minor SSTable). V4.2.0 additionally supports parallel MINI_MERGE for transaction data tables (freezing the MemTable and converting it into a mini SSTable through minor compaction).
- Fast generation of random data: Introduced the built-in function `GENERATOR(N)`. This function can be used with random functions such as `RANDOM([N])` and `RANDSTR(N, gen)` and distribution control functions such as `NORMAL(<mean>, <stddev>, <gen>)`, `UNIFORM(<min>, <max>, <gen>)`, and `ZIPF(<s>, <N>, <gen>)` to generate random data quickly. Compared with Oracle Connect By and MySQL Recursive CTE, OceanBase Table Generator has significant performance advantages.
OLAP capabilities
- Read-only external tables: Supports the creation and reading of read-only external tables when performing online analytical processing (OLAP) operations, thereby reducing costs of loading or inserting data into databases.
Scalability enhancement
- Transfer and load balancing: Uses the transfer technology to split and merge log streams, supporting data migration across nodes on a per-partition basis. This provides flexible scaling and dynamic data balancing capabilities.
- Replicated tables: Reconstructed replicated tables based on the single-node log stream architecture. The new version introduced partition-based readable version number verification and log stream-based lease-granting mechanism to ensure the correctness of strong consistency reads. Additionally, this version improved the ability to switch leaders without killing transactions. This means that uncommitted replicated table transactions can continue to execute after a leader switch initiated by users or load balancing. Compared with V3.x, V4.2.0 has better write transaction performance, stronger disaster recovery capability for replicated tables, and lower impact on read operations in case of replica failure.
High availability enhancement
- Physical standby database: OceanBase Database V4.1.0 introduced an archive-based primary/standby solution, offering flexible configuration without requiring network connection between primary and standby tenants. However, many scenarios still require a primary/standby solution based on direct network connection to reduce storage and network costs, and to simplify backup and restore operations. V4.2.0 extended support in these scenarios.
- Log write throttling: When the speed of committing logs to the disk is greater than the minor compaction speed, the log disk will become full. In V4.2.0, when the log disk usage reaches a specified threshold, the speed of committing logs to the disk is limited, to minimize the occurrences that the log disk becomes full because of the high write pressure in the cluster. For more information, see the `log_disk_throttling_percentage` and `log_disk_throttling_maximum_duration` parameters.
Resource utilization optimization
- Idle CPU optimization: Made optimizations for high-frequency, time-consuming SQL statements and background threads to reduce the CPU overhead of internal SQL execution and periodic execution of background threads. According to a test conducted on a single tenant in an idle environment (30 tables and a million rows of data per table), the default CPU overhead is around 0.2C.
- On-demand expansion of disk data files: Supports progressive disk expansion, where an appropriate disk size is first pre-allocated and then automatically expanded based on actual usage.
- Shared memory identification mechanism: Added a defense mechanism that marks the code paths expected to use shared memory, preventing shared resource usage from growing unchecked as new code is merged.
Usability enhancement
- Auto degree of parallelism (DOP): Supports auto DOP to automatically calculate query parallelism, lowering the barrier to parallel execution.
- End-to-end diagnostics enhancement: OceanBase Database V4.1.0 introduced end-to-end diagnostic capabilities, and V4.2.0 added some trace logging information based on the end-to-end diagnostic framework. This version also provides interactive Show Trace capabilities based on transactions and SQL queries.
- Logical plan management: Made several improvements to make logical plans more user-friendly. These improvements include the ability to save EXPLAIN plan information, feedback on the execution of logical plans, and real-time plans for running SQL queries. Additionally, rich interfaces for the DBMS_XPLAN package were also provided to display plans for specific queries, baseline plans, and real-time session plans in a formatted manner.
- Background thread statistics: Versions earlier than V4.2.0 tended to record front-end thread diagnostic information from a client perspective. However, in OceanBase Database, background threads constitute the vast majority. Therefore, it is necessary to learn about the state of background threads, both for troubleshooting issues and diagnosing performance. V4.2.0 adds the `[G]V$OB_THREAD` view to record the status information of all threads.
Security enhancement
- Backup and restore for encryption-enabled tenants: OceanBase Database V4.1.0 does not support backup and restore for tenants that have enabled encryption. OceanBase Database V4.2.0 adapts the transparent data encryption (TDE) capability to the new backup and restore mechanism. You can separately back up the master keys during the backup and specify the master key backup information during the restore. Features such as backup and restore and the primary/standby databases are also supported for encrypted tenants.
National standards
- GB18030-2022 character set: Supports the GB18030-2022 character set starting from OceanBase Database V4.2.0. The GB18030-2022 standard was released on July 19, 2022, and formally implemented on August 1, 2023. It supports more rare Chinese characters and minority languages and is a mandatory standard for information systems.
Product forms
- Standalone architecture-based primary/standby solution: OceanBase Database V4.1.0 introduced the standalone architecture, and V4.2.0 further expanded this architecture to include a primary/standby solution.
- Read-only replicas (R replicas): OceanBase Database V4.2.0 introduced read-only replicas to provide read-only capabilities, which are different from full-featured replicas.
Description of core features
MySQL compatibility
Function index: MySQL 8.0 introduced the function-based index feature to optimize query performance. Typically, if a function or expression is applied to a query column, a regular index cannot be used, and a function-based index is needed to improve query performance. A function-based index uses the return value of a function or expression as part of the index, thereby speeding up query execution. Starting from V4.2.0, OceanBase Database supports MySQL 8.0-compatible function-based indexes in MySQL tenants, and they can be created by using statements such as CREATE TABLE, CREATE INDEX, and ALTER TABLE. In addition, V4.2.0 also enhances the index extraction capability and can effectively use indexes in most scenarios where there is an implicit conversion outside the index expression, or a reference column of a prefix index has a LIKE predicate. For more information, see Create an index.
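A minimal sketch, assuming a hypothetical `orders` table (the double-parenthesis expression syntax follows MySQL 8.0 conventions):

```sql
-- Hypothetical table; the double parentheses mark an expression (function-based) index
CREATE TABLE orders (
  id         INT PRIMARY KEY,
  created_at DATETIME
);
CREATE INDEX idx_orders_month ON orders ((MONTH(created_at)));

-- A query filtering on the same expression can now use idx_orders_month:
SELECT COUNT(*) FROM orders WHERE MONTH(created_at) = 6;
```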
Oracle compatibility
LOCK TABLE: OceanBase Database provides LOCK TABLE syntaxes for Oracle tenants in V4.1.0 and earlier versions. You can lock a single table and specify the WAIT N and NOWAIT keywords. However, many limitations on using the syntaxes exist. For example, the LOCK TABLE statement can lock only a single table, but it cannot lock multiple tables or partitions. The WAIT N and NOWAIT keywords do not actually take effect: after you specify the WAIT N or NOWAIT keyword in a statement, the execution behavior remains unchanged, and the statement waits for a lock until the statement or transaction times out. OceanBase Database V4.2.0 and later provide higher compatibility with LOCK TABLE syntaxes and enhance the features that were unavailable in earlier versions. Specifically, you can lock multiple tables, partitions, and subpartitions, and make the WAIT N and NOWAIT keywords take effect. For more information, see LOCK TABLE.
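As a hedged illustration of the extended syntax (table and partition names are hypothetical):

```sql
-- Lock several tables at once; give up after waiting 5 seconds
LOCK TABLE t1, t2 IN EXCLUSIVE MODE WAIT 5;

-- Lock a single partition; fail immediately if the lock is held
LOCK TABLE t3 PARTITION (p0) IN ROW SHARE MODE NOWAIT;
```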
Multi-model support
Support for Oracle XML: XML is a standard self-explanatory text-based data exchange format. XML is widely needed in configuration description and data exchange scenarios. OceanBase Database V4.2.0 supports the following features of XML:
- XMLType: You can define data of the XML type and manipulate XML data by using PL or SQL statements.
- Basic functions: OceanBase Database V4.2.0 supports constructor functions such as XMLParse, XMLElement, XMLAttribute, XMLContent, and XMLAgg, and XPath-based query functions such as Extract and ExtractXML. You can call the XMLUpdate function to make incremental modifications on XML documents, such as modifying the data of an XML node. You can call the XMLSERIALIZE function to convert the XML data stored in a database into standard XML text.
- Storage: OceanBase Database V4.2.0 supports the native XML binary storage format, which is a query-friendly storage format. Compared with text-based storage, this format can avoid parsing XML documents and accelerate XML queries.
- Indexes: OceanBase Database V4.2.0 supports the creation of indexes in XML documents based on virtual generated columns.
For more information, see XMLType overview.
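The following Oracle-mode queries sketch basic construction and XPath extraction with the functions listed above (the literal values are illustrative):

```sql
-- Construct an XML element with an attribute
SELECT XMLELEMENT("emp", XMLATTRIBUTES('1' AS "id"), 'Alice') FROM DUAL;

-- Parse an XML document and extract a node with an XPath expression
SELECT EXTRACT(XMLPARSE(DOCUMENT '<a><b>1</b></a>'), '/a/b') FROM DUAL;
```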
Performance improvements
Statistics functionality enhancement
Starting from OceanBase Database V4.0.0, statistics collection is fully initiated and maintained by the SQL layer, and statistics are no longer collected during major compaction. However, due to the incomplete functionality of statistics collection, there are some issues with performance and memory consumption during collection, which further affects the generation of the SQL plan. Additionally, the lack of effective monitoring methods for statistics collection makes it difficult for users to determine the status and expiration of statistics collection on various data tables, resulting in operational difficulties.
V4.2.0 enhanced the stability and usability of statistics collection. Online statistics collection optimizes the data structure and collection performance, improving performance by around 10% for the INSERT INTO SELECT and LOAD DATA methods. Offline statistics collection optimizes the plan generation path and memory usage, improving performance by 10%-20% in the scenario where 64 concurrent offline statistics collection tasks are being performed on the partitioned table tpcds_100g.
In addition, V4.2.0 now supports statistics collection monitoring and diagnostics, with the addition of the dynamic performance view [G]V$OB_OPT_STAT_GATHER_MONITOR for querying the real-time status of statistics collection tasks, the dictionary view DBA_OB_TASK_OPT_STAT_GATHER_HISTORY for querying the execution status of history statistics collection tasks, and the dictionary view DBA_OB_TABLE_OPT_STAT_GATHER_HISTORY for querying the execution status of history collection tasks on tables.
Dynamic sampling
When executing SQL queries, the optimizer in OceanBase Database needs to collect statistical information on tables and indexes in order to choose the optimal execution plan. If the statistical information is inaccurate or incomplete, the chosen execution plan may not be optimal, leading to a decrease in query performance. Basic statistical information is usually obtained through automatic or manual collection. However, if the data distribution changes, statistical information is not collected, or complex SQL queries are encountered, existing statistical information may no longer be accurate.
OceanBase Database V4.2.0 introduced the dynamic sampling feature, which samples database objects in advance during the plan generation phase, estimates the number of rows through sampling, and generates better execution plans for the cost model. Currently, you can control the dynamic sampling feature through system variables, query hints, and system parameters and the sampling set size is limited by the degree of parallelism. For more information, see Perform dynamic sampling by using optimizer.
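For example, dynamic sampling can be controlled at the session level through the `optimizer_dynamic_sampling` variable described later in this topic, or per query through a hint (a sketch; `t1` is a hypothetical table, and the hint name follows the feature's documentation):

```sql
-- Enable level-1 dynamic sampling for the session
SET SESSION optimizer_dynamic_sampling = 1;

-- Or request it for a single query via a hint
SELECT /*+ DYNAMIC_SAMPLING(1) */ COUNT(*) FROM t1 WHERE c1 > 100;
```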
Runtime Filter
Since V3.1.0, OceanBase Database has supported Join Bloom Filter to quickly filter data during data scanning in Join operations. In V4.0.0, Partition Bloom Filter was introduced to dynamically filter partitions for scenarios where Join keys are partition columns or partition column prefixes. Furthermore, in V4.1.0, multiple Bloom Filters can be generated within adjacent data flow operators (DFOs) and within a single DFO. Filters that dynamically filter data during execution, such as In, Range, and Bloom Filter, are collectively referred to as Runtime Filter (RF). V4.2.0 provides full support for Runtime Filter, improving the AP processing performance. You can manually enable Runtime Filter by using px_join_filter or px_part_join_filter, and disable it by using no_px_join_filter or no_px_part_join_filter. Additionally, four system variables are provided to adjust the execution strategy of Runtime Filter if necessary. For more information, see Runtime Filter.
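A short sketch of the variable-based controls mentioned above (values follow the system variable descriptions later in this topic):

```sql
-- Enable all three Runtime Filter types for the session
SET runtime_filter_type = 'BLOOM_FILTER,RANGE,IN';

-- Raise the maximum time a join waits for its filter, in milliseconds
SET runtime_filter_wait_time_ms = 20;

-- An empty string turns the feature off entirely
SET runtime_filter_type = '';
```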
Performance optimization under 4C
OceanBase Database V4.2.0 improved Sysbench performance under 4C16G (4 CPU cores and 16 GB of memory) scenarios by over 20% with a series of optimizations. These optimizations include auto-increment ID optimization, New Sort adaptive optimization, Callback Allocator optimization in PDML scenarios, time function call optimization, optimization of obtaining snapshots of single log stream statements, optimization of obtaining snapshots at tenant level, and optimization of enable_trace_log and enable_sql_audit.
Fast generation of random data
Table functions can return a data table as a result. Based on this, V4.2.0 introduced the Table Generator feature, which includes the built-in function `generator(N)`. This function can be called in a table function as `table(generator(N))`. In addition, new random functions such as `RANDOM([N])` and `RANDSTR(N, gen)` and distribution control functions such as `NORMAL(<mean>, <stddev>, <gen>)`, `UNIFORM(<min>, <max>, <gen>)`, and `ZIPF(<s>, <N>, <gen>)` are added and can be used with Table Generator to generate data. Compared with Oracle Connect By and MySQL Recursive CTE, OceanBase Table Generator has significant performance advantages. For more information, see GENERATOR.
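For instance, the following query (a sketch based on the calling convention above) produces 10,000 rows of random data in one pass:

```sql
SELECT RANDOM()                  AS r64,  -- random 64-bit integer
       RANDSTR(8, RANDOM())      AS s8,   -- random 8-character string
       UNIFORM(1, 100, RANDOM()) AS u     -- uniformly distributed value in [1, 100]
FROM TABLE(GENERATOR(10000));             -- emit 10,000 rows
```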
OLAP capabilities
Read-only external tables
Tables in a database are stored in the database's storage space, while data of external tables is stored in an external storage service. External tables can be queried like regular tables, but are limited to read operations and do not support data manipulation language (DML) operations. Typically, when processing external data, data must be loaded or inserted into the database by using ETL tools. During this process, the database consumes resources such as storage space, disk I/O, and CPU. In contrast, external tables can directly read external data sources for query analysis without loading or inserting them into the database. For more information about the usage method and limitations, see Create external tables.
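A minimal sketch, assuming a CSV data source (the LOCATION path and FORMAT option names are illustrative placeholders; see the external table documentation for the exact syntax):

```sql
CREATE EXTERNAL TABLE ext_orders (
  id     INT,
  amount DECIMAL(10, 2)
)
LOCATION = '/data/orders/'                      -- external data directory (placeholder)
FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',');  -- file format options (assumed names)

-- Queried like a regular table, but read-only (no DML):
SELECT COUNT(*) FROM ext_orders;
```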
Scalability enhancement
Transfer and load balancing
OceanBase Database V4.0.0 underwent an architecture upgrade by introducing the concept of adaptive log streams to achieve an integrated architecture for standalone and distributed modes. This architecture provides more flexible scalability and supports dynamic transformation between standalone and distributed modes. However, the scalability was not fully improved in the released version V4.1.0, as the partitions and log streams were statically bound and therefore partition distribution could not be dynamically adjusted. This restricted the capabilities of automatic load balancing and deployment flexibility.
OceanBase Database V4.2.0 is the first version to fully implement the integrated architecture for standalone and distributed modes. It uses the transfer technology to split and merge log streams, supporting data migration across nodes on a per-partition basis. This truly achieved dynamic transformation between standalone and distributed modes, further improving the scalability.
The load balancing feature at the tenant level is mainly reflected in the following two aspects:
- Tenant-level horizontal scaling: Users can extend their read-write service capabilities within and across zones by dynamically adjusting the Unit Number (the number of units that provide services in each zone) and Primary Zone (the list of zones that provide read-write services). The load balancing module will adaptively adjust the distribution of log streams and partitions based on the configured service capabilities.
- Partition balancing: Partition distribution can be dynamically adjusted to achieve balance in the number of partitions and storage space on service nodes, even in the case of dynamic changes in tables and partitions.
For more information, see Load balancing.
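As a hedged sketch of tenant-level horizontal scaling (statements issued from the sys tenant; `t1` is a hypothetical tenant, and the exact DDL forms follow the tenant management documentation):

```sql
-- Add one unit per zone; the balancer adaptively redistributes log streams and partitions
ALTER RESOURCE TENANT t1 UNIT_NUM = 2;

-- Expand the read-write zone list; leaders spread across both zones
ALTER TENANT t1 PRIMARY_ZONE = 'zone1,zone2';
```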
Replicated tables
Replicated tables are a special type of table in OceanBase Database that can read the latest modifications from any "healthy" replica. For users with low write frequency and those that are more concerned about read operation latency and load balancing, replicated tables are a good choice. Replicated tables can sacrifice a small part of transaction commit performance while reading the latest data on any "healthy" follower. Here, "healthy" means that the follower has a smooth network connection and minimal replay progress gap with the leader.
In OceanBase Database V3.2.0, the replicated table feature was already supported, allowing a replica to be created on every server specified for a given tenant, with the leader and all follower replicas kept in strong synchronization through a full-sync policy. V4.2.0 further improved the replicated table feature, enhancing the ability to switch leaders without killing transactions. That is, when a user or load balancing initiates a leader switch, execution can continue after the switch without killing transactions. Additionally, the new version of replicated tables has better write transaction performance, stronger disaster recovery capability for replicated tables, and lower impact on read operations when a replica fails. For more information, see Create a table.
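A minimal sketch of creating a replicated table (the `DUPLICATE_SCOPE` option mirrors the view column of the same name listed later in this topic):

```sql
-- A small, read-mostly configuration table replicated across the tenant's servers
CREATE TABLE dup_conf (
  k INT PRIMARY KEY,
  v VARCHAR(64)
) DUPLICATE_SCOPE = 'cluster';
```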
High availability enhancement
Physical standby database
OceanBase Database V4.1.0 introduced a primary/standby solution based on third-party media archiving. This solution meets the basic requirements of primary and standby databases and is flexible in configuration, for example, not requiring network connectivity between primary and standby tenants. However, compared with the direct connectivity method, this approach has some disadvantages, such as increased storage and network costs, decreased performance and stability, increased maintenance difficulty, and usage limitations such as requiring archiving to be enabled for the standby database before a switchover.
Considering these factors, V4.2.0 enhanced physical standby databases with direct network connectivity, which shortened the log synchronization link between primary and standby databases and is more suitable for standalone and small to medium-sized users. Additionally, considering limitations in available bandwidth and potential instabilities in communication between standby databases, V4.2.0 also supports bandwidth throttling for standby databases. This helps avoid situations where excessive traffic from some standby databases affects the performance of other nodes when network bandwidth is limited. For more information about how to use the physical standby database, see Overview of physical standby database.
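A hedged sketch of creating a network-based standby tenant from the sys tenant (tenant names, address, and credentials are placeholders; the exact statement form follows the physical standby documentation):

```sql
CREATE STANDBY TENANT standby_t1
  LOG_RESTORE_SOURCE = 'SERVICE=10.0.0.1:2881;USER=repl_user@t1;PASSWORD=******';
```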
Resource utilization optimization
On-demand disk file expansion
OceanBase Database uses a strategy of pre-allocating disk space, which ensures that the database occupies a continuous block of disk space, avoiding resource shortages caused by other applications preempting the disk. Additionally, OceanBase Database allows customization of the file system based on this strategy to improve data access efficiency. However, in the case of a small amount of user data, there may be wasted disk space. V4.2.0 provides a new user configuration option for progressive disk expansion, which pre-allocates a reasonable disk size and automatically expands it based on the actual usage of the disk. For more information, see Configure dynamic expansion of disk data files.
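For example, progressive expansion can be configured roughly as follows (a sketch; `datafile_next` and `datafile_maxsize` are described in the parameter changes later in this topic, and `datafile_size` is the existing pre-allocation parameter):

```sql
ALTER SYSTEM SET datafile_size    = '100G';  -- initial pre-allocated size
ALTER SYSTEM SET datafile_next    = '10G';   -- step size for each automatic expansion
ALTER SYSTEM SET datafile_maxsize = '500G';  -- upper bound for automatic expansion
```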
Usability enhancement
Auto DOP
Parallel execution is a key capability for optimizing large data queries, and the optimizer generally uses DOP to describe the amount of parallel resources available. In previous versions, users had to determine whether to enable parallelism and adjust the DOP size with the consideration of execution conditions, response time requirements, and system resources. If needed, they had to modify the system variables and HINT based on their experience. V4.2.0 introduced the auto DOP feature while retaining the ability to manually adjust the DOP. This feature evaluates the time required to execute the query when generating a query plan, automatically determines whether to enable parallel execution, and determines the optimal DOP for the current query. The DOP selection policy can be controlled through the system variable parallel_degree_policy. For more information, see Auto DOP.
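A short sketch using the variables described in the system variable changes later in this topic:

```sql
-- Let the optimizer decide whether and how much to parallelize
SET SESSION parallel_degree_policy = 'AUTO';

-- Optionally cap the DOP the optimizer may choose
SET SESSION parallel_degree_limit = 16;
```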
End-to-end diagnostics enhancement
OceanBase Database V4.2.0 significantly improved its diagnostic capabilities, including the first implementation of visual end-to-end tracing for SQL requests. This feature helps users quickly locate and trace specific problems during a particular execution phase, and identify the server and module in which an issue occurred with the aid of detailed execution information. It also provides Show Trace capabilities based on transactions and SQL queries. Business departments are often more concerned about the overall time consumption of a business service. For example, in an OLTP system, a business service is typically composed of one or more transactions. Therefore, tracing at the transaction level aligns better with the user's actual business scenario.
Each transaction forms a trace, recording the relevant execution information of each SQL request in the entire internal process of OBClient > ODP > OBServer. Users can quickly identify which statements were executed in a transaction and view their execution details from the client's perspective. Show Trace also provides convenient business system association capabilities for users. Users can set the app trace id corresponding to the calling request on the business database connection through the JDBC interface or SQL interface. This app trace id will be recorded in the end-to-end tracing information and displayed in Show Trace. When users find errors in a request or service database call associated with a certain app trace id, they can use this app trace id to quickly search for the corresponding database trace associated with the app trace id in the end-to-end diagnostic system, and then directly view the time consumption and error points of the request in each phase of the database link to quickly determine the component that triggered the issue. For more information, see Show a trace.
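A minimal interactive sketch (assuming the session variable `ob_enable_show_trace` controls trace collection, as in the Show Trace documentation; the traced query and table are illustrative):

```sql
SET ob_enable_show_trace = 1;    -- turn on trace collection for this session
SELECT * FROM t1 WHERE c1 = 1;   -- run the statement to be traced (hypothetical table)
SHOW TRACE;                      -- display the end-to-end trace of the last request
```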
Logical plan management
V4.2.0 enhanced logical plan management with the following features:
- After a user obtains the plan information of a query through the EXPLAIN syntax, the user can query the history through `DBMS_XPLAN.DISPLAY` before the current session is disconnected. Alternatively, the user can save the information to a target table by using the EXPLAIN INTO syntax.
- Automatically saves the plans of all executed queries, including physical and logical plans, for subsequent troubleshooting. The database clears the saved query plans when the cluster is restarted.
- Displays the baseline plan of SQL plan management (SPM) by using `DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE`.
- Displays the plan information of executing queries by using `DBMS_XPLAN.DISPLAY_ACTIVE_SESSION_PLAN` and `SESSION_ID`.
For more information, see DISPLAY_ACTIVE_SESSION_PLAN.
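A hedged sketch of the first workflow in the list above (`plan_table_x` is a hypothetical target table):

```sql
-- Plan information stays queryable through DBMS_XPLAN.DISPLAY until the session ends
EXPLAIN SELECT * FROM t1 WHERE c1 = 1;

-- Or persist the plan information to a target table for later inspection
EXPLAIN INTO plan_table_x SELECT * FROM t1 WHERE c1 = 1;
```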
National standards
GB18030-2022 character set
The GB18030-2022 character set was released on July 19, 2022, and formally implemented on August 1, 2023, as a mandatory standard whose full text is enforced in information systems. Compared with the GB18030-2005 standard, the new standard supports more rare Chinese characters and minority languages. OceanBase Database V4.2.0 supports the new GB18030-2022 character set, along with the corresponding collations. The character set name is `GB18030_2022` in MySQL mode and `ZHS32GB18030_2022` in Oracle mode. For more information, see Character sets.
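For example, in MySQL mode the new character set can be specified at the table level (a sketch; the default collation is assumed):

```sql
CREATE TABLE t_gb (c1 VARCHAR(64)) CHARSET = gb18030_2022;
```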
Product forms
Standalone architecture-based primary/standby solution
OceanBase Database V4.1.0 introduced the standalone architecture, allowing users to deploy OceanBase Database on a single server. In V4.2.0, this architecture is further expanded to include a primary/standby solution. This allows users to deploy OceanBase Database on two servers, with one acting as the primary database and the other as the standby database. For more information, see the Physical standby database section of this topic.
Read-only replicas (R replicas)
Read-only replicas do not participate in the consensus protocol voting and cannot become leaders. They only replay logs generated by the leader and generate corresponding data locally. In other words, read-only replicas can only provide read services but not write services.
OceanBase Database V4.2.0 introduced read-only replicas, which can be used by replicated tables. That is, with broadcast log streams, read-only replicas can be deployed on OBServer nodes that do not need to participate in the voting, while full-featured replicas are deployed on OBServer nodes that need to participate in the voting. The purpose of doing so is to ensure that, ideally, replicated tables can provide strong consistency reads on any OBServer node.
Here is a comparison between read-only replicas and full-featured replicas:
| Replica type | Participates in election voting? | Participates in log voting? | SSTable | Has clog | Has MemTable |
|---|---|---|---|---|---|
| Full-featured replica | Yes | Yes | Yes | Yes | Yes |
| Read-only replica | No | No | Yes | Yes | Yes |
Performance test report
The tests aim to compare the performance of OceanBase Database V4.2 and V4.1 in different scenarios.
Test environment specifications
| Type | Specification |
|---|---|
| CPU architecture | x86_64 |
| ECS type | ecs.g7.8xlarge |
| Compute | 32 cores |
| Memory | 128 GB |
| Disk | 300 GB system disk, 2*400 GB cloud disks |
| Operating system | CentOS Linux release 7.9.2009 (Core) |
Version for test
| Product | Version information |
|---|---|
| OBServer (V4.2) | OBServer (OceanBase_CE 4.2.0.0) REVISION: 1-b274f1768fb31f46c38e691b2883324b01605b4b BUILD_TIME: Jul 19 2023 08:31:51 |
| ODP (V4.2) | ODP (OceanBase 4.2.0.0 3883.el7) REVISION: 3883-local-b6e947efa65fec48be04f5fb78d70019651503cf BUILD_TIME: Jul 20 2023 09:36:15 |
| OBServer (V4.1) | OBServer (OceanBase_CE 4.1.0.0) REVISION: 100000172023031416-6d4641c7bcd0462f0bf3faed011803c087ea1705 BUILD_TIME: Mar 14 2023 16:53:58 |
| ODP (V4.1) | ODP (OceanBase 4.1.0.0 1) REVISION: 5617-local-e6798c479feaab9f9a60b89f87e4df5e284250b6 BUILD_TIME: Mar 11 2023 21:42:11 |
Note
To improve user experience and usability, the following tests are based on a small number of basic commonly used parameter settings for tuning, without performing extensive large-scale tuning or point-to-point optimization.
OLTP test with Sysbench
Test plan
- Deploy an OceanBase cluster by using OBD. Deploy ODP and Sysbench on two separate servers, which will act as stress servers to avoid resource contention. Note that ODP is deployed on a separate server with 64 CPU cores and 128 GB of memory (server type: ecs.c7.16xlarge).
- Configure the OceanBase cluster. Make sure that the cluster is in the 1-1-1 architecture, with three zones and each zone having one OBServer node. After the cluster is deployed successfully, create the tenant and user required for running the Sysbench test. (The `sys` tenant is a built-in system tenant used for managing the cluster; do not use it to run the test.) In two separate tests, set the primary zone for the tenant to `RANDOM` and `Zone1`, respectively. `RANDOM` indicates that the leader of a newly created table partition is randomly assigned to one of the three servers, while `Zone1` indicates that the leaders of newly created table partitions are concentrated on one server in Zone1.
- Launch the Sysbench client and run the `point_select`, `read_write`, `read_only`, and `write_only` tests.
- Set `--time` to `60s` for each round of tests. The number of threads can be 32, 64, 128, 256, 512, or 1,024.
- Prepare the test data: 30 non-partitioned tables with 1 million rows of data each (`--tables=30`, `--table_size=1000000`).
Tenant specifications
- MAX_CPU = 26
- MEMORY_SIZE = 70g
Parameter tuning
ALTER system SET enable_sql_audit=false;
ALTER system SET enable_perf_event=false;
ALTER system SET syslog_level='PERF';
ALTER system SET enable_record_trace_log=false;
Test results
Point select performance
| Threads | V4.1 QPS [RANDOM] | V4.1 95% Latency (ms) [RANDOM] | V4.2 QPS [RANDOM] | V4.2 95% Latency (ms) [RANDOM] | V4.2 QPS [Zone1] | V4.2 95% Latency (ms) [Zone1] |
|---|---|---|---|---|---|---|
| 32 | 135146.05 | 0.27 | 138746.60 | 0.26 | 137423.12 | 0.27 |
| 64 | 252277.60 | 0.30 | 252231.37 | 0.29 | 246275.56 | 0.31 |
| 128 | 431564.13 | 0.36 | 447755.19 | 0.34 | 377949.26 | 0.50 |
| 256 | 686271.21 | 0.55 | 730315.66 | 0.48 | 461791.47 | 0.94 |
| 512 | 937428.74 | 0.95 | 1009966.93 | 0.90 | 535462.87 | 1.67 |
| 1024 | 985232.01 | 2.35 | 1012734.80 | 2.66 | 537828.40 | 3.89 |
Read-only performance
| Threads | V4.1 QPS [RANDOM] | V4.1 95% Latency (ms) [RANDOM] | V4.2 QPS [RANDOM] | V4.2 95% Latency (ms) [RANDOM] | V4.2 QPS [Zone1] | V4.2 95% Latency (ms) [Zone1] |
|---|---|---|---|---|---|---|
| 32 | 115232.45 | 4.74 | 121733.00 | 4.65 | 119054.75 | 4.74 |
| 64 | 214733.98 | 5.18 | 221563.16 | 5.09 | 208659.70 | 5.37 |
| 128 | 366616.03 | 6.09 | 392138.56 | 5.67 | 282377.72 | 8.43 |
| 256 | 553922.27 | 8.74 | 577951.13 | 8.58 | 316931.91 | 23.10 |
| 512 | 662368.79 | 15.83 | 763726.51 | 17.01 | 317668.32 | 42.61 |
| 1024 | 653733.97 | 54.83 | 740835.95 | 38.94 | 315535.24 | 80.03 |
Write-only performance
| Threads | V4.1 QPS [RANDOM] | V4.1 95% Latency (ms) [RANDOM] | V4.2 QPS [RANDOM] | V4.2 95% Latency (ms) [RANDOM] | V4.2 QPS [Zone1] | V4.2 95% Latency (ms) [Zone1] |
|---|---|---|---|---|---|---|
| 32 | 36840.88 | 5.37 | 43984.28 | 7.17 | 60851.38 | 4.10 |
| 64 | 62582.56 | 8.13 | 82554.92 | 6.55 | 110359.75 | 4.49 |
| 128 | 107054.05 | 9.56 | 114874.89 | 10.09 | 167415.76 | 5.99 |
| 256 | 155494.00 | 13.70 | 181982.10 | 12.52 | 209163.02 | 10.27 |
| 512 | 242458.53 | 17.32 | 253635.91 | 19.29 | 238568.50 | 18.28 |
| 1024 | 291343.40 | 31.94 | 292482.33 | 36.89 | 249176.87 | 37.56 |
Read/Write performance
| Threads | V4.1 QPS [RANDOM] | V4.1 95% Latency (ms) [RANDOM] | V4.2 QPS [RANDOM] | V4.2 95% Latency (ms) [RANDOM] | V4.2 QPS [Zone1] | V4.2 95% Latency (ms) [Zone1] |
|---|---|---|---|---|---|---|
| 32 | 66669.18 | 11.45 | 72554.47 | 11.87 | 92995.95 | 7.84 |
| 64 | 124559.80 | 12.52 | 139369.33 | 11.65 | 162191.70 | 9.22 |
| 128 | 209150.84 | 15.27 | 247061.25 | 12.30 | 219285.86 | 13.70 |
| 256 | 309329.85 | 20.00 | 313660.08 | 23.95 | 246672.33 | 25.74 |
| 512 | 395940.95 | 33.12 | 497734.89 | 25.74 | 270444.49 | 57.87 |
| 1024 | 454400.54 | 64.47 | 547816.87 | 54.83 | 256076.76 | 121.08 |
TPC-C benchmark test with BenchmarkSQL
Test plan
- Deploy an OceanBase cluster by using OBD. Deploy ODP and TPC-C on one server to avoid insufficient stress on the client.
- Configure the OceanBase cluster. Make sure that the cluster is in the 1-1-1 architecture, with three zones and each zone having one OBServer node. After the cluster is deployed successfully, create the tenant and user required for running the TPC-C benchmark test. (The `sys` tenant is a built-in system tenant used for managing the cluster; do not use it to run the test.) In two separate tests, set the primary zone for the tenant to `RANDOM` and `Zone1`, respectively. `RANDOM` indicates that the leader of a newly created table partition is randomly assigned to one of the three servers, while `Zone1` indicates that the leaders of newly created table partitions are concentrated on one server in Zone1.
Tenant specifications
- MAX_CPU = 26
- MEMORY_SIZE = 70g
Parameter tuning
#obproxy
ALTER proxyconfig SET proxy_mem_limited='4G';
ALTER proxyconfig set enable_compression_protocol=false;
#sys
ALTER system SET enable_sql_audit=false;
ALTER system SET enable_perf_event=false;
ALTER system SET syslog_level='PERF';
ALTER system SET enable_record_trace_log=false;
Software versions
- mysql-connector-java-5.1.47
- BenchmarkSQL V5.0
Test configuration
- warehouse = 1000
- loadWorkers = 40
- terminals = 800
- runMins = 5
- newOrderWeight = 45
- paymentWeight = 43
- orderStatusWeight = 4
- deliveryWeight = 4
- stockLevelWeight = 4
Test results
| Test type | V4.1 RANDOM | V4.2 RANDOM | V4.2 Zone1 |
|---|---|---|---|
| tpmC (NewOrders) | 293350.85 | 289711.96 | 139886.99 |
| tpmTOTAL | 652025.02 | 644025.66 | 310953.51 |
| Transaction Count | 3260125 | 3221995 | 1555658 |
TPC-H benchmark test
Test plan
- Deploy an OceanBase cluster by using OBD. Deploy TPC-H on a separate server as the stress machine. You do not need to deploy ODP, as you can directly connect to any server for testing.
- Configure the OceanBase cluster. Make sure that the cluster is in the 1-1-1 architecture, with three zones and each zone having one OBServer node. After the cluster is deployed successfully, create the tenant and user required for running the TPC-H test. (The `sys` tenant is a built-in system tenant used for managing the cluster; do not use it to run the test.) Set the primary zone of the tenant to `RANDOM`. Load the data first, then execute the 22 SQL queries multiple times and take the average.
- Prepare 100 GB of test data.
Tenant specifications
- MAX_CPU = 26
- MEMORY_SIZE = 70g
Parameter tuning
#sys
ALTER system flush plan cache GLOBAL;
ALTER system SET enable_sql_audit=false;
ALTER system SET enable_perf_event=false;
ALTER system SET syslog_level='PERF';
ALTER system SET enable_record_trace_log=false;
# Test tenant
SET GLOBAL ob_sql_work_area_percentage = 80;
SET GLOBAL ob_query_timeout = 36000000000;
SET GLOBAL ob_trx_timeout = 36000000000;
SET GLOBAL max_allowed_packet = 67108864;
SET GLOBAL parallel_servers_target = 624;
Test results
| Query | V4.1 (s) | V4.2 (s) |
|---|---|---|
| Q1 | 2.18 | 2.24 |
| Q2 | 0.23 | 0.48 |
| Q3 | 1.48 | 1.49 |
| Q4 | 0.53 | 0.66 |
| Q5 | 0.98 | 0.95 |
| Q6 | 0.13 | 0.14 |
| Q7 | 1.46 | 1.35 |
| Q8 | 0.92 | 1.09 |
| Q9 | 2.82 | 4.46 |
| Q10 | 1.10 | 0.95 |
| Q11 | 0.17 | 0.19 |
| Q12 | 1.36 | 1.34 |
| Q13 | 1.89 | 1.86 |
| Q14 | 0.29 | 0.41 |
| Q15 | 0.88 | 0.88 |
| Q16 | 0.66 | 0.67 |
| Q17 | 1.62 | 1.57 |
| Q18 | 0.82 | 0.91 |
| Q19 | 0.61 | 0.64 |
| Q20 | 1.07 | 1.12 |
| Q21 | 2.54 | 2.52 |
| Q22 | 1.08 | 1.11 |
| Total | 24.82 | 27.03 |
Note
In OceanBase Database V4.2.0, an optimization path for Runtime Filter was introduced. However, the optimizer has not fully adjusted its filtering estimation for Runtime Filter. As a result, there may be estimation deviations in certain scenarios, which can lead to the selection of suboptimal execution plans and longer execution times. Improvements are expected to be implemented in V4.2.2.
TPC-DS benchmark test
Test plan
- Deploy an OceanBase cluster by using OBD. Deploy TPC-DS on a separate server as the stress machine. You do not need to deploy ODP, as you can directly connect to any server for testing.
- Configure the OceanBase cluster. Make sure that the cluster is in the 1-1-1 architecture, with three zones and each zone having one OBServer node. After the cluster is deployed successfully, create the tenant and user required for running the TPC-DS test. (The `sys` tenant is a built-in system tenant used for managing the cluster; do not use it to run the test.) Set the primary zone of the tenant to `RANDOM`. Load the data first, then execute the 99 SQL queries multiple times and take the average.
- Prepare 100 GB of test data.
Tenant specifications
- MAX_CPU = 26
- MEMORY_SIZE = 70g
Parameter tuning
#sys
ALTER system flush plan cache GLOBAL;
ALTER system SET enable_sql_audit=false;
ALTER system SET enable_perf_event=false;
ALTER system SET syslog_level='PERF';
ALTER system SET enable_record_trace_log=false;
# Test tenant
SET GLOBAL ob_sql_work_area_percentage = 80;
SET GLOBAL ob_query_timeout = 36000000000;
SET GLOBAL ob_trx_timeout = 36000000000;
SET GLOBAL max_allowed_packet = 67108864;
SET GLOBAL parallel_servers_target = 624;
Test results
| Query | V4.1 (s) | V4.2 (s) |
|---|---|---|
| Q1 | 0.63 | 0.64 |
| Q2 | 3.46 | 1.17 |
| Q3 | 0.16 | 0.17 |
| Q4 | 9.43 | 9.76 |
| Q5 | 1.63 | 1.71 |
| Q6 | 0.37 | 0.42 |
| Q7 | 0.28 | 0.31 |
| Q8 | 0.39 | 0.26 |
| Q9 | 1.89 | 1.90 |
| Q10 | 0.50 | 0.51 |
| Q11 | 5.79 | 6.02 |
| Q12 | 0.28 | 0.23 |
| Q13 | 2.04 | 0.57 |
| Q14 | 11.38 | 11.22 |
| Q15 | 0.23 | 0.23 |
| Q16 | 0.92 | 0.95 |
| Q17 | 0.52 | 0.46 |
| Q18 | 0.32 | 0.37 |
| Q19 | 0.17 | 0.28 |
| Q20 | 0.19 | 0.19 |
| Q21 | 0.29 | 0.31 |
| Q22 | 1.11 | 1.23 |
| Q23 | 14.20 | 14.55 |
| Q24 | 1.56 | 1.41 |
| Q25 | 0.62 | 0.49 |
| Q26 | 0.21 | 0.22 |
| Q27 | 0.35 | 0.42 |
| Q28 | 7.44 | 6.58 |
| Q29 | 0.64 | 0.53 |
| Q30 | 0.27 | 0.31 |
| Q31 | 0.62 | 0.67 |
| Q32 | 0.11 | 0.11 |
| Q33 | 0.77 | 0.55 |
| Q34 | 0.37 | 0.56 |
| Q35 | 0.87 | 0.90 |
| Q36 | 0.31 | 0.32 |
| Q37 | 0.42 | 0.46 |
| Q38 | 1.49 | 1.60 |
| Q39 | 2.31 | 0.85 |
| Q40 | 0.16 | 0.17 |
| Q41 | 0.18 | 0.08 |
| Q42 | 0.11 | 0.13 |
| Q43 | 0.64 | 0.65 |
| Q44 | 0.45 | 0.49 |
| Q45 | 0.39 | 0.32 |
| Q46 | 0.81 | 0.65 |
| Q47 | 1.16 | 1.21 |
| Q48 | 0.61 | 0.66 |
| Q49 | 0.86 | 0.92 |
| Q50 | 0.81 | 0.68 |
| Q51 | 2.82 | 2.83 |
| Q52 | 0.11 | 0.12 |
| Q53 | 0.24 | 0.25 |
| Q54 | 1.05 | 1.04 |
| Q55 | 0.11 | 0.11 |
| Q56 | 1.01 | 0.56 |
| Q57 | 0.79 | 0.80 |
| Q58 | 0.60 | 0.65 |
| Q59 | 13.64 | 3.11 |
| Q60 | 1.13 | 0.69 |
| Q61 | 0.30 | 0.33 |
| Q62 | 0.40 | 0.42 |
| Q63 | 0.23 | 0.24 |
| Q64 | 1.84 | 1.89 |
| Q65 | 2.70 | 2.77 |
| Q66 | 0.46 | 0.49 |
| Q67 | 16.12 | 9.59 |
| Q68 | 0.67 | 0.48 |
| Q69 | 0.44 | 0.46 |
| Q70 | 1.18 | 1.36 |
| Q71 | 0.54 | 0.54 |
| Q72 | 1.47 | 1.34 |
| Q73 | 0.31 | 0.41 |
| Q74 | 6.36 | 4.36 |
| Q75 | 2.26 | 2.55 |
| Q76 | 0.49 | 0.50 |
| Q77 | 0.71 | 0.74 |
| Q78 | 3.64 | 3.46 |
| Q79 | 0.65 | 0.70 |
| Q80 | 1.78 | 3.45 |
| Q81 | 0.30 | 0.26 |
| Q82 | 0.67 | 0.79 |
| Q83 | 0.69 | 0.76 |
| Q84 | 0.19 | 0.20 |
| Q85 | 0.84 | 0.92 |
| Q86 | 0.36 | 0.33 |
| Q87 | 1.59 | 1.70 |
| Q88 | 0.65 | 0.85 |
| Q89 | 0.30 | 0.32 |
| Q90 | 0.19 | 0.47 |
| Q91 | 0.13 | 0.16 |
| Q92 | 0.10 | 0.10 |
| Q93 | 0.75 | 0.76 |
| Q94 | 0.62 | 0.65 |
| Q95 | 6.37 | 6.36 |
| Q96 | 0.38 | 0.44 |
| Q97 | 0.93 | 0.94 |
| Q98 | 0.30 | 0.30 |
| Q99 | 0.70 | 0.80 |
| Total time | 160.83 | 138.75 |
Compatibility changes
Product behavioral changes
| Feature | Version | Description |
|---|---|---|
| Table group | 4.2 | - |
| Cap parameters | 4.2 | The default unit for Cap parameters is MB, but it is often wrongly assumed by users to be bytes, leading to settings that are far beyond the limit. To avoid unexpected results caused by updates to parameters without specifying the unit, the value unit must now be specified when updating Cap parameters. Updates without specifying the unit will fail. |
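For instance, under the new rule a Cap parameter update must carry an explicit unit (`log_disk_size` is used here only as an illustrative capacity parameter):

```sql
ALTER SYSTEM SET log_disk_size = '240G';  -- succeeds: unit specified
-- ALTER SYSTEM SET log_disk_size = 240;  -- fails in V4.2: no unit specified
```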
View changes
| View | Version | Change type | Description |
|---|---|---|---|
| CDB/DBA_OB_TABLEGROUPS | 4.2 | Modified | The partition attribute field will no longer have any meaning and will be displayed as NULL. |
| CDB/DBA_OB_TABLEGROUP_PARTITIONS | 4.2 | Deprecated | The view will be deprecated and will no longer display data. |
| CDB/DBA_OB_TABLEGROUP_SUBPARTITIONS | 4.2 | Deprecated | The view will be deprecated and will no longer display data. |
| CDB/DBA_OB_BALANCE_JOBS | 4.2 | New | Displays the load balancing jobs being executed under a tenant. Only one load balancing job (JOB) can run under a tenant at a time, and each job can generate multiple load balancing tasks (TASK). CDB displays data of all the tenants, but only the system tenant can access it. |
| CDB/DBA_OB_BALANCE_JOB_HISTORY | 4.2 | New | Displays the history of load balancing jobs in the tenant. CDB displays data of all the tenants, but only the system tenant can access it. |
| CDB/DBA_OB_BALANCE_TASKS | 4.2 | New | Displays the load balancing jobs being executed in the tenant. Multiple load balancing tasks can run at the same time, and these tasks (TASK) belong to the same load balancing job (JOB). CDB displays data of all the tenants, but only the system tenant can access it. |
| CDB/DBA_OB_BALANCE_TASK_HISTORY | 4.2 | New | Displays the history of load balancing tasks in the tenant. CDB displays data of all the tenants, but only the system tenant can access it. |
| CDB/DBA_OB_TRANSFER_TASKS | 4.2 | New | Displays the transfer tasks being executed in the tenant. CDB displays data of all the tenants, but only the system tenant can access it. |
| CDB/DBA_OB_TRANSFER_TASK_HISTORY | 4.2 | New | Displays the history of transfer tasks in the tenant. CDB displays data of all the tenants, but only the system tenant can access it. |
| CDB/DBA_OB_TABLE_LOCATIONS | 4.2 | Modified | Added columns DUPLICATE_SCOPE, OBJECT_ID, TABLEGROUP_NAME, TABLEGROUP_ID, and SHARDING. This allows you to better observe the load balancing status of the cluster or tenant. |
| DBA_OB_TENANTS | 4.2 | Modified | Added the UNIT_NUM and COMPATIBLE columns to display the number of units of the tenant and its compatibility version. |
| [G]V$OB_THREAD | 4.2 | New | Displays the status information of all threads, with each row representing a thread. |
| [G]V$OB_LOCKS | 4.2 | New | Implements the proprietary lock view [G]V$OB_LOCKS of OceanBase Database based on the V$LOCK view of Oracle. OceanBase Database adds locks of the TR type to the Oracle V$LOCK view to describe "row locks" before a minor compaction occurs. |
| [G]V$OB_ARBITRATION_SERVICE_STATUS | 4.2 | New | Displays the connection status between the OceanBase cluster and the arbitration service. |
| [G]V$OB_ARBITRATION_MEMBER_INFO | 4.2 | New | Displays information about the arbitration members of the cluster. |
| [G]V$OB_OPT_STAT_GATHER_MONITOR | 4.2 | New | Displays the real-time status of statistics collection tasks. |
| DBA_OB_TASK_OPT_STAT_GATHER_HISTORY | 4.2 | New | Displays the execution status of historical collection tasks. |
| DBA_OB_TABLE_OPT_STAT_GATHER_HISTORY | 4.2 | New | Displays the execution status of historical collection tasks for tables. |
| [G]V$OB_UNITS | 4.2 | Modified | Added the ZONE_TYPE and REGION columns. |
| CDB/DBA_OB_DATA_DICTIONARY_IN_LOG | 4.2 | Modified | Displays the range of the data dictionary in the system log streams for easy consumption by CDC. |
| [G]V$OB_LOCKS | 4.2 | New | Displays the current user's status for holding or requesting locks on various tables. |
| CDB/DBA_OB_LOG_RESTORE_SOURCE | 4.2 | New | Displays the log restore source of the physically restored tenant and standby tenant. |
| V$OB_LS_LOG_RESTORE_STATUS | 4.2 | New | Displays the log restore status at the log stream level. |
| V$OB_TIMESTAMP_SERVICE | 4.2 | New | Displays the GTS and STS values of the tenant's clock service, the information about the node where the clock service is located, and the clock source type. |
| [G]V$OB_PX_P2P_DATAHUB | 4.2 | New | Displays information about parallel data transmission operations. |
| [G]V$SQL_JOIN_FILTER | 4.2 | New | Displays execution information of join filters. |
| DBA_OB_TABLE_STAT_STALE_INFO | 4.2 | New | Displays the number of DDL operations performed on each table since the last statistics collection operation, and whether the current statistics have expired. |
| CDB/DBA/ALL_OB_EXTERNAL_TABLE_FILES | 4.2 | New | Displays the file list corresponding to all created external tables in the current tenant. |
| CDB/DBA_OB_ACCESS_POINT | 4.2 | New | Displays the list of servers that the tenants can access. |
| DBA_DB_LINKS | 4.2 | New | Displays all DBLinks created in the current tenant. This view is new for MySQL tenants. |
| [G]V$OB_SQL_PLAN | 4.2 | New | Displays the plan information collected by running the EXPLAIN PLAN command. |
| [G]V$OB_LOG_STAT | 4.2 | Modified | Added the ARBITRATION_MEMBER, DEGRADED_LIST, and LEARNER_LIST columns. |
System variable changes
| Variable | Version | Change type | Description |
|---|---|---|---|
| runtime_filter_type | 4.2 | New | A global and session-level system variable used to set the tenant-level Runtime Filter type, which includes BLOOM_FILTER, RANGE, and IN types. If the value is set to an empty string, the Runtime Filter feature is turned off. |
| runtime_filter_wait_time_ms | 4.2 | New | A global and session-level system variable used to set the maximum wait time for Runtime Filter. Default value: 10 ms. |
| runtime_filter_max_in_num | 4.2 | New | A global and session-level system variable used to set the maximum number of distinct values (NDV) in the Runtime IN Filter. Default value: 1024. |
| runtime_bloom_filter_max_size | 4.2 | New | A global and session-level system variable used to set the maximum memory usage for Runtime Bloom Filter, in bytes. Default value: 2 * 1024 * 1024 * 1024 (2 GB). |
| optimizer_dynamic_sampling | 4.2 | New | A global and session-level system variable that specifies the level of dynamic sampling. Valid values: 0 and 1. The value 0 indicates to disable dynamic sampling and 1 indicates to enable it. Default value: 1. |
| parallel_degree_policy | 4.2 | New | A global and session-level system variable that specifies the DOP selection strategy for SQL parallel execution. Valid values: MANUAL and AUTO, where AUTO enables auto DOP. Default value: MANUAL. |
| parallel_degree_limit | 4.2 | New | A global and session-level system variable used to limit the maximum DOP chosen by the optimizer when the auto DOP policy is used. Default value: 0, which specifies not to limit the DOP chosen by the optimizer. |
| parallel_min_scan_time_threshold | 4.2 | New | A global and session-level system variable used to increase the DOP when the estimated execution time for a base table scan exceeds the value of parallel_min_scan_time_threshold, thereby initiating more parallel queries. Default value: 1000 ms. |
| secure_file_priv | 4.2 | Modified | A global system variable that can now be modified only by SQL statements executed by clients connected to the database through a local Unix socket. This change was made for security reasons. |
Parameter changes
| Parameter | Version | Change type | Description |
|---|---|---|---|
| enable_rebalance | 4.2 | Modified | Adjusted to be a tenant-level parameter. Controls inter-tenant balancing for the system tenant and intra-tenant balancing for regular tenants. |
| balancer_idle_time | 4.2 | Modified | Adjusted to be a tenant-level parameter. Represents the interval between the disaster recovery thread and balancing thread for the system tenant, and the interval for the log stream balancing thread for regular tenants. |
| partition_balance_schedule_interval | 4.2 | New | A tenant-level parameter that sets the partition balance scheduling interval. |
| log_disk_throttling_percentage | 4.2 | New | A tenant-level parameter that represents the percentage of non-recoverable log disk space that triggers log write throttling. When the non-recoverable log disk space reaches log_disk_throttling_percentage/100, log write throttling is triggered. Log write throttling stops when the non-recoverable log disk space falls below log_disk_throttling_percentage/100. Setting this parameter to 100 disables log write throttling. Default value: 60. |
| log_disk_throttling_maximum_duration | 4.2 | New | A tenant-level parameter that represents the maximum expected available time for log disk space after log write throttling is triggered. The remaining available log disk space is expected to be consumed after log_disk_throttling_maximum_duration. A larger value extends the duration of log throttling and increases its intensity. Default value: 2 h. |
| _optimizer_ads_time_limit | 4.2 | New | A hidden tenant-level parameter that controls the maximum sampling time (in seconds) for optimizer dynamic sampling. Default value: 10. |
| datafile_maxsize | 4.2 | New | A cluster-level parameter that controls the upper limit for automatic disk data file expansion. Default value: 0. |
| datafile_next | 4.2 | New | A cluster-level parameter that controls the step size for automatic disk data file expansion. Default value: 0. |
| _datafile_usage_upper_bound_percentage | 4.2 | New | A cluster-level parameter that triggers automatic disk file expansion when the data file usage exceeds this value. Default value: 90. |
| standby_db_fetch_log_rpc_timeout | 4.2 | New | A tenant-level parameter that sets the RPC timeout for fetching logs in the standby database, so that the standby database can detect an unavailable server in the primary database and switch to another server. Default value: 15s. |
| location_refresh_thread_count | 4.2 | Modified | Changed the default value from 4 to 2. |
| enable_record_trace_id | 4.2 | Modified | Changed the default value from True to False. |
| plan_cache_evict_interval | 4.2 | Modified | Changed the default value from 1s to 5s. |
| devname | 4.2 | Modified | Changed the effective mode of this parameter from dynamically effective to read-only. |
| local_ip | 4.2 | New | A cluster-level parameter that represents the IP address of the installed OBServer node. This parameter is read-only. |
| observer_id | 4.2 | New | A cluster-level parameter that is used by the RootService to assign a unique identifier to OBServer nodes in the cluster. This parameter is read-only. |
| standby_fetch_log_bandwidth_limit | 4.2 | New | A cluster-level parameter that is used to set the maximum bandwidth occupied by all servers in the standby database when synchronizing logs from the primary database. The default value is 0, which specifies not to limit the bandwidth. |
| ls_gc_delay_time | 4.2 | New | A tenant-level parameter that specifies the delay time for deleting log streams in the primary tenant in a physical standby scenario. |
| archive_lag_target | 4.2 | New | A tenant-level parameter that specifies the delay time for log archiving in the tenant. |
| standby_db_preferred_upstream_log_region | 4.2 | New | A tenant-level parameter that specifies the preferred region for synchronizing upstream logs in a physical standby scenario. |
| storage_meta_cache_priority | 4.2 | New | A cluster-level parameter that specifies the priority of storing Meta Cache in kvcache. |
| range_optimizer_max_mem_size | 4.2 | New | A tenant-level parameter that specifies the maximum memory usage of the Query Range module. |
| rootservice_memory_limit | 4.2 | Deprecated | A cluster-level parameter that specifies the maximum memory capacity for Root Service. |
| trace_log_sampling_interval | 4.2 | Deprecated | A cluster-level parameter that specifies the interval for periodically printing trace log information. |
| tenant_cpu_variation_per_server | 4.2 | Deprecated | A cluster-level parameter that specifies the maximum allowed deviation in CPU quota between multiple units of a tenant. |
| token_reserved_percentage | 4.2 | Deprecated | A cluster-level parameter that specifies a percentage of idle tokens reserved for the tenant. |
| global_write_halt_residual_memory | 4.2 | Deprecated | A cluster-level parameter that specifies the threshold of remaining global memory that triggers the pause of write operations for regular tenants. The sys tenant is not affected. |
| plan_cache_high_watermark | 4.2 | Deprecated | A cluster-level parameter that specifies the threshold of memory usage for the execution plan cache. When the memory usage exceeds the threshold, automatic eviction is triggered. |
| plan_cache_low_watermark | 4.2 | Deprecated | A cluster-level parameter that specifies the threshold of memory usage for the execution plan cache. When the memory usage is less than the threshold, eviction is stopped. |
| max_px_worker_count | 4.2 | Deprecated | A cluster-level parameter that specifies the maximum number of threads used by the SQL parallel query engine. |
| io_category_config | 4.2 | Deprecated | A tenant-level parameter that specifies the percentage of I/O requests for each category. |
Function/PL package changes
| Function/PL package | Version | Change type | Description |
|---|---|---|---|
| `DBMS_SQL.LAST_ERROR_POSITION` | 4.2 | New | Returns the position where the syntax error occurs during the last call to DBMS_SQL.PARSE. This package applies to the Oracle mode. |
| `DBMS_SESSION.RESET_PACKAGE` | 4.2 | New | Clears the status of all package variables in the current session. This package applies to the Oracle mode. |
| `RANDOM([N])` | 4.2 | New | Generates a random 64-bit integer. This function applies to both modes. |
| `RANDSTR(N, gen)` | 4.2 | New | Generates a random string of N characters in length. gen indicates the random method. This function applies to both modes. |
| `NORMAL(<mean>, <stddev>, <gen>)` | 4.2 | New | Returns a floating-point number that follows a normal distribution (also known as Gaussian distribution). This function applies to both modes. |
| `UNIFORM(<min>, <max>, <gen>)` | 4.2 | New | Returns an integer or floating-point number that follows a uniform distribution. This function applies to both modes. |
| `ZIPF(<s>, <N>, <gen>)` | 4.2 | New | Returns an integer that follows a Zipf distribution. This function applies to both modes. |
Recommended versions of tools
The recommended platform tool versions for OceanBase Database V4.2.0 Beta are described in the following table:
| Tool | Version | Remarks |
|---|---|---|
| ODP | ODP V4.2.1 | - |
| OCP | OCP V4.1 | OCP V4.1 is recommended for now. OCP V4.2.0 is expected to be released in mid-August 2023. |
| ODC | ODC V4.1.3 BP3 | ODC V4.1.3 BP3 is recommended for now. ODC V4.2.1 is expected to be released at the end of August 2023. |
| OBCDC | OBCDC V4.2.0 | - |
| OMS | OMS V4.1.0 | OMS V4.1.0 is recommended for now. OMS V4.2.0 is expected to be released in mid-August 2023. |
| OceanBase C++ Call Interface (OCCI) | OCCI V1.0.3 | - |
| OceanBase Call Interface (OBCI) | OBCI V2.0.7 | - |
| OceanBase Embedded SQL in C (ECOB) | ECOB V1.1.8 | - |
| OBClient | OBClient V2.2.3 | - |
| LibOBClient | LibOBClient V2.2.3 | - |
| OBJDBC | JDBC V2.4.4 | - |
| ODBC | ODBC V2.0.8 | - |
Upgrade notes
- Online upgrade from OceanBase Database V4.1.0 to OceanBase Database V4.2.0 Beta is supported.
- Online upgrade from OceanBase Database V3.x to OceanBase Database V4.2.0 is not supported.
- OceanBase Database V3.x data can be migrated and upgraded to OceanBase Database V4.2.0 by using the logical migration feature of OMS.
Considerations
- Enabling RPC SSL is not recommended in this version. Related issues will be optimized in future versions.
- Upgrading from GB18030 to GB18030-2022 character set requires a logical upgrade.