V4.1.0
Version information
- Release date: March 31, 2023
- Version: V4.1.0
Overview
OceanBase Database Enterprise Edition V4.0 was released as a beta version with a comprehensively upgraded distributed database architecture. OceanBase Database Enterprise Edition V4.1 is oriented to public cloud and open source scenarios. In addition to its rich variety of features, this version further improves compatibility with MySQL 8.0, as well as performance and ease of use at reduced costs. This version is also designed to support large-scale replication and is therefore ready for full commercial use. This version provides the following core capabilities:
Enhanced features
This version supports more online DDL operations, tenant-level input/output operations per second (IOPS) isolation, the Latin1 character set, table locks, and deadlock detection.
Enhanced compatibility
- MySQL mode: Compatibility with the functions, variables, and SQL modes in MySQL 8.0 is improved. DDL operations are supported for FOREIGN KEY constraints. The column information of a view can be displayed.
- Oracle mode: This version supports JSON data types, writes to Oracle Database by using DBLinks, and Chinese character sets and collations.
Improved performance
Transactional processing (TP) performance
- Compared with OceanBase Database Enterprise Edition V4.0, this version improves the comprehensive read-write performance by about 40% in running the Sysbench benchmark with 1024 threads.
- Compared with OceanBase Database Enterprise Edition V4.0, this version improves the performance by about 8% in terms of tpmC (NewOrders) in running the TPC-C benchmark with 1000 warehouses.
Analytical processing (AP) performance
- The TPC-H query performance is improved. Compared with OceanBase Database V4.0, this version improves the overall performance by about 17% in sequentially executing 22 SQL statements involving 100 GB of data.
- The TPC-DS query performance is improved. This version takes about 161 seconds to sequentially execute 99 SQL statements involving 100 GB of data.
Data import performance
When you import a 74-GB lineitem table in bypass mode to an OBServer node with the c6a.12xlarge specification, which provides 48 vCPUs and 96 GB of memory, the import speed can reach 165 MB/s.
Transaction performance
The routing of distributed transactions is optimized based on OceanBase Database Proxy (ODP), improving the performance of distributed transactions.
Other performance improvements
This version also improves the performance of the NESTED LOOP JOIN operator, the execution of Index Skip Scan, the parallel execution of the TRUNCATE TABLE statement, and the compilation feedback.
Enhanced stability
The large transaction processing capability is improved. The dependency on the Network Time Protocol (NTP) service is eliminated. Upper limits on the number of partitions and the LOB storage specification are supported.
Enhanced high availability
Physical Standby Database based on archive logs is supported, achieving a recovery time objective (RTO) of less than 8s in more scenarios. Arbitration replicas and tenant-level backup and restore are supported.
Improved O&M
- The monitoring capability is improved. End-to-end tracing and session status monitoring and diagnostics are supported.
- Storage persistence is optimized. Adaptive compactions are supported and space optimization is implemented for small tables.
- Views are standardized. A total of 73 data dictionary views and dynamic performance views are improved.
- Online upgrades of OceanBase Database V4.0 series are supported.
- Statistics are collected in real time.
Product form
A standalone edition is released.
Updated features
Enhanced features
Enhanced online DDL operations
Earlier versions of the OceanBase Database architecture support only limited types of online DDL operations. For example, primary key modifications are not supported, which causes inconvenience in database use. More online DDL operations related to data migration are supported since OceanBase Database V4.0. The following online DDL operations are supported:
MySQL mode
- Modify the name and type of a column, including the conversion between the character, numeric, date, and other data types.
- Add, modify, and delete a primary key.
- Modify the collation of a table or column online.
- Add and delete a foreign key.
- Convert a regular table to a partitioned table.
Oracle mode
- Modify the name and type of a column, including the conversion between the character, numeric, date, and other data types.
- Add, modify, and delete a primary key.
- Convert a regular table to a partitioned table.
In addition, the transaction commit logs of DDL operations take effect immediately upon commit. This ensures the stability and reliability of commit log replay. The GV$SESSION_LONGOPS view displays the execution progress of DDL operations in real time to help users observe and track the internal running status of time-consuming DDL operations.
Optimized tenant-level IOPS isolation
OceanBase Database supports tenant-level IOPS isolation since V4.0. Resources are isolated based on resource units between tenants and based on I/O categories within each tenant. This ensures that each task in a tenant uses only the allocated resources and does not contend for resources of other tasks. However, the I/O categories are fixed and cannot be customized based on tasks, and resources cannot be further isolated for different users in the same tenant. Therefore, I/O resources are incorporated, together with CPU resources, into the Resource Manager framework for unified resource isolation configuration.
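As a sketch, such an IOPS isolation plan might be defined with the DBMS_RESOURCE_MANAGER package described in this section. The plan, consumer group, and parameter values here are purely illustrative, and the exact package signatures should be checked against the resource isolation documentation:

```sql
-- Hypothetical example: cap one consumer group's IOPS within a tenant.
CALL DBMS_RESOURCE_MANAGER.CREATE_PLAN('iops_plan', 'Tenant IOPS isolation plan');
CALL DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('batch_group', 'Batch jobs');
CALL DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    PLAN             => 'iops_plan',
    GROUP_OR_SUBPLAN => 'batch_group',
    MIN_IOPS         => 10,   -- guaranteed share of the tenant's IOPS
    MAX_IOPS         => 50,   -- upper limit of the tenant's IOPS
    WEIGHT_IOPS      => 30);  -- weight when competing for spare IOPS
```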
OceanBase Database V4.1 allows you to use the DBMS_RESOURCE_MANAGER package to customize resource isolation strategies for I/O resources. Specifically, you can call CREATE_PLAN_DIRECTIVE to set the MIN_IOPS, MAX_IOPS, and WEIGHT_IOPS parameters to define an isolation plan. You can define IOPS isolation plans for both Oracle and MySQL tenants. For more information, see Resource isolation overview.
Support for the Latin1 character set
As the number of community edition users and OceanBase Database users outside the Chinese mainland increases, an increasing number of projects require the Latin1 character set. With the Latin1 character set, complex character set conversion can be avoided for migration projects involving OceanBase Database. The Latin1 character set is supported since OceanBase Database V4.1. The latin1_bin and latin1_swedish_ci collations are supported in MySQL mode, and the latin1_bin collation is supported in Oracle mode.
Table locks and deadlock detection
A table lock allows you to lock a table or partition in a specified manner to avoid data corruption caused by concurrent operations on the table or partition. OceanBase Database V4.1 supports more online DDL operations. Therefore, the table lock feature is provided to ensure data accuracy in concurrent DDL and DML operations. The LOCK TABLE syntax is supported in SHARE and EXCLUSIVE lock modes. Deadlocks due to lock contention can be detected.
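For example, a table can be locked in the two supported modes as follows (assuming an existing table named orders; the statements are a sketch of the LOCK TABLE syntax mentioned above):

```sql
-- Allow concurrent reads but block concurrent writes by other transactions.
LOCK TABLE orders IN SHARE MODE;

-- Block concurrent DML from other transactions entirely.
LOCK TABLE orders IN EXCLUSIVE MODE;
```

If two transactions acquire such locks in conflicting order, the deadlock detector can abort one of them instead of letting both wait forever.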
Enhanced compatibility
Compatibility with MySQL
Compatibility with MySQL 8.0 in system functions, variables, and SQL modes is improved.
Most features of MySQL 5.7 are supported in OceanBase Database V4.0 and earlier versions. As OceanBase Database gains more experienced MySQL users in ApsaraDB for OceanBase and OceanBase Database Community Edition, the business system becomes more dependent on the features of MySQL 8.0. MySQL 8.0 has been released for over six years and its features are mature. Therefore, OceanBase Database systematically supports the features of MySQL 8.0 since V4.1 in terms of:
System functions
- Character operation functions: BIN_TO_UUID(), CHARACTER_LENGTH(), LOAD_FILE(), IS_UUID(), UUID_TO_BIN(), and OCTET_LENGTH()
- Datetime functions: ADDTIME() and DAYNAME()
- Encryption and decryption functions: DECODE(), DES_DECRYPT(), DES_ENCRYPT(), ENCODE(), and ENCRYPT()
- Performance schema functions: FORMAT_BYTES() and FORMAT_PICO_TIME()
- Information functions: CURRENT_ROLE() and ICU_VERSION()
- Window functions: BIT_AND(), BIT_OR(), and BIT_XOR()
- Other functions: NAME_CONST()
SQL modes
- Independent SQL modes: REAL_AS_FLOAT and TIME_TRUNCATE_FRACTIONAL
- Combination SQL modes: ANSI, DB2, MAXDB, MSSQL, ORACLE, POSTGRESQL, and TRADITIONAL
Information schemas
- View compatibility: View fields in OceanBase Database are the same as those in MySQL 8.0, but the field data types, widths, or constraints are adjusted for compatibility. For example, OceanBase Database provides the following views: VIEW_TABLE_USAGE, VIEWS, TABLES, STATISTICS, ENGINES, TABLE_CONSTRAINTS, REFERENTIAL_CONSTRAINTS, CHECK_CONSTRAINTS, PARAMETERS, ROUTINES, and PARTITIONS.
- Performance improvement: The overall query performance of system views such as STATISTICS is improved based on local table indexes.
Float(m, d)
- OceanBase Database V4.1 is fully compatible with the data types and precisions in MySQL 8.0.
DDL operations are supported for FOREIGN KEY constraints.
By default, OceanBase Database checks foreign keys. To disable or enable foreign key checks, modify the FOREIGN_KEY_CHECKS tenant variable. A FOREIGN KEY constraint requires that the constrained column contain only values from the primary key column of another table. In earlier versions of OceanBase Database, the FOREIGN_KEY_CHECKS tenant variable takes effect only on DML operations. In OceanBase Database V4.1, the FOREIGN_KEY_CHECKS tenant variable also takes effect on DDL operations, in the same way as in a MySQL database.
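A brief sketch of the V4.1 behavior, with a hypothetical parent_table that is referenced by a foreign key elsewhere:

```sql
-- With checks disabled, DDL such as dropping a referenced table
-- is no longer rejected, matching MySQL behavior.
SET FOREIGN_KEY_CHECKS = 0;
DROP TABLE parent_table;   -- would fail while checks are enabled
SET FOREIGN_KEY_CHECKS = 1;
```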
Column information of views
In a MySQL database, the column information of tables and views is stored as metadata in a data dictionary. You can query the INFORMATION_SCHEMA.COLUMNS view for the column information. OceanBase Database persistently stores only the column information of tables. OceanBase Database V4.1 avoids parsing complex view dependencies by dynamically parsing view definitions, and displays the column information of views in the INFORMATION_SCHEMA.COLUMNS view. In Oracle mode, you can query the DBA_TAB_COLS view for the column information of views.
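For example, in MySQL mode (my_view is a placeholder name for any view you have defined):

```sql
-- Column metadata of a view, resolved dynamically in V4.1.
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'my_view';
```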
Compatibility with Oracle
Support for JSON data types
OceanBase Database V3.2 supports JSON data types in MySQL mode. In actual business scenarios of customers, JSON data types also need to be supported in Oracle mode. The use of JSON data types and regular functions is beneficial for upper-layer business logic. Therefore, OceanBase Database supports JSON data types in Oracle mode since V4.1 to improve its multi-mode data capabilities. The following table describes the capabilities of this version.
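As an Oracle-mode sketch, with illustrative table and column names, these capabilities might be exercised as follows:

```sql
-- A column of the native JSON type.
CREATE TABLE orders (
  id  NUMBER PRIMARY KEY,
  doc JSON
);

INSERT INTO orders VALUES (1, '{"customer": "alice", "total": 42}');

-- Extract a scalar value with JSON_VALUE.
SELECT JSON_VALUE(doc, '$.customer') AS customer
FROM orders
WHERE id = 1;
```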
| Category | Description |
|---|---|
| Data scope | This version supports the JSON scope defined in the ECMA-404 and IETF RFC 8259 (Oracle 21C) standards in strict mode, and most JSON capabilities of Oracle in non-strict mode. |
| Data types | This version allows you to save data of a JSON type as JSON text, and to save text with an IS JSON constraint as JSON text. |
| DDL operations | This version supports the CREATE TABLE, ADD COLUMN, and DROP COLUMN DDL operations. |
| Query and DML operations | This version supports access by using the dot notation or the json_value/json_query expressions. It also supports the use of JSON text in SELECT, INSERT, UPDATE, DELETE, and REPLACE statements. You can partially modify the JSON text in an UPDATE statement. |
| Functions | This version supports most functions of Oracle 19C, including JSON_VALUE, JSON_QUERY, JSON_OBJECT, JSON_EQUAL, IS JSON, JSON_ARRAY, JSON_ARRAYAGG, JSON_OBJECTAGG, JSON_MERGEPATCH, and JSON_EXISTS. |
| Type conversion | This version supports conversion between strings or other scalar values and JSON values. The TREAT statement is required for explicit conversion. |

Support for writes to an Oracle database by using a DBLink
OceanBase Database V3.2 supports read-only access to Oracle databases through a DBLink. In real-world database replacement scenarios, businesses often need to avoid extensive changes to their existing logic. To address this, OceanBase Database V4.1 supports writes from OceanBase Database to Oracle databases over a DBLink, based on Oracle Call Interface (OCI) drivers and XA transaction capabilities. This is a significant advantage for businesses because it eliminates the need for major changes to existing logic during database replacement. The supported DML statements include INSERT, DELETE, UPDATE, and MERGE INTO. Note that writes over a DBLink are currently not supported in PL.
Support for Chinese character sets and collations
This version supports Chinese character sets and collations and the NLSSORT() sorting function. You can sort data by Pinyin, strokes, or radicals of Chinese characters.
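For instance, in Oracle mode, sorting by Pinyin might look like this (employees is a hypothetical table; the NLS_SORT value follows the Oracle naming convention, with stroke- and radical-based variants also available):

```sql
-- Order Chinese names by Pinyin rather than by binary code point.
SELECT name
FROM employees
ORDER BY NLSSORT(name, 'NLS_SORT = SCHINESE_PINYIN_M');
```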
Improved performance
The data import performance is improved.
The data import performance is an important metric in the online analytical processing (OLAP) field. Data analysis and big data applications are often characterized by a large data volume. The capability of a database to quickly load business data in the TXT or CSV format to provide external query and analysis services is a key competitive strength that affects the choice of customers in the AP field. In the general data import procedure of a database system, the SQL layer parses the data received from the client, performs error tolerance processing, and routes and forwards the data. Then, the data is encapsulated in transactions and persisted for storage. This procedure has the following bottlenecks: data is processed row by row and is filtered and parsed in the SQL layer, and a large volume of commit logs (clogs) is generated. Therefore, OLAP databases are specifically optimized. The most common practice is to use the bypass import technology to avoid the consumption and cost of SQL processing so that data is directly persisted to the disk. OceanBase Database V4.1 uses the bypass import technology to streamline the data loading path by skipping modules such as the SQL layer, transaction module, and MemTables, and to directly persist data to SSTables, thereby significantly accelerating data import. To use the bypass import capability, you must add a hint in the LOAD DATA statement, for example:
load data /*+ direct(true, 1024) parallel(64) */ infile '/data/1/hits.tsv' into table hits fields terminated by '\t';
You can query the V$SESSION_LONGOPS view for the import progress. Primary key conflict and error tolerance processing is supported in different modes. At present, only CSV data files are supported. When you import a 74-GB lineitem table in bypass mode to an OBServer node with the c6a.12xlarge specification, which provides 48 vCPUs and 96 GB of memory, the import speed can reach 165 MB/s. For more information, see Overview of bypass import.
The routing of distributed transactions is optimized based on OceanBase Database Proxy (ODP), improving the performance of distributed transactions.
During business data modeling, transactions are generally processed on the same data node to avoid compromising the performance due to cross-node access of distributed transactions. In earlier versions of OceanBase Database, requests in the same transaction can be processed only on the same OBServer node. Therefore, ODP cannot route SQL queries based on the locations of the data to access. This increases the latency in SQL processing within distributed transactions, thereby compromising the performance. During performance testing, the execution sequence of SQL statements must be adjusted to improve the performance, and much time and energy are consumed to reduce the negative impact of distributed transaction routing on the performance. OceanBase Database V4.1 optimizes routing for distributed transactions. ODP synchronizes the running status of transactions to OBServer nodes, so that not all the running status information needs to be sent to the coordinator and a route can be calculated for each SQL statement. This ensures that SQL requests can be executed on any OBServer node in the cluster, thereby avoiding unnecessary processing latency. You can specify the enable_distributed_route parameter of ODP to enable routing optimization for distributed transactions, which is enabled by default. At present, routing optimization is not supported for eXtended Architecture (XA) transactions.
Compilation feedback is optimized.
By using the compilation feedback optimization technology, the overall performance is improved by about 10% to 15% in running the Sysbench benchmark.
Other performance improvements:
- The performance of the NESTED LOOP JOIN operator is improved. The performance of the multistage NESTED LOOP JOIN operator is improved by about 10 times in specific Group Rescan scenarios.
- The execution of Index Skip Scan is optimized. When the NDV operator is used to calculate more than 0.5 million rows of data, the performance can be improved by about 10 times.
- The parallel execution of the TRUNCATE TABLE statement is optimized, improving the statement execution efficiency by about 10 times.
Enhanced stability
The large transaction processing capability is improved.
Based on the new adaptive log stream architecture, the transaction engine is redesigned to solve multiple issues related to large transactions of most distributed databases, such as large transaction logs, a large number of participants, and slow transaction commits. The new transaction engine stably handles various tasks such as online transactions, batch processing, and data correction. It ensures database reliability in complex business scenarios.
The dependency on the NTP service is eliminated.
The new automatic leader election protocol no longer depends on the NTP clock. In earlier versions, the clock offset among all OBServer nodes has to be controlled within 100 ms. OceanBase Database V4.1 allows a clock offset of up to 2s and supports dynamic clock modification without affecting data accuracy or cluster stability.
The limit on the number of partitions is extended.
The new adaptive log stream architecture allows all partitions in a unit to share a resource group. A partition no longer needs to separately apply for resource reservation as in earlier versions, hence the higher utilization of system resources. You no longer need to specify the maximum number of partitions on an OBServer node based on the configurations. However, the maximum number of partitions is still limited by the available physical resources of a server.
The limit on the DBMS_LOB storage specification is extended.
In earlier versions of OceanBase Database, the size of a stored LOB is limited to 48 MB. This poses a forcible restriction on the use of LOBs. In this upgraded architecture, an LOB macroblock is split into multiple LOB metadata records for storage on the storage layer. When the data is retrieved, these metadata records are aggregated into a continuous buffer, which is then returned to the SQL layer for processing. This breaks the limit on the data storage size and extends the LOB size to 512 MB by using regular SQL statements. In Oracle mode, the LOB size can be extended to a TB level by using the DBMS_LOB package.
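A brief Oracle-mode sketch (articles and content are hypothetical table and column names) of handling a large LOB through the DBMS_LOB package rather than a plain SQL expression:

```sql
DECLARE
  v_doc CLOB;
BEGIN
  -- Fetch a LOB locator for a value that may exceed regular SQL limits.
  SELECT content INTO v_doc FROM articles WHERE id = 1;
  -- DBMS_LOB operates on the locator, so TB-scale LOBs stay accessible.
  DBMS_OUTPUT.PUT_LINE('length = ' || DBMS_LOB.GETLENGTH(v_doc));
END;
/
```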
Enhanced high availability
Physical Standby Database based on archive logs is supported.
Continuously improving database high availability is one of the objectives in the design of OceanBase Database. Based on the distributed architecture and multi-replica data management mechanism, OceanBase Database can meet the high availability requirements of business systems in most scenarios. However, for industries such as banking and insurance that have extremely high requirements on database availability, when a single cluster is deployed in three IDCs across two regions for geo-disaster recovery, services may still become unavailable due to poor network communication across regions. Therefore, OceanBase Database provides the Physical Standby Database solution to resolve the issues in cross-region geo-disaster recovery. In earlier versions of OceanBase Database, Physical Standby Database uses a cluster-level log transmission strategy. In this strategy, the primary and standby clusters must detect the log synchronization status of each other, and log coupling exists between tenants. In OceanBase Database V4.1, cluster-level Physical Standby Database is replaced with tenant-level Physical Standby Databases. The primary and standby tenants are fully decoupled and unaware of each other. Archive logs are asynchronously synchronized by using a third-party network file system (NFS) or object storage service (OSS). The primary and standby tenants support resumable transmission of logs, switchover, and failover. In the tenant-level Physical Standby Database solution, an OceanBase cluster can contain both primary and standby tenants to help business applications achieve more balanced resource utilization. For more information, see Overview of Physical Standby Database.
The RTO is reduced to less than 8s in more scenarios.
Compared with OceanBase Database V4.0, OceanBase Database V4.1 is architecturally upgraded and optimized in terms of fault detection for the distributed architecture and partition election capabilities, aiming to achieve an RTO of less than 8s in most scenarios. Specifically:
- When the primary OBServer node crashes due to a fault, the standby OBServer node works with ODP to recover services.
- When the log disk or data disk of the primary OBServer node becomes faulty, the primary node returns an error but does not go offline. The replicas on this node are migrated to a functioning node and a new leader is elected to provide services. In this case, the number of available replicas decreases and the faulty node needs to be manually recovered.
The arbitration replica is supported.
In high-availability deployment mode, we recommend that you deploy five replicas in three IDCs across two regions for a private cloud, and deploy two full-featured replicas and one read-only replica for a public cloud. This ensures continuous availability for businesses, but also involves high network bandwidth overheads and deployment costs. OceanBase Database V4.1 supports the arbitration service to solve the issue that the response time (RT) increases when a replica in the same region fails in the deployment scenario of three IDCs across two regions. The arbitration service can also effectively reduce cross-region bandwidth overheads. Before you use the arbitration service, you must configure the arbitration OBServer nodes to start in arbitration mode when you deploy the cluster. Then, run the add arbitration command to add the arbitration service. At present, you cannot change the type of the arbitration replica or deploy a backup node for the arbitration node.
Tenant-level backup and restore is supported.
Multitenancy is a core capability of OceanBase Database. In most customer systems, multiple tenants are created in each cluster, and each tenant represents a business unit. The backup frequency and strategy vary with the business type and importance. OceanBase Database V4.1 supports tenant-level backup and restore. You can restore the backup data to a new tenant. In addition, the backup snapshot retention strategies are optimized to reduce the impact of the backup process on disk space. Data backups and log backups are stored in different directories, which allows you to separately store them on media that differ in performance.
Enhanced O&M capabilities
Online upgrades of OceanBase Database V4.0 series are supported.
OceanBase Database V4.0 is an architecturally upgraded version, in which data storage, internal tables, views, parameters, and some behaviors are incompatible with those in earlier versions. Therefore, OceanBase Database cannot be directly upgraded from V3.x to V4.0. We recommend that you use OceanBase Migration Service (OMS) to migrate the data. System data compatibility and physical online upgrades are key capability objectives of OceanBase Database since V4.1. These capabilities also meet customer requirements for upgrades, reduce the upgrade complexity and costs, and facilitate the application and promotion of OceanBase Database V4.1.
OceanBase Database V4.1 adopts the rotating upgrade strategy.
After an upgrade is initiated, the cluster enters the hybrid deployment state. The system replaces the binary programs of OBServer nodes zone by zone, and exits the hybrid deployment state after the binary programs of all nodes are replaced. At this point, binary programs of the later database version host the data and behaviors of the earlier version in compatibility mode.
After you run an upgrade command in a tenant, the system verifies and corrects the variables, system tables, and data. You must run the same upgrade command for all the tenants in the cluster one by one to finish the overall upgrade of the cluster. We recommend that you use OceanBase Cloud Platform (OCP) to upgrade your cluster online.
OceanBase Database V4.1 supports the following upgrade scenarios:
- Online cluster upgrades
- Separate upgrades of the primary and standby clusters. We recommend that you first upgrade the standby cluster and then the primary cluster, and that you use the switchover command to adjust the status of the tenants in the cluster before the upgrade to ensure that they are in a consistent state.
- Continuous upgrades since OceanBase Database V4.0, including upgrades of functional versions and patch versions
- Upgraded OBServer nodes do not support rollbacks. Therefore, we recommend that you back up the data and binary programs before the upgrade.
- Tenant-level upgrades. We recommend that you upgrade all tenants in the cluster to the same version.
Do not perform the following operations during the upgrade; they are allowed again after the upgrade is completed:
- DDL operations
- Major freezes
- Migration, replication, and load balancing
- Physical backup and restore
- Switchover and failover
- Tenant creation
Monitoring is optimized to support end-to-end tracing.
OceanBase Database has been put into business operation for a long time, and its internal data access links are very complex. When a timeout error occurs, it is often impossible to quickly locate the root cause. Therefore, an experienced O&M engineer is required to troubleshoot each process, which is hardly responsive and affects the O&M efficiency. OceanBase Database V4.0 provides the end-to-end tracing mechanism to help you locate errors in all processes of the whole link, from the business application, to the client driver such as the Java Database Connectivity (JDBC) client or OCI driver, to ODP, and to the OBServer node. You can specify the MODULE, ACTION, CLIENT_INFO, and CLIENT_IDENTIFIER parameters for applications by using PL/SQL statements or OBClient to identify the link in use. An O&M engineer can use a PL/SQL package to check related views and determine whether to enable end-to-end tracing and set diagnosis information output strategies. Diagnosis logs are generated and stored in ODP and OBServer log files in the OpenTracing data model. The engineer can parse the diagnosis logs to obtain relevant diagnosis information, such as the execution time of each SQL statement and transaction in the whole link. OceanBase Database V4.1 supports the SHOW TRACE statement. You can specify the ob_enable_show_trace variable to perform interactive analysis and show the execution path of the previous SQL statement on the OBServer node.
The session status can be monitored and diagnosed based on the Active Sessions History (ASH) report.
OceanBase Database of an earlier version allows you to obtain the status information of an SQL statement that is being executed, such as the wait events. However, you can obtain only the last wait event in a session by querying the V$ACTIVE_SESSION_HISTORY view. OceanBase Database V4.0 supports a comprehensive ASH report that describes the relationship between sessions and wait events. The report contains not only the status of the SQL statements being executed, but also the status history of multiple metrics, such as SESSION, USER, SQL, and WaitEvent. The ASH report collects the status of all active sessions in the system at an interval of 1s. The collection process does not involve row locking or affect the execution of SQL statements. OCP analyzes the collected status information to help you learn about the system load and wait events in the past period of time, and identify exceptions in time.
Views are standardized.
Views and internal tables are redesigned. A total of 73 standard views are provided. We recommend that you query views for information such as the metadata of OceanBase Database.
Statistics are collected in real time.
The accuracy of statistics is the basis for the optimizer to generate a proper execution plan. Earlier versions of OceanBase Database already support statistics collection in the following ways:
- Use the DBMS_STATS package or the ANALYZE statement to manually trigger statistics collection.
- Configure scheduled tasks to automatically collect statistics.
- Automatically collect statistics when the data amount exceeds the specified threshold.

OceanBase Database V4.1 supports online statistics collection. You can enable this feature by using the _optimizer_gather_stats_on_load system variable or hints. If the business system uses the CTAS or INSERT statement, OceanBase Database updates incremental statistics in real time. This avoids the system overheads caused by full table scans performed in regular statistics collection.
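A sketch of enabling the feature; whether the hidden variable can be set at session scope exactly like this is an assumption to verify against the statistics documentation (t_new and t_src are illustrative names):

```sql
-- Assumed switch for online statistics collection during bulk loads.
SET _optimizer_gather_stats_on_load = TRUE;

-- Statistics for t_new are then maintained while the CTAS runs,
-- instead of requiring a later full-table scan.
CREATE TABLE t_new AS SELECT * FROM t_src;
```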
Storage persistence is optimized.
- Adaptive compaction: A compaction consumes a large number of I/O operations and CPU resources. Therefore, different compaction strategies are needed in different business scenarios. In the log-structured merge (LSM) storage system, the core objective of compactions is to achieve an optimal balance among write amplification, read amplification, and space amplification. In the adaptive compaction strategy, the system collects statistics about and analyzes the historical minor compactions and query operations performed on a table to extract the characteristics of the table, determines whether to schedule a major compaction for this table based on these characteristics, and completes a major compaction that is imperceptible to users.
- Table space optimization: If a small table is split into many partitions, storage amplification appears in macroblock storage. This is prone to result in insufficient disk space when a large number of partitions exist. Multiple small SSTables are stored in a physical macroblock for compaction and storage. This achieves a utilization of about 4% and a fragment ratio of about 1.5% for the macroblock when a small table is split into many partitions, significantly improving the disk utilization.
Product form
A standalone edition is released.
Based on the integrated architecture of OceanBase Database V4.0 that supports both standalone and distributed deployment, we released a standalone edition for OceanBase Database V4.1. The standalone edition can be deployed by using OCP or OceanBase Deployer (OBD), enabling independent software vendors (ISVs), small-sized customers, and community users to quickly get started with the new features of OceanBase Database V4.1. When the standalone edition can no longer meet your project requirements, business requirements, or high availability requirements, you can upgrade the standalone edition to a distributed edition online with ease.
Performance report
The following tables show the performance test data of OceanBase Database Community Edition V4.1 compared with V4.0.
Test environment specifications:
| CPU platform architecture | x86_64 |
|---|---|
| ECS type | ecs.g7.8xlarge |
| Computing resource | 32 cores |
| Memory | 128 GB |
| Disk resource | 500 GB ESSD |
| Operating system | CentOS Linux release 7.9.2009 (Core) |
Tested version:
| Product | Version information |
|---|---|
| OBServer (V4.1) | OBServer (OceanBase_CE 4.1.0.0) REVISION: 100000172023031416-6d4641c7bcd0462f0bf3faed011803c087ea1705 BUILD_TIME: Mar 14 2023 16:53:58 |
| ODP (V4.1) | ODP (OceanBase 4.1.0.0 1) REVISION: 5617-local-e6798c479feaab9f9a60b89f87e4df5e284250b6 BUILD_TIME: Mar 11 2023 21:42:11 |
| OBServer (V4.0) | OBServer (OceanBase_CE 4.0.0.0) REVISION: 103000022023011215-05bbad0279302d7274e1b5ab79323a2c915c1981 BUILD_TIME: Jan 12 2023 15:28:27 |
| ODP (V4.0) | ODP (OceanBase 4.0.0 5) REVISION: 1-local-9d5c90e562c61d9fcf8894993187aa20239db47e BUILD_TIME: Oct 28 2022 23:01:19 |
Sysbench OLTP benchmark
Test plan:
- Use OBD to deploy an OceanBase cluster. Install ODP on a separate server with a specification of ecs.c7.16xlarge, which provides 64 CPU cores and 128 GB of memory, and install Sysbench tools on another server to avoid resource contention.
- Deploy the cluster in the 1-1-1 architecture, where the cluster has three zones and each zone has one OBServer node. After successful deployment, create the tenant and user required for running the Sysbench benchmark. The sys tenant is a built-in system tenant used for managing the cluster. Do not use the sys tenant to run the benchmark. Set `primary_zone` to `RANDOM` for the tenant, which indicates that the leader of each new table partition is randomly assigned to one of the three nodes.
- Launch the Sysbench client and run the point_select, read_write, read_only, and write_only tests.
- Set the `time` parameter to `60s` for each round of the test. The number of threads can be 32, 64, 128, 256, 512, or 1,024.
- The test dataset consists of 30 non-partitioned tables, each with 1 million rows of data. To be specific, set `tables` to `30` and `table_size` to `1000000`.
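The parameters above map directly onto sysbench command-line flags. As a sketch of one round of the plan (host, port, and credentials are placeholders; `--tables`, `--table-size`, `--time`, and `--threads` are standard sysbench 1.x options):

```python
def sysbench_cmd(workload: str, threads: int,
                 host: str = "127.0.0.1", port: int = 2883,
                 user: str = "sbtest", password: str = "***") -> str:
    """Build a sysbench run command for one round of the test plan.

    workload is one of: point_select, read_only, write_only, read_write,
    matching the bundled oltp_* Lua scripts.
    """
    return (
        f"sysbench oltp_{workload} "
        f"--mysql-host={host} --mysql-port={port} "
        f"--mysql-user={user} --mysql-password={password} "
        f"--tables=30 --table-size=1000000 "
        f"--time=60 --threads={threads} run"
    )

print(sysbench_cmd("read_write", 1024))
```

Each round of the benchmark varies only `--threads`; a matching `prepare` invocation with the same `--tables`/`--table-size` values loads the dataset beforehand.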
Tenant specifications:
- MAX_CPU = 26
- MEMORY_SIZE = 70 GB
Results: Point Select performance
| Threads | V4.0 QPS | V4.0 95% Latency (ms) | V4.1 QPS | V4.1 95% Latency (ms) |
|---|---|---|---|---|
| 32 | 139594.87 | 0.26 | 135146.05 | 0.27 |
| 64 | 257088.19 | 0.29 | 251219.37 | 0.30 |
| 128 | 453625.18 | 0.34 | 431564.13 | 0.36 |
| 256 | 746710.21 | 0.50 | 686271.21 | 0.55 |
| 512 | 964910.06 | 0.81 | 920502.51 | 0.92 |
| 1024 | 988444.97 | 1.64 | 972018.58 | 1.76 |
Read-only performance
| Threads | V4.0 QPS | V4.0 95% Latency (ms) | V4.1 QPS | V4.1 95% Latency (ms) |
|---|---|---|---|---|
| 32 | 100505.96 | 6.43 | 117325.66 | 4.65 |
| 64 | 189356.32 | 6.55 | 214733.98 | 5.18 |
| 128 | 326115.21 | 7.43 | 370499.19 | 6.09 |
| 256 | 479483.89 | 10.46 | 572924.33 | 8.13 |
| 512 | 584193.84 | 19.29 | 705032.91 | 13.95 |
| 1024 | 561898.43 | 70.55 | 755182.87 | 28.16 |
Write-only performance
| Threads | V4.0 QPS | V4.0 95% Latency (ms) | V4.1 QPS | V4.1 95% Latency (ms) |
|---|---|---|---|---|
| 32 | 38301.16 | 6.43 | 42329.95 | 5.37 |
| 64 | 71365.18 | 6.91 | 74408.79 | 6.09 |
| 128 | 117430.28 | 8.13 | 126026.90 | 7.30 |
| 256 | 182082.02 | 10.84 | 197493.43 | 9.56 |
| 512 | 252286.87 | 16.71 | 288284.95 | 13.95 |
| 1024 | 290982.27 | 31.37 | 354554.10 | 25.74 |
Read/Write performance
| Threads | V4.0 QPS | V4.0 95% Latency (ms) | V4.1 QPS | V4.1 95% Latency (ms) |
|---|---|---|---|---|
| 32 | 63498.13 | 12.30 | 74857.82 | 9.39 |
| 64 | 120362.58 | 12.52 | 138795.77 | 10.27 |
| 128 | 197802.61 | 15.55 | 234340.23 | 12.08 |
| 256 | 303103.47 | 20.37 | 365657.91 | 16.12 |
| 512 | 383006.31 | 33.12 | 479109.44 | 25.74 |
| 1024 | 389587.53 | 104.84 | 561333.10 | 48.34 |
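As a quick cross-check of the headline claim (about 40% better comprehensive read-write performance at 1,024 threads), the QPS figures in the table above give:

```python
# Read/Write QPS at 1,024 threads, taken from the table above.
v40_qps, v41_qps = 389587.53, 561333.10
improvement = v41_qps / v40_qps - 1
print(f"{improvement:.1%}")  # about 44%, consistent with the ~40% headline figure
```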
TPC-C benchmark test with BenchmarkSQL
Test plan:
- Use OBD to deploy an OceanBase cluster. Deploy ODP and the TPC-C tools on the same server to avoid insufficient stress on the client.
- Deploy the OceanBase cluster in the 1-1-1 architecture, where the cluster has three zones and each zone has one OBServer node. After successful deployment, create the tenant and users required for running the TPC-C benchmark. The sys tenant is a built-in system tenant for managing the cluster. Do not use the sys tenant to run the benchmark. Set `primary_zone` to `RANDOM` for the tenant.
Tenant specifications:
- MAX_CPU = 26
- MEMORY_SIZE = 70 GB
Software versions:
- mysql-connector-java-5.1.47
- BenchmarkSQL V5.0
Test configuration:
- warehouse = 1000
- loadWorkers = 40
- terminals = 800
- runMins = 5
- newOrderWeight = 45
- paymentWeight = 43
- orderStatusWeight = 4
- deliveryWeight = 4
- stockLevelWeight = 4
Results:
| Metric | V4.0 | V4.1 |
|---|---|---|
| tpmC (NewOrders) | 307,021.00 | 332,559.26 |
| tpmTOTAL | 682,517.67 | 738,374.78 |
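The tpmC figures above line up with the overview's claim of an ~8% improvement:

```python
# tpmC (NewOrders) values taken from the table above.
v40_tpmc, v41_tpmc = 307021.0, 332559.26
improvement = v41_tpmc / v40_tpmc - 1
print(f"{improvement:.1%}")  # about 8.3%, matching the ~8% figure in the overview
```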
TPC-H benchmark
Test plan:
- Use OBD to deploy an OceanBase cluster. Deploy the TPC-H client on a server for stress testing. You do not need to deploy ODP. You can connect to any server during testing.
- Deploy the OceanBase cluster in the 1-1-1 architecture, where the cluster has three zones and each zone has one OBServer node. After successful deployment, create the tenant and users required for running the TPC-H benchmark. The sys tenant is a built-in system tenant for managing the cluster. Do not use the sys tenant to run the benchmark. Set `primary_zone` to `RANDOM` for the tenant. Load the data, sequentially execute the 22 SQL queries, and take the average value.
- Size of the test dataset: 100 GB.
Tenant specifications:
- MAX_CPU = 26
- MEMORY_SIZE = 70 GB
Results:
| Query | V4.0 (s) | V4.1 (s) |
|---|---|---|
| Q1 | 2.34 | 2.06 |
| Q2 | 0.14 | 0.22 |
| Q3 | 0.72 | 1.50 |
| Q4 | 0.56 | 0.46 |
| Q5 | 2.25 | 0.95 |
| Q6 | 0.23 | 0.13 |
| Q7 | 1.52 | 1.48 |
| Q8 | 0.70 | 0.61 |
| Q9 | 5.22 | 2.95 |
| Q10 | 1.24 | 1.02 |
| Q11 | 0.23 | 0.18 |
| Q12 | 1.62 | 1.13 |
| Q13 | 2.41 | 1.95 |
| Q14 | 0.36 | 0.28 |
| Q15 | 0.79 | 0.88 |
| Q16 | 0.66 | 0.62 |
| Q17 | 0.63 | 0.54 |
| Q18 | 0.93 | 0.83 |
| Q19 | 0.78 | 0.61 |
| Q20 | 1.17 | 1.09 |
| Q21 | 2.42 | 2.63 |
| Q22 | 1.24 | 1.09 |
| Total | 28.16 | 23.21 |
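Using the totals from the table above, the overall TPC-H speedup works out as follows, consistent with the ~17% improvement stated in the overview:

```python
# Total runtime (seconds) for the 22 queries, from the table above.
v40_total, v41_total = 28.16, 23.21
speedup = (v40_total - v41_total) / v40_total
print(f"{speedup:.1%}")  # about 17.6%
```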
TPC-DS benchmark
Test plan:
- Use OBD to deploy an OceanBase cluster. Deploy the client on a separate server for stress testing. You do not need to deploy ODP. You can connect to any server during testing.
- Deploy the OceanBase cluster in the 1-1-1 architecture, where the cluster has three zones and each zone has one OBServer node. After successful deployment, create the tenant and users required for running the TPC-DS benchmark. The sys tenant is a built-in system tenant for managing the cluster. Do not use the sys tenant to run the benchmark. Set `primary_zone` to `RANDOM` for the tenant. Load the data, sequentially execute the 99 SQL queries, and take the average value.
- Size of the test dataset: 100 GB.
Tenant information:
- MAX_CPU = 26
- MEMORY_SIZE = 70 GB
Results:
| Query | V4.1 (s) |
|---|---|
| Q1 | 0.63 |
| Q2 | 3.46 |
| Q3 | 0.16 |
| Q4 | 9.43 |
| Q5 | 1.63 |
| Q6 | 0.37 |
| Q7 | 0.28 |
| Q8 | 0.39 |
| Q9 | 1.89 |
| Q10 | 0.50 |
| Q11 | 5.79 |
| Q12 | 0.28 |
| Q13 | 2.04 |
| Q14 | 11.38 |
| Q15 | 0.23 |
| Q16 | 0.92 |
| Q17 | 0.52 |
| Q18 | 0.32 |
| Q19 | 0.17 |
| Q20 | 0.19 |
| Q21 | 0.29 |
| Q22 | 1.11 |
| Q23 | 14.20 |
| Q24 | 1.56 |
| Q25 | 0.62 |
| Q26 | 0.21 |
| Q27 | 0.35 |
| Q28 | 7.44 |
| Q29 | 0.64 |
| Q30 | 0.27 |
| Q31 | 0.62 |
| Q32 | 0.11 |
| Q33 | 0.77 |
| Q34 | 0.37 |
| Q35 | 0.87 |
| Q36 | 0.31 |
| Q37 | 0.42 |
| Q38 | 1.49 |
| Q39 | 2.31 |
| Q40 | 0.16 |
| Q41 | 0.18 |
| Q42 | 0.11 |
| Q43 | 0.64 |
| Q44 | 0.45 |
| Q45 | 0.39 |
| Q46 | 0.81 |
| Q47 | 1.16 |
| Q48 | 0.61 |
| Q49 | 0.86 |
| Q50 | 0.81 |
| Q51 | 2.82 |
| Q52 | 0.11 |
| Q53 | 0.24 |
| Q54 | 1.05 |
| Q55 | 0.11 |
| Q56 | 1.01 |
| Q57 | 0.79 |
| Q58 | 0.60 |
| Q59 | 13.64 |
| Q60 | 1.13 |
| Q61 | 0.30 |
| Q62 | 0.40 |
| Q63 | 0.23 |
| Q64 | 1.84 |
| Q65 | 2.70 |
| Q66 | 0.46 |
| Q67 | 16.12 |
| Q68 | 0.67 |
| Q69 | 0.44 |
| Q70 | 1.18 |
| Q71 | 0.54 |
| Q72 | 1.47 |
| Q73 | 0.31 |
| Q74 | 6.36 |
| Q75 | 2.26 |
| Q76 | 0.49 |
| Q77 | 0.71 |
| Q78 | 3.64 |
| Q79 | 0.65 |
| Q80 | 1.78 |
| Q81 | 0.30 |
| Q82 | 0.67 |
| Q83 | 0.69 |
| Q84 | 0.19 |
| Q85 | 0.84 |
| Q86 | 0.36 |
| Q87 | 1.59 |
| Q88 | 0.65 |
| Q89 | 0.30 |
| Q90 | 0.19 |
| Q91 | 0.13 |
| Q92 | 0.10 |
| Q93 | 0.75 |
| Q94 | 0.62 |
| Q95 | 6.37 |
| Q96 | 0.38 |
| Q97 | 0.93 |
| Q98 | 0.30 |
| Q99 | 0.70 |
| Total time | 160.83 |
Compatibility changes
Product behavioral changes
The following table describes the changes made based on OceanBase Database Community Edition V4.0.
| Feature | Version | Description |
|---|---|---|
| Floating-point data types | 4.1 | V4.0: The `float(m,d)` format definition is no longer supported; only `float(m)` and `float(m,0)` are supported. V4.1: OceanBase Database V4.1 is fully compatible with the floating-point data types and precisions in MySQL 8.0. |
| Backup file name extension | 4.1 | The file name extension .obbak is added to identify backup files of OceanBase Database. The file name extension .obarc is added to identify archive log files of OceanBase Database. |
| UNUSABLE indexes | 4.1 | UNUSABLE indexes that are generated when partitions are deleted cannot be restored by using the backup and restore operation. |
| MAJOR FREEZE | 4.1 | You can initiate a major compaction task for a specific tenant, tablet, or table. |
Views, entity tables, and virtual tables
The following table describes the changes made based on OceanBase Database Community Edition V4.0.
| Views, entity tables, and virtual tables | Description | New/Change | Version |
|---|---|---|---|
| `__all_virtual_plan_table` | Stores the current session explain plans. | New | 4.1 |
| `__all_virtual_sql_plan` | Stores the executed logical plans. | New | 4.1 |
| `__all_virtual_ha_diagnose` | Displays the diagnostic information about the current garbage collection (GC) status, including `gc_diagnose_info`, `restore_handler_role`, `restore_handler_proposal_id`, `restore_context_info`, and `restore_err_context_info`. | Change | 4.1 |
| `__all_virtual_malloc_sample_info` | Displays the stack information about memory allocation for each module. | New | 4.1 |
| `__all_ddl_task_status` | The `execution_id` column is added to identify each single-replica building task. | Change | 4.1 |
| `__all_virtual_span_info` | Stores diagnostic information. | New | 4.1 |
| `__all_virtual_show_trace` | Displays the processed diagnostic information. | New | 4.1 |
| `__all_virtual_io_scheduler` | Displays the IOPS and bandwidth statistics. | New | 4.1 |
| `__all_virtual_io_status` | Displays the queuing status of different types of I/O requests for different tenants. | New | 4.1 |
| `V$OB_SQL_AUDIT` | The `PARTITION_HIT` field is added to indicate whether partition routing is remotely performed. | Change | 4.1 |
| `DBA_RSRC_PLAN_DIRECTIVES` | Displays the configuration of I/O resources in the resource manager. | New | 4.1 |
| `V$OB_TABLET_STATS` | Displays the tablet-level statistics on adaptive compactions, which are used in troubleshooting. | New | 4.1 |
| `DBA_OB_TENANTS` | Regular users are allowed to query this view, and the following columns are added to display information such as the locality, synchronization timestamp, and tenant type: `TENANT_ROLE`, `SWITCHOVER_STATUS`, `SWITCHOVER_EPOCH`, `SYNC_SCN`, `REPLAYABLE_SCN`, `READABLE_SCN`, `RECOVERY_UNTIL_SCN`, `LOG_MODE`, and `ARBITRATION_SERVICE_STATUS`. | Change | 4.1 |
| `V$OB_LOG_STAT` | The `arbitration_member` and `degraded_list` columns are added to indicate the arbitration members and the list of degraded replicas. | Change | 4.1 |
| `V$OB_TRANSACTION_SCHEDULERS` | Displays the information about transaction schedulers, which can be used for troubleshooting. | New | 4.1 |
| `DBA_OB_ARBITRATION_SERVICE` | Displays the arbitration service configuration information of the current cluster. | New | 4.1 |
| `DBA_OB_LS_ARB_REPLICA_TASKS` | Displays the running arbitration replica tasks of the current tenant and the corresponding user tenant. | New | 4.1 |
| `DBA_OB_LS_ARB_REPLICA_TASK_HISTORY` | Displays the executed arbitration replica tasks of the current tenant and the corresponding user tenant. | New | 4.1 |
| `V$OB_ARCHIVE_DEST_STATUS` | Displays the status of each archive destination of the tenant. | New | 4.1 |
| `DBA_OB_LS_LOG_ARCHIVE_PROGRESS` | Displays the archiving progress at the log stream level. | New | 4.1 |
| `DBA_OB_LS_LOG_RESTORE_STAT` | Displays the restore summary at the log stream level. | New | 4.1 |
| `DBA_OB_CLUSTER_EVENT_HISTORY` | Displays only events related to cluster upgrades. This view is accessible only from the sys tenant. | New | 4.1 |
| `DBA_OB_TENANTS` | Displays the basic information of all tenants. | Change | 4.1 |
For more information, see System views.
Parameter changes
The following table describes the changes made based on OceanBase Database Community Edition V4.0.
| Parameter | Description | Status change | Version |
|---|---|---|---|
| `ob_proxy_readonly_transaction_routing_policy` | The routing strategy for read-only statements. | Deprecated. | 4.1 |
| `ob_plan_table_memory_limit` | The maximum memory available for the optimizer to store logical plans. | Default value: 32 MB. Value range: 1 to 1024. | 4.1 |
| `tenant_task_queue_size` | The length of the request queue of the tenant. | The default value is changed from 65536 to 8192. | 4.1 |
| `ob_enable_trace_log` | Specifies whether to enable SQL Trace. | Deprecated. | 4.1 |
| `ob_enable_show_trace` | Specifies whether to enable Show Trace. | Default value: 0, which specifies to disable Show Trace. | 4.1 |
| `_data_storage_io_timeout` | The maximum I/O timeout period of the data disk. | The default value is changed from 120s to 10s, and the minimum value is changed from 5s to 1s. | 4.1 |
| `_enable_pkt_nio` | Specifies whether to enable PKT-NIO for the new remote procedure call (RPC) framework. | Default value: True. | 4.1 |
| `rpc_memory_limit_percentage` | The maximum memory for RPCs in the tenant. | Default value: 0, which specifies not to limit the memory for RPCs. | 4.1 |
| `_private_buffer_size` | The size of the log buffer that triggers log writing. | The default value is changed from 256 KB to 16 KB. | 4.1 |
| `_mvcc_gc_using_min_txn_snapshot` | Specifies whether to use active transaction snapshots to store multiversion data. | Default value: True. | 4.1 |
| `_ob_enable_dynamic_worker` | Specifies whether to enable dynamic threads. | Default value: True. | 4.1 |
| `data_storage_warning_tolerance_time` | The maximum duration of I/O failures tolerable on the data disk before the data disk is considered damaged. | Default value: 5s. Value range: 1s to 300s. | 4.1 |
| `log_storage_warning_tolerance_time` | The maximum duration of I/O failures tolerable on the log disk before the log disk is considered damaged. | Default value: 5s. Value range: 1s to 300s. | 4.1 |
| `_min_malloc_sample_interval` | The minimum sampling interval in memory allocation. | Default value: 16. Value range: 1 to 10000. | 4.1 |
| `_max_malloc_sample_interval` | The maximum sampling interval in memory allocation. | Default value: 256. Value range: 1 to 10000. | 4.1 |
| `ob_max_read_stale_time` | The session-level maximum staleness in weak-consistency reads. For the weak-consistency read requests in the current session, the staleness of the read data is within the specified threshold. You can set different thresholds for different sessions. | Default value: 5s. | 4.1 |
| `log_archive_concurrency` | The total number of worker threads for archiving logs. You can dynamically increase or decrease the number of threads. | The effective scope of this parameter is changed from the cluster level to the tenant level. | 4.1 |
| `log_restore_concurrency` | The number of worker threads for restoring logs from the physical backup or standby database. You can dynamically increase or decrease the number of threads. | The effective scope of this parameter is changed from the cluster level to the tenant level. | 4.1 |
| `_ob_enable_direct_load` | Specifies whether to enable bypass import. | Default value: True. | 4.1 |
| `_enable_transaction_internal_routing` | Specifies whether DML statements executed within a transaction can be routed to any OBServer node. | Default value: True. | 4.1 |
| `log_restore_source` | The path from which logs are restored. | Default value: "". | 4.1 |
| `ob_startup_mode` | The startup mode of the observer process. | Default value: normal. The value arbitration specifies to start the observer process in arbitration mode. | 4.1 |
| `_ob_plan_cache_auto_flush_interval` | The interval for periodically flushing the plan cache. | Default value: 0s, which specifies not to automatically flush the plan cache. | 4.1 |
| `enable_user_defined_rewrite_rules` | Specifies whether to enable user-defined rewrite rules. | Default value: False. | 4.1 |
Note
- An online upgrade from OceanBase Database V3.x to OceanBase Database V4.x is not supported.
- You can use OMS to logically migrate data from OceanBase Database V3.x to OceanBase Database V4.1.
- An online upgrade from OceanBase Database V4.0 to OceanBase Database V4.1 is supported.
Supported components
The following table describes the recommended component versions for OceanBase Database V4.1.0.
| Component | Version |
|---|---|
| OCP | V4.0.3 |
| OceanBase Developer Center (ODC) | V4.1.2 |
| OMS | Iterated in March 2023 and intended only for ApsaraDB for OceanBase |
| OceanBase Call Interface (OBCI) | V2.0.6 |
| OceanBase C++ Call Interface (OCCI) | V1.0.2 |
| OceanBase Embedded SQL in C (ECOB) | V1.1.7 |
| OceanBase Database Proxy (ODP) | V4.1.0 |
| OceanBase JDBC | V2.4.3 |
| OceanBase Open Database Connectivity (ODBC) driver | V2.0.7 |
| OceanBase Client (OBClient) | V2.2.2 |
| OBD | V1.6.0 |
Limitations
- In OceanBase Database V4.1.0, load balancing is not supported after replica scale-out. Therefore, data that already exists before the scale-out cannot be distributed to the newly added nodes. This issue will be resolved in V4.2.
- OceanBase Database V4.1.0 does not support replicated tables. This issue will be resolved in V4.2.
- OceanBase Database V4.1.0 allows you to initiate data backup from the primary cluster. You will be able to initiate data backup from the standby cluster in later versions.
- In OceanBase Database V4.1.0, the minimum specification of a node in the production environment is 4 CPU cores and 16 GB of memory. Each tenant requires more than 2 GB of memory and more than 1 CPU core.
- OceanBase Database V4.1.0 supports only full-featured replicas. Other types of replicas such as read-only replicas will be supported in later versions.
- In OceanBase Database V4.1.0, you can only increase the number of resource units (UNIT_NUM) of a tenant. You will be allowed to decrease the number of resource units in later versions.
- In OceanBase Database V4.1.0, if you execute the DROP DATABASE statement to drop a database that contains more than 200 tables, the operation may fail. We recommend that you first execute DROP TABLE and then DROP DATABASE. This issue will be resolved in V4.2.
V4.1.0 BP1
Version information
- Release date: May 8, 2023
- Version: V4.1.0 BP1
Parameter changes
| Parameter | New/Change | Default value | Description |
|---|---|---|---|
| `memory_limit` | Change | 0 | The maximum memory for an OBServer node. When the default value 0 is used, the available memory size is calculated by using the following formula: size of the physical memory of the server × `memory_limit_percentage`. This is a cluster-level parameter. The value range is changed from [4 GB, +∞) to [1 GB, +∞). Note: If the OBServer node starts in arbitration service mode, you can set the minimum value to 1 GB. However, if the OBServer node starts in other modes, we recommend that you set the minimum value to 4 GB for the sake of stability. |
| `_easy_memory_limit` | Change | 4 GB | The maximum memory for the easy module of an OBServer node. This is a cluster-level parameter. The value range is changed from [1 GB, +∞) to [256 MB, +∞). Note: If the OBServer node starts in arbitration service mode, you can set the minimum value to 256 MB. However, if the OBServer node starts in other modes, we recommend that you set the minimum value to 1 GB for the sake of stability. |
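The `memory_limit` fallback formula can be sketched numerically. The sketch below assumes a `memory_limit_percentage` of 80, which is an assumed default for illustration rather than a value stated in these notes:

```python
def observer_memory_limit(physical_gb: float, memory_limit: float = 0,
                          memory_limit_percentage: float = 80) -> float:
    """Effective memory cap for an OBServer node, in GB.

    When memory_limit is 0 (the default), fall back to
    physical memory x memory_limit_percentage, per the formula above.
    The percentage default here is an assumption for illustration.
    """
    if memory_limit > 0:
        return memory_limit
    return physical_gb * memory_limit_percentage / 100

print(observer_memory_limit(128))  # 102.4 (GB) on a 128 GB server
```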
Fixed issues
- In some data distribution architectures, if the amount of encoded data written to the memory buffer exceeds 2 MB during a major compaction, the major compaction is blocked.
- When columns are deleted in parallel tasks, the `macro data seq` command generates duplicate data and causes the backup to fail.
- When a stored procedure calls another stored procedure in a system package, the PL plan cache is not hit.
- If more than 16 virtual IP addresses are configured for a NIC, the operating system fails to obtain the IP address of the NIC, and the OBServer node fails to start.
- When small-specification (2C4G) Elastic Compute Service (ECS) instances are used, the arbitration service cannot start because of insufficient resources. This issue is fixed from two aspects: on the system side, the lower limit of the required resources is reduced; on the O&M side, O&M engineers need to pay attention to the value of the `vm.min_free_kbytes` parameter to prevent the started process from being killed by the operating system when the remaining resources fall below the lower limit.
- The `NULLABLE` and `IS_VISIBLE` fields in the `information_schema.STATISTICS` view are incompatible with OceanBase Database V4.0.0.
- No error is reported, or the error is not reported in the same form as in MySQL, when the REGEXP function contains both binary and non-binary parameters.
- The use of an index created on a generated column deteriorates performance.