Best practices for connecting Java applications to OceanBase Database

Last Updated: 2024-12-16 08:07:35
What is on this page

  • Applicable version
  • Key concepts
  • Application and database driver settings
  • JDBC connection settings
    • Connection example
    • Timeout settings
    • Session status variables
    • Batch execution parameters
    • Choose appropriate configurations in different scenarios
  • Batch execution in OceanBase Database
    • INSERT
    • UPDATE
    • DELETE
  • Check status of batch execution tasks in OceanBase Database
  • Return value of the executeBatch() method during batch execution
  • Result sets
    • Comparison of different result set types
    • Configure different types of result sets
    • Suggestions for selecting a result set type
    • Considerations and limitations
    • Configuration examples
  • Configure connection pool parameters
    • Basic connection pool parameters
    • Suggestions on configuring a connection pool
    • Configure a Druid connection pool
    • Configuration examples of other connection pools
  • Summary
    • JDBC parameter settings
    • Connection pool parameter settings
    • Keepalive settings
    • Settings related to the application and database driver

This topic describes how to configure Java Database Connectivity (JDBC) parameters and Druid connection pool parameters to ensure smooth connections between Java applications and OceanBase Database.

Applicable version

This topic applies to OceanBase Database V2.x, V3.x, and V4.x.

Key concepts

To ensure that the connection between your application and OceanBase Database is both stable and efficient while maintaining security, it's important to understand the following key concepts:

  • JDBC

    JDBC is an API that enables Java applications to interact with databases. It provides a set of standard interfaces for executing SQL statements, retrieving query results, and managing database transactions.

  • Connection pool

    A connection pool is a database connection management technique that maintains a predefined set of database connections. When the application starts, multiple connections are created and kept in memory; when the application needs a connection, it borrows an idle one from the pool and returns it after use. Reusing connections this way avoids the overhead of repeatedly creating and destroying them, which significantly reduces connection setup time and improves application performance and resource efficiency. Common connection pool implementations include Druid, Database Connection Pool (DBCP), and c3p0.

  • Keepalive

    A connection keepalive mechanism regularly checks the validity of database connections, preventing the application from using invalid connections and improving system stability. You can enable this mechanism by configuring parameters such as testWhileIdle, testOnBorrow, and testOnReturn.
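In Druid, for example, these keepalive checks map to pool properties like the following. This is a minimal illustrative fragment, not a complete configuration; `SELECT 1` as the validation query and the eviction interval are assumptions to adapt to your setup:

```
testWhileIdle=true
testOnBorrow=false
testOnReturn=false
validationQuery=SELECT 1
timeBetweenEvictionRunsMillis=60000
```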

The following sections outline the specific parameters required to connect your application to OceanBase Database. Proper configuration of these parameters is essential to ensure system stability and performance.

Application and database driver settings

When your application interacts with OceanBase Database, properly configuring the driver can significantly enhance system stability and performance. Below are some general recommendations:

  • Connection timeout settings: Configure parameters such as connectTimeout and socketTimeout to define the timeout period for connecting to the database. Additionally, set an appropriate retry interval to enable quick reconnection after failures, reducing network congestion and system load.

  • Logging: During runtime, your application needs to record OceanBase Database error codes and connection details, such as the IP address, port number, and username. This information helps database administrators (DBAs) quickly diagnose and resolve issues.

  • Version compatibility: Ensure that the client library (such as .so or .jar files) is compatible with the database server version by setting the appropriate parameters. This ensures smooth interaction between components.

  • Database switching method: We recommend that you use the Connection.setCatalog(dbname) method for database switching instead of executing the SQL statement use <dbname>. This improves code readability and maintainability.

  • Session status variables: Use JDBC interfaces like setAutoCommit, setReadOnly, and setTransactionIsolation to configure session status variables. This reduces the need for SQL statements and database interactions, thereby improving performance.

  • Transaction processing: Your application should call the getConnection method to obtain a database connection before executing a single transaction (which may include one or more SQL statements), and close the connection after the transaction completes. This ensures that each transaction is processed independently, thereby maintaining data consistency and isolation.

By configuring these settings appropriately, you can significantly improve the efficiency and stability of interactions between your application and OceanBase Database. The following sections provide specific parameters and examples to help you design an optimal system architecture.
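Several of the recommendations above (explicit transactions, API-based database switching, one transaction per borrowed connection, closing the connection afterwards) can be sketched together as follows. This is a hedged sketch, not an official sample: the URL, credentials, catalog name xxxx, and table test1 are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TxPattern {
    // One transaction per obtained connection; the connection is closed
    // (returned) when the try-with-resources block exits.
    static void runInTransaction(String url, String user, String pwd) throws SQLException {
        try (Connection conn = DriverManager.getConnection(url, user, pwd)) {
            conn.setAutoCommit(false);      // explicit transaction
            conn.setCatalog("xxxx");        // switch DB via the API, not `use <dbname>`
            try (PreparedStatement ps =
                     conn.prepareStatement("UPDATE test1 SET c2 = ? WHERE c1 = ?")) {
                ps.setString(1, "new value");
                ps.setInt(2, 1);
                ps.executeUpdate();
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();            // keep the transaction atomic on failure
                throw e;
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        // Pass <url> <user> <password> to run against a real database.
        if (args.length == 3) {
            runInTransaction(args[0], args[1], args[2]);
        }
    }
}
```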

JDBC connection settings

This section explains how to configure JDBC connection settings. These settings can be applied in the connection pool's ConnectionProperties or directly within the JdbcUrl. For better management, we recommend that you configure these settings in the connection pool.

Connection example

Below is an example of a JDBC connection URL that you can use to connect to a database:

jdbc:mysql://x.x.x.x(ip):xx(port)/xxxx(dbname)?rewriteBatchedStatements=true&allowMultiQueries=true&useLocalSessionState=true&useUnicode=true&characterEncoding=utf-8&socketTimeout=3000000&connectTimeout=60000

Description of parameter settings

Next, we will provide a detailed explanation of each parameter in the JDBC connection example to help you better understand and use them.

  • jdbc:mysql://x.x.x.x(ip):xx(port)/xxxx(dbname)

    • x.x.x.x (ip): the actual IP address of the database server. Ensure this is the actual address of the server.
    • xx(port): the listening port of the database service. The default value for MySQL is 3306.
    • xxxx(dbname): the name of the database to connect. Ensure the database exists and the user has sufficient permissions to access it.
  • rewriteBatchedStatements=true

    This parameter specifies whether to rewrite batched statements. If set to true, the JDBC driver rewrites the batched statements into a single SQL statement to reduce round trips to the database, thereby improving performance.

  • allowMultiQueries=true

    This parameter specifies whether multiple SQL statements, separated by semicolons, can be executed in a single request. Setting it to true can improve execution efficiency, but it introduces SQL injection risks. Proceed with caution, and strictly validate and sanitize user input.

  • useLocalSessionState=true

    If you set this parameter to true, the MySQL JDBC driver tracks session status variables locally instead of synchronizing every status change to the database in real time. This speeds up the execution of some operations.

  • useUnicode=true

    If you set this parameter to true, the JDBC driver uses the Unicode character set for string input and output to ensure that non-ASCII characters, such as characters in Chinese or other languages, can be correctly processed.

  • characterEncoding=utf-8

    This parameter specifies the character encoding format used for database connections. You can set it to utf-8. UTF-8 is a general character set that supports characters in multiple languages. You can use it to avoid garbled characters during data transmission.

  • socketTimeout=3000000

    This parameter specifies the socket timeout period, in milliseconds. A connection is automatically closed if no data is transmitted within the specified period.

  • connectTimeout=60000

    This parameter specifies the timeout period for the client to establish a connection with the database server, in milliseconds. If the client fails to establish a database connection within the specified period, an exception is thrown.
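The same settings can also be supplied programmatically as a java.util.Properties object passed to the standard DriverManager.getConnection(url, props) overload, instead of being appended to the URL string. A minimal sketch (host, port, and credentials omitted; the values mirror the URL example above):

```java
import java.util.Properties;

public class JdbcProps {
    // Collect the recommended driver settings from the URL example
    // as a Properties object suitable for DriverManager.getConnection(url, props).
    static Properties recommended() {
        Properties p = new Properties();
        p.setProperty("rewriteBatchedStatements", "true");
        p.setProperty("allowMultiQueries", "true");
        p.setProperty("useLocalSessionState", "true");
        p.setProperty("useUnicode", "true");
        p.setProperty("characterEncoding", "utf-8");
        p.setProperty("socketTimeout", "3000000");
        p.setProperty("connectTimeout", "60000");
        return p;
    }

    public static void main(String[] args) {
        Properties p = recommended();
        if (!"true".equals(p.getProperty("rewriteBatchedStatements"))) throw new AssertionError();
        if (!"60000".equals(p.getProperty("connectTimeout"))) throw new AssertionError();
        System.out.println("ok");
    }
}
```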

The following sections detail the parameters and variables in the JDBC connection example.

Timeout settings

You can configure the following parameters in the JDBC connection example.

Parameter Description Recommended value
socketTimeout The network socket timeout period. 5000ms
connectTimeout The connection timeout period. 500ms
  • We recommend that you specify the socketTimeout parameter based on the characteristics of your application. If you do not specify this parameter, the default value 0ms is used, meaning the socket never times out.
  • We recommend that you specify the connectTimeout parameter based on the characteristics of your application. If you do not specify this parameter, the default value 30000ms is used.
  • You can configure the session variables ob_query_timeout and ob_trx_timeout of OceanBase Database to specify the SQL query timeout period and transaction timeout period, respectively.
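For example, the session variables mentioned above can be set as follows. Note that these OceanBase variables are specified in microseconds; the values shown are purely illustrative:

```
SET SESSION ob_query_timeout = 10000000;   -- SQL query timeout: 10 s
SET SESSION ob_trx_timeout   = 100000000;  -- transaction timeout: 100 s
```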

Session status variables

You can configure the following parameter in the JDBC connection example.

Parameter Description Recommended value
useLocalSessionState Specifies whether to use the local session status variables. The default value is false. You can specify this parameter when you use MySQL Connector/J to connect to OceanBase Database. OceanBase Connector/J does not support this parameter. true
  • We recommend that you set useLocalSessionState to true to use local session status variables. This can reduce the interactions between your application and the database, thereby improving the transaction performance. You can specify this parameter when you use MySQL Connector/J to connect to OceanBase Database. OceanBase Connector/J does not support this parameter.
  • Local session status variables are autocommit, read_only, and transaction isolation.
  • Note that if you want to use the local session status variables, you must set these variables by using the JDBC interfaces instead of SQL statements.
Variable SQL statement JDBC interface
autocommit set autocommit=0 setAutoCommit(false)
transaction isolation set tx_isolation='read-committed' setTransactionIsolation(2)
read_only set tx_read_only=0 setReadOnly(false)

Batch execution parameters

In a cloud or distributed database environment, the network link between the application server and the database server is long, so network latency is a decisive factor in transaction performance. We therefore recommend that your application batch its operations to reduce round trips to the database.

The following table describes the important JDBC parameters that affect the batch execution behavior and the interaction efficiency between your application and the database.

Parameter Default value Description
allowMultiQueries FALSE Specifies whether to concatenate multiple requests with semicolons (;) in a statement. Batch execution does not depend on this parameter but on rewriteBatchedStatements. Note that allowMultiQueries must be set to TRUE if the JDBC version is earlier than 1.1.9, and is optional if the JDBC version is 2.2.6 or later.
rewriteBatchedStatements FALSE Specifies whether to rewrite the INSERT statement in batch execution. For a PreparedStatement object, multiple values are combined in one statement. For a Statement object, multiple INSERT statements are concatenated with semicolons(;).
useServerPrepStmts FALSE Specifies whether to use prepared statements on the server. This parameter is valid only for PreparedStatement objects. The value TRUE specifies to use prepared statements on the server. In this case, _ob_enable_prepared_statement must be set to TRUE for the OBServer node, and the binary protocol must be used for communication. The value FALSE specifies to use prepared statements on the client. In this case, the text protocol must be used for communication.
cachePrepStmts FALSE/TRUE Specifies whether the JDBC driver caches prepared statements. The cached content on the client is different from that on the server. Note that the default value is FALSE for OceanBase Connector/J V1.x, and is TRUE for OceanBase Connector/J V2.x.
prepStmtCacheSize 25/250 The number of prepared statements that can be cached. This parameter is valid only when cachePrepStmts is set to TRUE. Note that the default value is 25 for OceanBase Connector/J V1.x, and is 250 for OceanBase Connector/J V2.x.
prepStmtCacheSqlLimit 256/2048 The maximum size of SQL statements that can be cached. This parameter is valid only when cachePrepStmts is set to TRUE. Note that the default value is 256 for OceanBase Connector/J V1.x, and is 2048 for OceanBase Connector/J V2.x.
maxBatchTotalParamsNum 30000 The maximum number of parameters that can be passed in the executeBatch method. Note that this parameter is valid only for OceanBase Connector/J V2.2.7 and later.

You can use either a Statement object or a PreparedStatement object to implement batch execution. The following sections describe how to use the two types of objects for batch execution.

PreparedStatement object

This section describes how to use a PreparedStatement object to batch insert multiple records within a transaction. Here is the sample code:

conn = DriverManager.getConnection(obUrl);
conn.setAutoCommit(false);               // use an explicit transaction
String SQL = "INSERT INTO TEST1 (C1, C2) VALUES (?, ?)";
PreparedStatement pstmt = conn.prepareStatement(SQL);
int rowCount = 5, batchCount = 10;
for (int k = 1; k <= batchCount; k++) {
    for (int i = 1; i <= rowCount; i++) {
        pstmt.setInt(1, (k * 100 + i));
        pstmt.setString(2, "test value");
        pstmt.addBatch();                // queue the row
    }
    int[] count = pstmt.executeBatch();  // send one batch of rowCount rows
    pstmt.clearBatch();
}
conn.commit();
pstmt.close();
conn.close();

The following table describes the batch execution behavior that varies depending on the value of useServerPrepStmts when a PreparedStatement object is used.

useServerPrepStmts INSERT UPDATE Remarks
TRUE The values in multiple INSERT statements are concatenated as question marks (?) in one INSERT statement. The format is INSERT INTO TEST1 VALUES (?), (?),..., (?). The variables in multiple independent UPDATE statements are replaced with question marks (?). Scenario 1
FALSE The values in multiple INSERT statements are directly used and concatenated in one INSERT statement. The format is INSERT INTO TEST1 VALUES (1), (2), ...,(10). Multiple independent UPDATE statements are concatenated with semicolons (;). Scenario 2

Statement object

This section describes how to use a Statement object to batch insert multiple records. Here is the sample code:

conn = DriverManager.getConnection(obUrl);
conn.setAutoCommit(false);              // use an explicit transaction
Statement stmt = conn.createStatement();
String SQL1 = "INSERT INTO test1 (c1, c2) VALUES (1, 'test11')";
stmt.addBatch(SQL1);
String SQL2 = "INSERT INTO test1 (c1, c2) VALUES (2, 'test12')";
stmt.addBatch(SQL2);
String SQL3 = "INSERT INTO test1 (c1, c2) VALUES (3, 'test13')";
stmt.addBatch(SQL3);
int[] count = stmt.executeBatch();      // send all queued statements in one batch
stmt.clearBatch();
conn.commit();
stmt.close();
conn.close();

The following table describes the batch execution behavior of a Statement object.

useServerPrepStmts INSERT UPDATE Remarks
TRUE Multiple independent INSERT statements are concatenated with semicolons (;). Multiple independent UPDATE statements are concatenated with semicolons (;). Scenario 3
FALSE Multiple independent INSERT statements are concatenated with semicolons (;). Multiple independent UPDATE statements are concatenated with semicolons (;). Scenario 4

Choose appropriate configurations in different scenarios

Based on the preceding methods of using Statement and PreparedStatement objects for batch inserts and updates, this section provides recommended configurations that get the most out of batch execution in different scenarios.

Batch insert

We recommend that you use the following configurations for batch execution in scenarios 1 and 2:

Scenario 1:

  • JDBC object:

    PreparedStatement object
    
  • Server parameter:

    _ob_enable_prepared_statement=TRUE
    
  • JDBC parameters:

    rewriteBatchedStatements=TRUE
    useServerPrepStmts=TRUE
    cachePrepStmts=TRUE
    prepStmtCacheSize=<Specify the value based on the actual situation.>
    prepStmtCacheSqlLimit=<Specify the value based on the actual situation.>
    maxBatchTotalParamsNum=<Specify the value based on the actual situation.>
    

Scenario 2:

  • JDBC object:

    PreparedStatement object
    
  • JDBC parameters:

    rewriteBatchedStatements=TRUE
    useServerPrepStmts=FALSE
    

Batch update

This section provides the recommended configurations for batch update in scenarios 2, 3, and 4. The text protocol allows you to execute multiple UPDATE statements in batches.

Scenario 2:

  • JDBC object:

    PreparedStatement object
    
  • Server parameters:

    ob_enable_batched_multi_statement=TRUE
    _enable_static_typing_engine=TRUE (required in OceanBase Database V3.2)
    
  • Server variable:

    _enable_dist_data_access_service=1 (required in OceanBase Database V3.2)
    
  • JDBC parameters:

    rewriteBatchedStatements=TRUE
    useServerPrepStmts=FALSE
    allowMultiQueries=TRUE (This parameter setting can avoid behavioral differences among different JDBC driver versions.)
    

Scenarios 3 and 4:

The recommended configurations for batch update in scenarios 3 and 4 are as follows:

  • JDBC object:

    Statement object
    
  • Server parameters:

    ob_enable_batched_multi_statement=TRUE
    _enable_static_typing_engine=TRUE (required in OceanBase Database V3.2)
    
  • Server variable:

    _enable_dist_data_access_service=1 (required in OceanBase Database V3.2)
    
  • JDBC parameters:

    rewriteBatchedStatements=TRUE
    allowMultiQueries=TRUE (This parameter setting can avoid behavioral differences among different JDBC driver versions.)
    

By using the recommended configurations in corresponding scenarios, you can fully leverage JDBC to batch execute data operations more efficiently.
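As a concrete illustration, the scenario 1 settings above could be appended to the JDBC URL from the earlier connection example. The host, port, and database name remain placeholders, and the cache sizes shown are simply the OceanBase Connector/J V2.x defaults from the parameter table, not tuned values:

```
jdbc:mysql://x.x.x.x(ip):xx(port)/xxxx(dbname)?rewriteBatchedStatements=true&useServerPrepStmts=true&cachePrepStmts=true&prepStmtCacheSize=250&prepStmtCacheSqlLimit=2048&maxBatchTotalParamsNum=30000
```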

Batch execution in OceanBase Database

The following table describes the OceanBase Database parameters related to batch execution.

Parameter Default value Scope Effective mode Description
ob_enable_batched_multi_statement FALSE Tenant Dynamically Specifies whether to enable batch processing of multiple statements. If this parameter is set to TRUE, OceanBase Database parses multiple UPDATE statements of the same format as one statement and generates a physical batch plan based on related parameter settings and the data distribution.
_ob_enable_prepared_statement TRUE Cluster Dynamically Specifies whether to use prepared statements on the server.
_enable_static_typing_engine TRUE Cluster Dynamically Specifies whether to use a new SQL engine. The old SQL engine supports only a batch update operation that comprises all primary key columns, whereas the new SQL engine supports a batch update operation that does not comprise all primary key columns. The new SQL engine is supported in OceanBase Database V3.2 and later.
The following table describes the related system variable.

Variable Default value Level Description
_enable_dist_data_access_service TRUE Session/Global Specifies whether to enable the data access service (DAS). In OceanBase Database V3.2 and later, you must set this variable to TRUE to implement batch update.

OceanBase Database executes batch INSERT, UPDATE, and DELETE statements in different ways.

INSERT

In batch execution of INSERT statements, you need to pay attention to the following scenarios:

Scenario 1: The OBServer node receives a COM_STMT_PREPARE request (request_type=5) and a COM_STMT_EXECUTE request (request_type=6) for the INSERT statements. The benefits are:

  • Communication is performed only twice for the batch execution of the INSERT statements.
  • The compilation time is reduced by using a PreparedStatement object.
  • If the executeBatch method needs to be called multiple times and parameters such as cachePrepStmts are properly set, fewer COM_STMT_PREPARE requests (request_type=5) are received and only COM_STMT_EXECUTE requests (request_type=6) need to be executed.

Scenario 2: The OBServer node receives one COM_QUERY request (request_type=2) for the INSERT statements. The benefits are:

  • Communication is performed only once for the batch execution of the INSERT statements.

Scenarios 3 and 4: The OBServer node receives multiple INSERT statements concatenated with semicolons (;) and executes them in order. The benefit is:

  • Communication is performed only once for the batch execution of the INSERT statements.

UPDATE

The case is different for batch execution of UPDATE statements. If ob_enable_batched_multi_statement is set to FALSE, the batch UPDATE statements in scenarios 1, 2, 3, and 4 are executed in order on the OBServer node, with no particular optimization. If ob_enable_batched_multi_statement is set to TRUE, OceanBase Database parses multiple UPDATE statements of the same format as one statement in scenarios 2, 3, and 4, and the generated physical batch plan can significantly improve execution efficiency. Take note of the following points:

  • OceanBase Database V3.1 supports this optimization only when the predicate comprises all primary key columns. OceanBase Database V3.2 and later do not have this limitation, but require that the _enable_dist_data_access_service parameter be set to TRUE and the new SQL engine be enabled.
  • UPDATE statements in the same batch cannot have updates to the same row.
  • Explicit transactions must be used.

DELETE

In OceanBase Database V3.x, batch DELETE statements are executed in order. In OceanBase Database V4.x, the DELETE statement is optimized.

Check status of batch execution tasks in OceanBase Database

You can query the gv$sql_audit view for the batch execution status. Here are some examples in different scenarios:

  • Batch execution of INSERT statements in scenario 1:

    query_sql: insert into test_multi_queries (c1, c2) values (?, ?)
    request_type: 5
    ps_stmt_id: 1
    query_sql: insert into test_multi_queries (c1, c2) values (?, ?),(?, ?),(?, ?)
    request_type: 5
    ps_stmt_id: 2
    request_type: 6
    
  • Batch execution of INSERT statements in scenario 2:

    query_sql: insert into test_multi_queries (c1, c2) values (1, 'PreparedStatement; rewriteBatchedStatements=true&allowMultiQueries=true&useLocalSessionState=true'),(2, ...),(3, ...)
    
  • Batch execution of UPDATE statements in scenario 2:

    query_sql: update test2 set c2='batch update1' where c1=1; update test2 set c2='batch update2' where c1=2; ...
    ret_code: 0
    is_batched_multi_stmt: 1
    

Notice: If ret_code is 5787, batch update does not take effect. In this case, you need to find out the cause based on the preceding description.
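To inspect these records yourself, you can filter gv$sql_audit by SQL text. The following is a sketch: the column names are the ones shown in the examples above, the view is assumed to be queried from the oceanbase schema in MySQL mode, and the LIKE pattern and row limit are illustrative:

```
SELECT query_sql, request_type, ret_code, is_batched_multi_stmt
FROM oceanbase.gv$sql_audit
WHERE query_sql LIKE 'insert into test_multi_queries%'
LIMIT 10;
```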

Return value of the executeBatch() method during batch execution

After the executeBatch() method is called, it returns an int[] array. For batch INSERT statements and batch UPDATE statements:

  • If the statements are finally executed one by one in OceanBase Database, the returned array contains the number of rows affected by each operation.
  • If the statements are executed as a whole, for example, when multiple INSERT statements are combined or multiple UPDATE statements are executed as a physical batch plan, each element in the returned array is -2 (Statement.SUCCESS_NO_INFO), indicating that the execution was successful but the number of affected rows is unknown.
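The two cases can be told apart in code by checking for Statement.SUCCESS_NO_INFO (-2). A minimal helper, as a sketch:

```java
import java.sql.Statement;

// Sketch: interpreting the int[] returned by executeBatch(). Per-statement row
// counts are summed; Statement.SUCCESS_NO_INFO (-2) means the batch ran as a
// whole and per-statement counts are unavailable.
public class BatchResultSketch {
    /** Returns the total number of affected rows, or -1 if the counts are unknown. */
    public static long totalAffectedRows(int[] counts) {
        long total = 0;
        for (int c : counts) {
            if (c == Statement.SUCCESS_NO_INFO) {
                return -1; // executed as one physical batch; counts not reported
            }
            total += c;
        }
        return total;
    }
}
```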

Result sets

OceanBase Database supports returning the following types of result sets: static result set, streaming result set, and cursor result set.

  • Static result set (RowDataStatic): applies to general queries. This is the default result set type.
  • Streaming result set (RowDataDynamic): applies to row-by-row queries involving large amounts of data.
  • Cursor result set (RowDataCursor): applies to cursor queries on multiple rows of data.

Comparison of different result set types

The following table compares the pros and cons of different result set types.

Result set type Query mode Data read mode Advantage Disadvantage
Static result set General query All results are read at a time. The application code is simple. When the data volume is small, result sets of this type are read faster than result sets of the other two types. The out-of-memory (OOM) error may occur when the data volume is large.
Streaming result set Streaming query Data is read from the socket row by row rather than all at a time. The OOM error will not occur even when the data volume is large. The execution occupies a long DB time, which may cause network congestion. Some cursor-related methods, such as isLast() and absolute(), cannot be used.
Cursor result set Cursor query Multiple rows of data are read at a time. All results are obtained through multiple reads. The OOM error will not occur even when the data volume is large. Compared with a streaming query, a single cursor query occupies a shorter DB time. The execution is slow and the response time is long, with more resources consumed.

Configure different types of result sets

The following table describes how to configure different types of result sets.

Result set type Configuration method Configuration description
Static result set URL No special configuration is required.
Java No special configuration is required.
Streaming result set URL No special configuration is required.
Java You need to use PreparedStatement and set fetchSize to Integer.MIN_VALUE. Example: PreparedStatement st1 = conn.prepareStatement(querySql1); st1.setFetchSize(Integer.MIN_VALUE);
Cursor result set URL You need to set both useCursorFetch and useServerPrepStmts to true.
Java You need to set fetchSize to a value greater than 0. Example: PreparedStatement st1 = conn.prepareStatement(querySql1); st1.setFetchSize(100);

JDBC parameter useCursorFetch

Parameter Description Recommended value
useCursorFetch Specifies whether to allow the database to establish cursors and distribute data based on the fetch size. true

Notice: If you set useCursorFetch to true, useServerPrepStmts is automatically set to true as well.
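For reference, a JDBC URL enabling a cursor result set might look as follows; the host, port, and database name are placeholders:

```java
// Sketch: a JDBC URL enabling cursor fetch. Because useCursorFetch=true implies
// useServerPrepStmts=true, setting both explicitly just makes the intent visible.
public class CursorFetchUrl {
    static final String URL =
        "jdbc:oceanbase://host:2883/test?useCursorFetch=true&useServerPrepStmts=true";
}
```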

Suggestions for selecting a result set type

Select a result set type based on the volume of data to be processed.

  • For queries with a small amount of data, use a static result set.
  • For queries with a large amount of data, use a cursor or streaming result set to limit the result set size (for example, through pagination).
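One common way to limit the result set size is LIMIT/OFFSET pagination. A minimal sketch of a page-query builder (the base SQL and ordering are illustrative, and the base query should carry a stable ORDER BY):

```java
// Sketch: building a paginated query (MySQL-mode LIMIT/OFFSET) to bound the
// size of each result set instead of fetching everything at once.
public class PaginationSketch {
    /** pageIndex is zero-based. */
    public static String pageQuery(String baseSql, int pageSize, int pageIndex) {
        return baseSql + " limit " + pageSize + " offset " + ((long) pageSize * pageIndex);
    }
}
```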

Considerations and limitations

  • If you specify ResultSet.TYPE_SCROLL_INSENSITIVE in MySQL mode, a full result set is used.
  • If you specify useCursorFetch=true in the URL, useServerPrepStmts is forcibly set to true. If batch update is required, you need to configure an independent data source because batch update depends on useServerPrepStmts=false.
  • For a cursor result set in MySQL mode, the number of prepared statement (PS) parameters cannot exceed 65535.
  • In a streaming result set, some cursor-related methods, such as isLast() and absolute(), cannot be used.
  • If the client processes data slowly and the server performs no I/O writes for a long time, a timeout may occur. Each time the database writes data, the timeout period, which is specified by the net_write_timeout variable (default: 60s), is refreshed. You can specify the netTimeoutForStreamingResults property in the JDBC URL to avoid timeouts. The default value of the property is 600s.
  • If the data volume is large, the cursor result set needs to be stored on the disk of the OBServer node and the first response after execution may be slow.

Configuration examples

Use a cursor result set

PreparedStatement st1 = conn.prepareStatement(querySql1);
st1.setFetchSize(100);

Use a streaming result set

PreparedStatement st1 = conn.prepareStatement(querySql1);
st1.setFetchSize(Integer.MIN_VALUE);

Configure connection pool parameters

Basic connection pool parameters

The following table describes some important connection pool parameters for ensuring efficient management of database connections.

Parameter definition Description Druid parameter DBCP parameter C3P0 parameter
Number of initial connections The number of connections established during initialization of the connection pool. initialSize(0) initialSize(0) initialPoolSize(3)
Minimum number of connections The minimum number of available connections retained for the connection pool. minIdle(0) minIdle(0) minPoolSize(3)
Maximum number of connections The maximum number of connections allowed in the connection pool. If the number of connections in the connection pool exceeds this number, an exception is thrown, indicating that the connection pool is full. maxActive(8) maxActive(8) maxPoolSize(15)
Connection idle timeout period The maximum idle period of connections. If the idle period of a connection exceeds the specified value, the connection is disconnected. The default value is 8 hours in MySQL. We recommend that you set the timeout period to 12 minutes in a cloud environment. minEvictableIdleTimeMillis(30min) minEvictableIdleTimeMillis(30min) (timeBetweenEvictionRunsMillis must be set to a value greater than 0.) maxIdleTime (0 indicates no timeout.)
Timeout period for obtaining a connection from the connection pool The maximum wait time to request a connection when the connection pool is full, in milliseconds. A large value indicates a longer response time of your application. maxWait (-1 indicates no timeout.) maxWaitMillis (-1 indicates no timeout.) checkoutTimeout (0 indicates no timeout.)
Timeout period before destroying a connection If a connection is not returned within the specified period, it is destroyed to avoid being leaked. However, the transaction time may be affected. removeAbandonedTimeoutMillis(300s) removeAbandonedTimeout(300s) None

Suggestions on configuring a connection pool

To achieve the optimal performance, we recommend that you:

  • Retain two connections for the console and adjust the setting based on the business concurrency and transaction time.
  • Set the timeout period of idle connections to 30 minutes.
  • Periodically check whether connections are alive by using mechanisms such as heartbeat and testOnBorrow, to ensure that a connection not used within the specified period is proactively disconnected.

Configure a Druid connection pool

We recommend that you use a Druid connection pool of V1.2.8 or later to connect a Java application to OceanBase Database.

Configuration example

maxActive: 100
initialSize: 5
maxWait: 10000
minIdle: 5
timeBetweenEvictionRunsMillis: 60000
minEvictableIdleTimeMillis: 300000
maxEvictableIdleTimeMillis: 1800000
validationQuery: select 1;
testWhileIdle: true
testOnBorrow: true
testOnReturn: false
removeAbandoned: false
removeAbandonedTimeout: 180

Parameters and description

The following table describes the parameters of the Druid connection pool.

Parameter Description
maxActive The maximum number of connections.
initialSize The number of initial connections.
maxWait The maximum wait time to obtain a connection, in milliseconds.
minIdle The minimum number of idle connections that can be retained. We recommend that you set this parameter to the same value as initialSize. When the total number of idle connections is less than this value, idle connections are not destroyed by the system, unless the idle time of the connections is greater than the maximum lifetime specified by maxEvictableIdleTimeMillis.
timeBetweenEvictionRunsMillis The interval at which DestroyThread is started to check the connections in the connection pool, in milliseconds.
minEvictableIdleTimeMillis The minimum lifetime of an idle connection in the connection pool, in milliseconds.
maxEvictableIdleTimeMillis The maximum lifetime of an idle connection in the connection pool, in milliseconds.
validationQuery The SQL statement used to verify whether a connection is valid. If validationQuery is null, none of the following three test parameters takes effect.
testWhileIdle We recommend that you set this parameter to true. If a connection has been idle longer than the time specified by timeBetweenEvictionRunsMillis, validationQuery is executed when the connection is borrowed. Otherwise, the connection is reused directly.
testOnBorrow We recommend that you set this parameter to true. When it is set to true, validationQuery is executed each time a connection is borrowed from the connection pool to check its validity.
testOnReturn We recommend that you set this parameter to false. When a connection is returned to the connection pool, validationQuery is not executed to check the validity of the connection.
removeAbandoned We recommend that you set this parameter to false. If it is set to true, the system checks for connection leakage.
removeAbandonedTimeout If a borrowed connection remains idle and is not returned within the time specified by removeAbandonedTimeout, it is forcibly reclaimed. With this mechanism, long connections that execute SQL statements infrequently may be mistakenly treated as leaked and reclaimed, resulting in transaction failures.

Procedure

  1. Add a Maven dependency.

    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.2.8</version>
    </dependency>
    
  2. Configure the connection pool as follows:

    <bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource" init-method="init" destroy-method="close">
        <!-- Basic properties such as the URL, username, and password -->
        <property name="url" value="${jdbc_url}" />
        <property name="username" value="${jdbc_user}" />
        <property name="password" value="${jdbc_password}" />
        <!-- Basic parameters of the connection pool -->
        <property name="initialSize" value="20" />
        <property name="minIdle" value="10" />
        <property name="maxActive" value="100" />
        <property name="maxWait" value="1000" />
        <property name="timeBetweenEvictionRunsMillis" value="60000" />
        <property name="minEvictableIdleTimeMillis" value="300000" />
        <property name="testOnBorrow" value="true" />
        <property name="testOnReturn" value="false" />
        <property name="testWhileIdle" value="true" />
        <property name="validationQuery" value="select 1 from dual" />
        <property name="keepAlive" value="true" />
        <!-- You can adjust the preceding parameters based on your business needs. -->
    </bean>
    

Configuration examples of other connection pools

DBCP connection pool

  1. Add a dependency.

    <dependencies>
        <dependency>
            <groupId>com.alipay.oceanbase</groupId>
            <artifactId>oceanbase-client</artifactId>
            <version>3.2.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId>
            <version>2.7.0</version>
        </dependency>
        <dependency>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
            <version>1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-dbcp2</artifactId>
            <version>2.7.0</version>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
            <version>5.7.0</version>
        </dependency>
    </dependencies>
    
  2. Set related parameters in the configuration file dbcp.properties.

    driverClassName=com.alipay.oceanbase.jdbc.Driver
    url=jdbc:oceanbase://10.100.xxx.xxx:18817/test?useSSL=false&useServerPrepStmts=true&serverTimezone=UTC
    username=$user_name
    password=******
    initialSize=30
    maxTotal=30
    maxIdle=10
    minIdle=5
    maxWaitMillis=1000
    removeAbandonedOnMaintenance=true
    removeAbandonedOnBorrow=true
    removeAbandonedTimeout=1
    

C3P0 connection pool

  1. Add a dependency.

    <dependency>
        <groupId>com.mchange</groupId>
        <artifactId>c3p0</artifactId>
        <version>0.9.5</version>
    </dependency>
    
  2. Configure the connection pool. Here is the sample code:

    import com.mchange.v2.c3p0.ComboPooledDataSource;
    import java.beans.PropertyVetoException;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    
    public class C3P0PoolTest {
        private static ComboPooledDataSource dataSource = new ComboPooledDataSource("oboracle");
    
        public void init() throws PropertyVetoException {
            dataSource.setDriverClass("com.oceanbase.jdbc.Driver");
            dataSource.setJdbcUrl("jdbc:oceanbase://10.100.xxx.xxx:30035/test?useSSL=false");
            dataSource.setUser("test@tt3");
            dataSource.setPassword("test");
            dataSource.setInitialPoolSize(3);
            dataSource.setMaxPoolSize(10);
            dataSource.setMinPoolSize(3);
            dataSource.setAcquireIncrement(3);
        }
    
        public void testInsert() throws SQLException, PropertyVetoException {
            init();
            try (Connection connection = dataSource.getConnection()) {
                connection.setAutoCommit(false);
                connection.prepareStatement("create table t1(c1 binary_double)").execute();
    
                try (PreparedStatement ps = connection.prepareStatement("insert into t1 values('2.0')")) {
                    ps.execute();
                }
    
                try (ResultSet resultSet = connection.createStatement().executeQuery("select * from t1")) {
                    while (resultSet.next()) {
                        int count = resultSet.getMetaData().getColumnCount();
                        for (int i = 1; i <= count; i++) {
                            System.out.println(resultSet.getMetaData().getColumnName(i) + ":" + resultSet.getString(i));
                        }
                    }
                }
            }
        }
    }
    

Tomcat connection pool

  1. Set related parameters in the configuration file.

    <Resource name="jdbc"
              auth="Container"
              type="javax.sql.DataSource"
              maxActive="100"
              maxIdle="30"
              maxWait="10000"
              username="root"
              password=""
              driverClassName="com.oceanbase.jdbc.Driver"
              url="jdbc:oceanbase://10.100.xxx.xxx:18815/test?characterEncoding=UTF-8" />
    
  2. Set related parameters in the web.xml file of the project.

    <resource-ref>
        <res-ref-name>jdbc</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
    </resource-ref>
    

Proxool connection pool

  1. Add a dependency.

    <dependency>
        <groupId>proxool</groupId>
        <artifactId>proxool-cglib</artifactId>
        <version>0.9.1</version>
    </dependency>
    <dependency>
        <groupId>proxool</groupId>
        <artifactId>proxool</artifactId>
        <version>0.9.1_20141120</version>
    </dependency>
    
  2. Set related parameters in the configuration file.

    jdbc-1.proxool.alias=test
    jdbc-1.proxool.driver-class=com.oceanbase.jdbc.Driver
    jdbc-1.proxool.driver-url=jdbc:oceanbase://10.100.xxx.xxx:30035/test
    jdbc-1.user=test@tt3
    jdbc-1.password=test
    jdbc-1.proxool.maximum-connection-count=8
    jdbc-1.proxool.minimum-connection-count=5
    jdbc-1.proxool.prototype-count=4
    jdbc-1.proxool.verbose=true
    jdbc-1.proxool.statistics=10s,1m,1d
    jdbc-1.proxool.statistics-log-level=error
    

HikariCP connection pool

  1. Add a dependency.

    <dependency>
        <groupId>com.alipay.oceanbase</groupId>
        <artifactId>oceanbase-client</artifactId>
        <version>3.2.3</version>
    </dependency>
    <dependency>
        <groupId>com.zaxxer</groupId>
        <artifactId>HikariCP</artifactId>
        <version>3.3.1</version>
    </dependency>
    
  2. Set related parameters in the configuration file jdbc.properties.

    jdbcUrl=jdbc:oceanbase://10.100.xxx.xxx:18817/test?useSSL=false&useServerPrepStmts=true&serverTimezone=UTC
    username=$user_name
    password=******
    dataSource.cachePrepStmts=true
    dataSource.prepStmtCacheSize=250
    dataSource.prepStmtCacheSqlLimit=2048
    dataSource.useServerPrepStmts=true
    dataSource.useLocalSessionState=true
    dataSource.rewriteBatchedStatements=true
    dataSource.cacheResultSetMetadata=true
    dataSource.cacheServerConfiguration=true
    dataSource.elideSetAutoCommits=true
    dataSource.maintainTimeStats=false
    

Summary

JDBC parameter settings

JDBC parameter configuration plays an important role in database performance optimization.

  • Timeout settings:

    • We recommend that you specify the socketTimeout parameter based on the characteristics of your application. If you do not specify this parameter, the default value 0 is used, which means the connection never times out.
    • We recommend that you specify the connectTimeout parameter based on the characteristics of your application. If you do not specify this parameter, the default value 30000ms is used.
    • You can configure the session variables ob_query_timeout and ob_trx_timeout of OceanBase Database to specify the SQL query timeout period and transaction timeout period, respectively.
  • Session variables:

    • We recommend that you set useLocalSessionState to true, and set session variables such as autocommit, readonly, and isolation by using JDBC interfaces. This way, you do not need to access the database to obtain the session status by using SQL statements, thereby improving the transaction performance.
  • Batch execution parameters:

    • We recommend that you set allowMultiQueries to true. This way, multiple SQL statements are concatenated with semicolons (;) as one request to reduce interactions between your application and the database, thereby improving the performance. For batch insert operations, we recommend that you set rewriteBatchedStatements to true and useServerPrepStmts to false, and use addBatch in combination with executeBatch.
  • Large result set processing:

    • We recommend that you limit the result set size (for example, through pagination) and use a cursor result set or a streaming result set. By setting useCursorFetch and useServerPrepStmts to true, you can call setFetchSize with a value greater than 0 to use a cursor result set in your Java application, or call setFetchSize(Integer.MIN_VALUE) to switch to a streaming result set.
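Pulled together, the recommendations above might look like the following sketch. The host, port, and schema in the URL are placeholders, and the parameter list is illustrative rather than exhaustive:

```java
import java.sql.Connection;
import java.sql.SQLException;

// Sketch: a JDBC URL carrying the recommended timeout, session-state, and batch
// parameters, plus session state set through JDBC interfaces rather than SQL.
public class JdbcTuningSketch {
    static final String URL = "jdbc:oceanbase://host:2883/test"
        + "?connectTimeout=3000&socketTimeout=30000"
        + "&useLocalSessionState=true&allowMultiQueries=true"
        + "&rewriteBatchedStatements=true&useServerPrepStmts=false";

    public static void applySessionState(Connection conn) throws SQLException {
        // With useLocalSessionState=true, the driver answers later state queries
        // locally instead of issuing extra SQL round trips to the database.
        conn.setAutoCommit(true);
        conn.setReadOnly(false);
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}
```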

Connection pool parameter settings

Take note of the following points when you configure parameters of a connection pool:

  • Properly set the initial, minimum, and maximum numbers of connections in the connection pool based on your business needs.
  • Properly set the maximum idle period of connections to prevent long connections from being disconnected unexpectedly.
  • For long connections with infrequent SQL statement executions, we recommend that you disable idle connection recycling or set the maximum idle period to a large value. If a connection pool is frequently used, we recommend that you enable idle connection recycling for it.

Keepalive settings

  • Set testWhileIdle to true to verify the validity of idle connections, testOnBorrow to true to verify the validity of a connection when it is borrowed, and testOnReturn to false so that connections are not verified when they are returned to the pool. We recommend that you set testOnBorrow to true in an unstable network environment.

Settings related to the application and database driver

Pay more attention to the following application settings:

  • Connection timeout settings:

    • When a frontend application connects to the database, we recommend that you set the JDBC connection timeout period to 1s to quickly detect connection failures and reconnect to the database. As frequent retries may lead to network congestion and increase the system load, we recommend that you specify a retry interval in the reconnection mechanism.
  • Logging:

    • During the runtime, your application needs to record the OceanBase Database error codes and connection information, such as the IP address, port number, and username, to help the DBA quickly troubleshoot issues and maintain system stability.
  • Version compatibility:

    • Make sure that the JDBC driver used by your application is compatible with the database version to prevent potential issues and runtime faults caused by version incompatibility.
  • Database switching:

    • We recommend that you call the Connection.setCatalog(dbname) method instead of using the use <dbname> statement to specify the current database. This improves the readability and maintainability of the code.
  • Session status variables:

    • We recommend that you set session status variables such as readonly by using JDBC interfaces such as setReadOnly and setTransactionIsolation to ensure the consistency and security of the code.
  • Transaction processing:

    • Before your application executes a single transaction, which may contain one or more SQL statements, it needs to call the getConnection method to request a database connection. After the transaction execution is completed, your application needs to call the closeConnection method to close the connection. Each transaction must be processed in the following steps:
      1. Call getConnection to request a database connection.
      2. Execute the statements in the transaction.
      3. Call closeConnection to close the database connection.
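A sketch of that per-transaction pattern follows; the DataSource, table name, and SQL text are placeholders:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch: one transaction per getConnection/close pair. try-with-resources
// returns the connection to the pool even if a statement fails.
public class TransactionSketch {
    static final String UPDATE_SQL = "update account set balance = balance + ? where id = ?";

    public static void transfer(DataSource pool, int fromId, int toId, long amount) throws SQLException {
        try (Connection conn = pool.getConnection()) {      // 1. request a connection
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(UPDATE_SQL)) {
                ps.setLong(1, -amount); ps.setInt(2, fromId); ps.executeUpdate();
                ps.setLong(1, amount);  ps.setInt(2, toId);   ps.executeUpdate();
                conn.commit();                              // 2. execute the statements
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }                                                   // 3. connection closed/returned
    }
}
```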
