
OceanBase

A unified distributed database ready for your transactional, analytical, and AI workloads.

DEPLOY YOUR WAY

OceanBase Cloud

The best way to deploy and scale OceanBase

OceanBase Enterprise

Run and manage OceanBase on your own infrastructure

TRY OPEN SOURCE

OceanBase Community Edition

The free, open-source distributed database

OceanBase seekdb

The open-source, AI-native search database

Customer Stories

Real-world success stories from enterprises across diverse industries.

View All
BY USE CASES

Mission-Critical Transactions

Global & Multicloud Applications

Elastic Scaling for Peak Traffic

Real-time Analytics

Active Geo-redundancy

Database Consolidation

Resources

Comprehensive knowledge hub for OceanBase.

Blog

Live Demos

Training & Certification

Documentation

Official technical guides, tutorials, API references, and manuals for all OceanBase products.

View All
PRODUCTS

OceanBase Cloud

OceanBase Database

Tools

Connectors and Middleware

QUICK START

OceanBase Cloud

OceanBase Database

BEST PRACTICES

Practical guides for using OceanBase more effectively and conveniently.

Company

Learn more about OceanBase – our company, partnerships, and trust and security initiatives.

About OceanBase

Partner

Trust Center

Contact Us


All Products
    • Databases
    • OceanBase Database
    • OceanBase Cloud
    • OceanBase Tugraph
    • Interactive Tutorials
    • OceanBase Best Practices
    • Tools
    • OceanBase Cloud Platform
    • OceanBase Migration Service
    • OceanBase Developer Center
    • OceanBase Migration Assessment
    • OceanBase Admin Tool
    • OceanBase Loader and Dumper
    • OceanBase Deployer
    • Kubernetes operator for OceanBase
    • OceanBase Diagnostic Tool
    • OceanBase Binlog Service
    • Connectors and Middleware
    • OceanBase Database Proxy
    • Embedded SQL in C for OceanBase
    • OceanBase Call Interface
    • OceanBase Connector/C
    • OceanBase Connector/J
    • OceanBase Connector/ODBC
    • OceanBase Connector/NET

OceanBase Cloud

  • Product Updates & Announcements
    • What's new
      • Release notes for 2026
      • Release notes for 2025
      • Release notes for 2024
      • Release history
    • Product announcements
      • Data development module deprecation notice
      • Optimization of Backup and Restore commercialization strategy
      • Cross-AZ data transfer billing (OceanBase Cloud on AWS)
      • Database Proxy pricing update
      • AWS instance pricing adjustment
  • Product Introduction
    • Overview
    • Management mode and scenarios
    • Core features
      • High availability with cross-cloud active-active architecture
      • High availability with cross-cloud primary-standby databases
      • Multi-level caching in shared storage
      • Multi-layer online scaling and on-demand adjustment
    • Deployment modes
    • Storage architecture
    • Product specifications
    • Product billing
      • Overview
      • Instance billing
        • Tencent Cloud instance billing
        • Alibaba Cloud instance billing
        • Huawei Cloud instance billing
        • AWS instance billing
        • GCP instance billing
      • Backup and restore billing
      • SQL audit billing
      • Migrations billing
      • Database proxy billing
      • Binlog service billing
      • Overview of OceanBase Cloud support plans
      • Read-only replica billing
    • Supported database versions
  • Get Started
    • Get started with a transactional instance
    • Get started with an analytical instance
    • Get started with a Key-Value instance
  • Work with Transactional Instances
    • Overview
    • Create an instance
      • Overview
      • Create via OceanBase Cloud official website
      • Create via AWS Marketplace
      • Create via GCP Marketplace
      • Create via Huawei Cloud Marketplace
      • Create via Alibaba Cloud Marketplace
      • Create via Azure Marketplace
    • Connect to an instance
      • MySQL compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using Alibaba Cloud VPC
          • Connect using a public IP address
          • Connect using a Huawei Cloud peering connection
        • Connect with clients
          • Connect to OceanBase Cloud by using Client ODC
          • Connect to OceanBase Cloud by using a MySQL client
          • Connect to OceanBase Cloud by using OBClient
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using Spring Boot
            • Spring Batch sample application for connecting to OceanBase Cloud
            • spring-jdbc
            • Spring Data JPA sample application for connecting to OceanBase Cloud
            • Hibernate application development with OceanBase Cloud
            • Sample program for connecting to OceanBase Cloud
            • connector-j
            • Use TestContainers to connect to and use OceanBase Cloud
          • Python
            • Connect to OceanBase Cloud using mysqlclient
            • Connect to OceanBase Cloud using PyMySQL
            • Use the MySQL-connector-python driver to connect to and use OceanBase Cloud
            • Use SQLAlchemy to connect to an OceanBase Cloud database
            • Connect to an OceanBase Cloud database using Django
            • Connect to an OceanBase Cloud database by using peewee
          • C
            • Use MySQL Connector/C to connect to OceanBase Cloud
          • Go
            • Connect to OceanBase Cloud using the Go-SQL-Driver/MySQL driver
            • Connect to OceanBase Cloud using GORM
          • PHP
            • Use the EXT driver to connect to OceanBase Cloud
            • Connect to OceanBase Cloud by using the MySQLi driver
            • Use the PDO driver to connect to OceanBase Cloud
          • Rust
            • Rust application example for connecting to OceanBase Cloud
            • SeaORM example for connecting to OceanBase Cloud
          • Ruby
            • ActiveRecord sample application for OceanBase Cloud
            • Connect to OceanBase Cloud by using mysql2
            • Connect to OceanBase Cloud by using Sequel
        • Use database connection pool
          • Database connection pool configuration
          • Connect to OceanBase Cloud by using a Tomcat connection pool
          • Connect to OceanBase Cloud by using a C3P0 connection pool
          • Connect to OceanBase Cloud by using a Proxool connection pool
          • Connect to OceanBase Cloud by using a HikariCP connection pool
          • Connect to OceanBase Cloud by using a DBCP connection pool
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
      • Oracle compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using a public IP address
        • Connect with clients
          • Connect to OceanBase Cloud by using OBClient
          • Connect to OceanBase Cloud by using Client ODC
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using OceanBase Connector/J
            • Connect to OceanBase Cloud by using Spring Boot
            • Spring Batch application example for connecting to OceanBase Cloud
            • Connect to OceanBase Cloud using Spring JDBC
            • Connect to OceanBase Cloud by using Spring Data JPA
            • Connect to OceanBase Cloud by using Hibernate
            • Use MyBatis to connect to OceanBase Cloud
            • Use JFinal to connect to OceanBase Cloud
          • Python
            • Python Driver for Oracle Mode
          • C
            • Connect to OceanBase Cloud using OceanBase Connector/C
            • Connect to OceanBase Cloud using OceanBase Connector/ODBC
            • Use SqlSugar to connect to OceanBase Cloud
        • Use database connection pool
          • Database connection pool configuration
          • Sample program that uses a Tomcat connection pool to connect to OceanBase Cloud
          • C3P0 connection pool connects to OceanBase Cloud
          • Connect to OceanBase Cloud using Proxool connection pool
          • Sample program that uses HikariCP to connect to OceanBase Cloud
          • Use DBCP connection pool to connect to OceanBase Cloud
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
    • Developer guide
      • MySQL compatible mode
        • Plan database objects
          • Create a database
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use the CASE conditional operator in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
            • Use the SELECT ... LOCK IN SHARE MODE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
            • Release a savepoint
          • Commit a transaction
          • Roll back a transaction
      • Oracle compatible mode
        • Plan database objects
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use CASE functions in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
          • Commit a transaction
          • Roll back a transaction
    • Manage instances
      • Manage instances
        • View the instance list
        • Instance overview
        • Stop and restart instances
        • Unit migration
      • Manage tenants
        • Tenant overview
        • Create a tenant
        • Modify tenant specifications
        • Modify tenant names
        • Add an endpoint
        • Resource isolation
          • Overview
          • Manage resource groups
            • Create a resource group
            • View a resource group
            • Edit a resource group
            • Delete a resource group
          • Manage isolation rules
            • Create an isolation rule
            • View isolation rules
            • Edit an isolation rule
            • Delete an isolation rule
        • Modify primary zone
        • Modify the maximum number of connections for a tenant proxy
        • Monitor tenant performance
          • Overview
          • View performance and SQL monitoring details
          • View transaction monitoring details
          • View storage and cache monitoring details
          • View Binlog service monitoring
          • Customize a monitoring dashboard for a tenant
        • Diagnostics
          • Real-time diagnostics
            • SQL diagnostics
              • Top SQL
              • Slow SQL
              • Suspicious SQL
              • High-risk SQL
            • SQL audit
        • Manage tenant parameters
          • Manage tenant parameters
          • Parameters for tenants
          • Parameter template overview
        • Delete a tenant
        • Manage databases and accounts
          • Create accounts
          • Manage accounts
          • Create a database (MySQL compatible mode)
          • Manage databases (MySQL compatible mode)
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor multi-dimensional metrics of an instance
        • Monitor the performance of hosts in an instance
        • Monitor database proxy
        • Monitor database proxy hosts
        • Monitor cross-cloud network performance
        • Customize a monitoring dashboard for an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Manage instance parameters
        • Manage parameters
        • Parameters for cluster instances
      • Change instance configurations
        • Enable storage auto-scaling
        • View history of configuration changes
        • Change configuration
        • Change configuration temporarily
        • Switch the deployment mode
      • Manage standby instances
        • Overview
        • Create a standby instance
        • Create a cross-cloud standby instance
        • Create a standby instance for an Alibaba Cloud primary instance
        • View details of primary and standby instances
        • Configure global endpoint
        • Enable automatic forwarding for write requests of standby databases
        • Primary-standby instance switchover
        • Initiate failover
        • Detach a standby instance
        • Release a standby instance
      • Release an instance
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Backup and restore
        • Overview
        • Backup strategy
        • Initiate a backup immediately
        • Data backup
        • Initiate a restore
        • Data restore
        • Restore data from the instance recycle bin
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • One-click diagnostics
          • Initiate one-click diagnostics
          • View one-click diagnostic report
            • Exceptions
            • Real-time diagnostics
            • Optimization suggestions
            • Capacity management
            • Security management
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Request analysis
            • Request analysis
        • Root cause diagnostics
          • Exception handling
          • Enable system autonomy
        • SQL audit
        • Materialized view analysis
        • Optimization center
          • Optimization suggestions
          • Manage active outlines
          • SQL review
          • View the optimization history
      • Manage tags
      • Manage read-only replicas
        • Overview
        • Instance read-only replicas
          • Add a read-only replica to an instance
          • View read-only replicas of an instance
          • Manage read-only replicas of an instance
          • Delete a read-only replica of an instance
        • Tenant read-only replicas
          • Add a read-only replica to a tenant
          • View read-only replicas of a tenant
          • Manage read-only replicas of a tenant
          • Delete a read-only replica of a tenant
      • Manage JVM-dependent services
    • Data source management
      • Create a data source
      • Manage data sources
      • User privileges
        • User privileges for compatibility assessment
        • User privileges for data migration
        • User privileges for performance assessment
        • User privileges for data archiving
        • User privileges for data cleanup
      • Connect via private network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
        • Private IP address segments
      • Connect via public network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Manage recycle bin
      • Instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance-level recycle bin
        • Tenant-level recycle bin
  • Work with Analytical Instances
    • Overview
    • Core features
    • Create an instance
    • Connect to an instance
      • Overview
      • Get connection string
        • Overview
        • Connect using AWS PrivateLink
        • Connect using a public IP address
      • Connect with clients
        • Connect to OceanBase Cloud by using Client ODC
        • Connect to OceanBase Cloud by using a MySQL client
        • Connect to OceanBase Cloud by using OBClient
      • Connect with drivers
        • Java
          • Connect to OceanBase Cloud by using Spring Boot
          • Connect to OceanBase Cloud by using Spring Batch
          • Connect to OceanBase Cloud by using Spring Data JDBC
          • Connect to OceanBase Cloud by using Spring Data JPA
          • Connect to OceanBase Cloud by using Hibernate
          • Connect to OceanBase Cloud by using MyBatis
          • Connect to OceanBase Cloud using MySQL Connector/J
        • Python
          • Connect to OceanBase Cloud by using mysqlclient
          • Connect to OceanBase Cloud by using PyMySQL
          • Connect to OceanBase Cloud using MySQL Connector/Python
        • C
          • Connect to OceanBase Cloud using MySQL Connector/C
        • Go
          • Connect to OceanBase Cloud using Go-SQL-Driver/MySQL
        • PHP
          • Connect to OceanBase Cloud using PHP
      • Use database connection pool
        • Database connection pool configuration
        • Connect to OceanBase Cloud by using a Tomcat connection pool
        • Connect to OceanBase Cloud by using a C3P0 connection pool
        • Connect to OceanBase Cloud by using a Proxool connection pool
        • Connect to OceanBase Cloud by using a HikariCP connection pool
        • Connect to OceanBase Cloud by using a DBCP connection pool
        • Connect to OceanBase Cloud by using Commons Pool
        • Connect to OceanBase Cloud by using a Druid connection pool
    • Data table design
      • Table overview
      • Best practices
        • Unit 1: Best practices for optimizing storage structures and query performance
        • Unit 2: Best practices for creating special indexes
    • Export data
    • OceanBase data processing
    • Query acceleration
      • Statistics
      • Materialized views for query acceleration
      • Select a query parallelism level
    • Manage instances
      • Instance overview
      • Change configuration
      • Modify primary zone
      • Manage parameters
      • Backup and restore
        • Backup overview
        • Backup strategies
        • Immediate backup
        • Data backup
        • Initiate restore
        • Data restore
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor the performance of hosts in an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Optimization management
            • Manage active outlines
            • View the optimization history
          • Request analysis
            • Request analysis
      • Stop and restart instances
      • Release instances
      • Manage databases and accounts
        • Create and manage accounts
        • Create a database
        • Manage databases
      • Manage tags
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Performance diagnosis and tuning
      • Use the DBMS_XPLAN package for performance diagnostics
      • Use the GV$SQL_PLAN_MONITOR view for performance analysis
      • Views related to AP performance analysis
    • Performance testing
    • Product integration
    • Manage recycle bin
      • View instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance recycle bin
  • Work with Key-Value Instances
    • Try out Key-Value instances
      • Create an instance
      • Create a tenant
      • Create an account for a database user
      • OBKV HBase data operation examples
    • Use Table model
      • Create an instance
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-Table
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
    • Use HBase model
      • OBKV-HBase Overview
      • Create an instance
      • Develop in HBase model
        • Connect to an instance by using the OBKV-HBase client
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-HBase
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle Bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
      • Performance test
    • Connect Key-Value instances
      • Overview
      • Connect using a public IP address
  • Migrations
    • Data migration and import solutions
    • Data assessment and migration quick start
    • Assess compatibility
      • Overview
      • Perform online assessment
      • Perform offline assessment
      • Manage compatibility assessment tasks
        • View a compatibility assessment task
        • View and download a compatibility assessment report
        • Stop a compatibility assessment task
        • Delete a compatibility assessment task
      • Obtain files for upload
      • Configure PrivateLink
      • Add an IP address to an allowlist
    • Migrate data
      • Overview
      • Migrations specification
      • Purchase a data migration instance
      • Migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database
      • Migrate data from a MySQL-compatible tenant of OceanBase Database to a MySQL database
      • Migrate data between OceanBase database tenants of the same compatibility mode
      • Migrate data between OceanBase database tenants of different compatibility modes
      • Migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database
      • Migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database
      • Configure a two-way synchronization task
      • Migrate data from an OceanBase database to a Kafka instance
      • Migrate data from a TiDB database to a MySQL-compatible tenant of OceanBase Database
      • Migrate incremental data from a MySQL-compatible tenant of OceanBase Database to a TiDB Database
      • Migrate data from a PostgreSQL database to an OceanBase database
      • Migrate incremental data from an OceanBase Database to a PostgreSQL database
      • Manage data migration tasks
        • View details of a data migration task
        • Rename a data migration task
        • View and modify migration objects
        • View and modify migration parameters
        • Configure alert monitoring
        • Manage data migration tasks by using tags
        • Start, stop, and resume a data migration task
        • Clone a data migration task
        • Terminate and release a data migration task
      • Features
        • Custom DML/DDL configurations
        • DDL synchronization scope
        • Use SQL conditions to filter data
        • Rename a migration object
        • Set an incremental synchronization timestamp
        • Instructions on schema migration
        • Configure and modify matching rules
        • Wildcard rules
        • Import migration objects
        • Download conflict data
        • Change a topic
        • Column filtering
        • Data formats
      • Authorize an Alibaba Cloud account
      • SQL statements for querying table objects
      • Online DDL tools
      • Create a trigger
      • Modify the log level of a self-managed PostgreSQL instance
      • Supported DDL statements for synchronization and their limitations
        • DDL synchronization from Aurora MySQL DB clusters to MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from MySQL-compatible tenants of OceanBase Database to Aurora MySQL DB clusters
        • DDL synchronization between MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle databases to Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle-compatible tenants of OceanBase Database to Oracle databases
        • DDL synchronization between Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from OceanBase databases to Kafka instances
    • Data subscription
      • Create a data subscription task
      • Manage data subscription tasks
        • View details of a data subscription task
        • Configure subscription information
        • Modify the name of a data subscription task
        • View and modify subscription objects
        • View data subscription parameters
        • Set up data subscription alerts
        • Start, stop, and resume data subscription tasks
        • Clone a data subscription task
        • Release a data subscription task
      • Manage private connections for data subscriptions
      • Configure consumer subscription
      • Message formats
    • Data validation
      • Overview
      • Create a data validation task
      • Manage data validation tasks
        • View details of a data validation task
        • View and modify validation objects
        • View and modify validation parameters
        • Manage data validation tasks with tags
        • Start, pause, and resume data validation tasks
        • Clone a data validation task
        • Release a data validation task
      • Features
        • Import validation objects
        • Rename the validation object
        • Filter objects by using SQL conditions
        • Configure the matching rules for the validation object
    • Assess performance
      • Overview
      • Obtain traffic files from a database instance
      • Create a full performance assessment task
      • Create an SQL file parsing task
      • Create an SQL file replay task
      • Manage performance assessment tasks
        • View the details of a performance assessment task
        • View a performance assessment report
        • Retry and stop a performance assessment task
        • Delete a performance assessment task
      • Obtain a database instance
      • Create an access key
    • Import data
      • Import data
      • Direct load
      • Supported file formats and encoding formats for Data Import
      • Sample data introduction
    • Binlog service
      • Overview
      • Purchase the Binlog service
      • Manage Binlog Service
        • View details of the Binlog service
        • Change configuration
        • Modify the auto-scaling strategy for storage space
        • Modify the elasticity strategy for compute units
        • Disable the Binlog service
  • Security
    • OceanBase Cloud account settings
      • Modify login password
      • Multi-factor authentication
      • Manage AccessKeys
      • Time zone settings
      • Manage cloud marketplace accounts
      • Account audit
    • Organizations and projects
      • Overview
      • Manage organization information
      • Project management
        • Manage projects
        • Cross-project bidirectional authorization
        • Subscribe to project messages
      • Manage members
      • Permissions for roles
      • Cost management
        • Overview
        • Cost details
        • Manage cost units
      • Operation audit
    • Database accounts and privileges
      • Account privileges
      • Authorize cloud vendor accounts
      • AWS KMS key management
      • Support access control
    • Security and encryption
      • Set allowlist groups
      • SSL encryption
      • Transparent Data Encryption (TDE)
    • Monitoring dashboard
    • Events
  • SQL Console
    • Overview
    • Access SQL Console
    • SQL editing and execution
    • PL compilation
    • Result set editing
    • Execution analysis
    • Database object management
      • Create a table
      • Create a view
      • Create a function
      • Create a stored procedure
      • Create a program package
      • Create a trigger
      • Create a type
      • Create a sequence
      • Create a synonym
    • Session variable management
    • Functional keys in SQL Console
  • Integrations
    • Overview
    • Schema evolution
      • Liquibase
      • Flyway
    • Data ingestion
      • Canal
      • dbt
      • Debezium
      • Flink
      • Glue
      • Informatica Cloud
      • Kafka
      • Maxwell
      • SeaTunnel
      • DataWorks
      • NiFi
    • SQL development
      • DataGrip
      • DBeaver
      • Navicat
      • TablePlus
    • Orchestration
      • DolphinScheduler
      • Linkis
      • Airflow
    • Visualization
      • Grafana
      • Power BI
      • Quick BI
      • Superset
      • Tableau
    • Observability
      • Datadog
      • Prometheus
    • Database management
      • Bytebase
    • AI
      • LlamaIndex
      • Dify
      • LangChain
      • Tongyi Qianwen
      • OpenAI
      • n8n
      • Trae
      • SpringAI
      • Cline
      • Cursor
      • Continue
      • Toolbox
      • CamelAI
      • Firecrawl
      • Hugging Face
      • Ollama
      • Google Gemini
      • Cloudflare Workers AI
      • Jina AI
      • Augment Code
      • Claude Code
      • Kiro
    • Development tools
      • Cloudflare Workers
      • Vercel
  • Best practices
    • Best practices for achieving high availability through cross-cloud active-active deployment
    • High availability through cross-cloud primary-standby databases (1:1)
    • High availability through cross-cloud primary-standby databases (1:n)
    • High host CPU usage
    • Best practices for read/write splitting in OceanBase Cloud
  • References
    • System architecture
    • System management
    • Database object management
    • Database design and specification constraints
    • SQL reference
    • System views
    • Parameters and system variables
    • Error codes
    • Performance tuning
    • Open API References
      • Overview
      • Service endpoints
      • Using API
      • Open APIs
        • Cluster management
          • DescribeInstances
          • DescribeInstance
          • CreateInstance
          • DeleteInstance
          • ModifyInstanceName
          • describe-node-options
          • StopCluster
          • StartCluster
          • ModifyInstanceSpec
          • DescribeInstanceTopology
          • DescribeReadonlyInstances
          • CreateReadonlyInstance
          • ModifyReadonlyInstanceSpec
          • ModifyReadonlyInstanceDiskSize
          • ModifyReadonlyInstanceNodeNum
          • DeleteReadonlyInstance
          • DescribeInstanceAvailableRoZones
          • DescribeInstanceParameters
          • UpdateInstanceParameters
          • DescribeInstanceParametersHistory
          • ModifyInstanceTagList
          • ModifyInstanceNodeNum
        • Tenant management
          • DescribeTenants
          • DescribeTenant
          • CreateTenants
          • DeleteTenants
          • ModifyTenantName
          • ModifyTenant
          • ModifyTenantUserDescription
          • ModifyTenantUserStatus
          • GetTenantCreateConstraints
          • ModifyTenantPrimaryZone
          • GetTenantCreateCpuConstraints
          • GetTenantCreateMemConstraints
          • GetTenantModifyCpuConstraints
          • GetTenantModifyMemConstraints
          • CreateTenantSecurityIpGroup
          • DescribeTenantSecurityIpGroups
          • ModifyTenantSecurityIpGroup
          • DeleteTenantSecurityIpGroup
          • DescribeTenantPrivateLink
          • DeletePrivatelinkConnection
          • CreatePrivatelinkService
          • ConnectPrivatelinkService
          • AddPrivatelinkServiceUser
          • BatchKillProcessList
          • DescribeProcessStatsComposition
          • DescribeTenantAvailableRoZones
          • DescribeTenantAddressInfo
          • ModifyTenantReadonlyReplica
          • DescribeTenantParameters
          • UpdateTenantParameters
          • DescribeTenantParametersHistory
          • ModifyTenantTagList
        • Tenant user management
          • CreateTenantUser
          • DescribeTenantUsers
          • DeleteTenantUsers
          • ModifyTenantUserPassword
          • ModifyTenantUserRoles
        • Database management
          • CreateDatabase
          • DescribeDatabases
          • DeleteDatabases
          • ModifyDatabaseUserRoles
        • Backup and restore
          • DescribeDataBackupSet
          • DescribeRestorableTenants
          • ModifyBackupStrategy
          • CreateTenantRestoreTask
          • CreateDataBackupTask
          • DescribeOneDataBackupSet
        • Database proxy management
          • CreateTenantAddress
          • CreateTenantSingleTunnelSLBAddress
          • DeleteTenantAddress
          • DescribeTenantAddress
          • ModifyOdpClusterSpec
          • ModifyTenantAddressPort
          • ModifyTenantAddressDomainPrefix
          • ConfirmPrivatelinkConnection
          • DescribeTenantAddressInfo
        • Monitoring management
          • DescribeTenantMetrics
          • DescribeMetricsData
          • DescribeNodeMetrics
        • Diagnostic management
          • DescribeOasTopSQLList
          • DescribeOasAnomalySQLList
          • DescribeOasSlowSQLList
          • DescribeOasSQLText
          • DescribeSqlAudits
          • DescribeOutlineBinding
          • DescribeSampleSqlRawTexts
          • DescribeSQLTuningAdvices
          • DescribeOasSlowSQLSamples
          • DescribeOasSQLTrends
          • DescribeOasSQLPlanGroup
        • Security management
          • CreateSecurityIpGroup
          • DescribeInstanceSSL
          • ModifyInstanceSSL
          • DescribeTenantEncryption
          • ModifyTenantEncryption
          • ModifySecurityIps
          • DeleteSecurityIpGroup
          • DescribeTenantSecurityConfigs
          • DescribeInstanceSecurityConfigs
        • Tag management
          • DescribeTags
          • CreateTags
          • UpdateTag
          • DeleteTag
        • Historical event management
          • DescribeOperationEvents
      • Differences between ApsaraDB for OceanBase APIs and OceanBase Cloud APIs
    • Download OBClient
      • Download OBClient
      • Download OceanBase Connector/J
      • Download client ODC
      • Download OceanBase Connector/ODBC
      • Download OBClient Libs
    • Metrics References
      • Cluster database
      • Cluster hosts
      • Binlog service
      • Cross-cloud network channel connection
      • Performance and SQL
      • Transactions
      • Storage and caching
      • Proxy database
      • Proxy host
    • ODC User Guide
      • What is ODC?
        • What is ODC?
        • Limitations
      • Quick Start
        • Client ODC
          • Overview
          • Install Client ODC
          • Use Client ODC
        • Web ODC
          • Overview
          • Use Web ODC
      • Data Source Management
        • Create a data source
        • Data sources and project collaboration
        • Database O&M
          • Session management
          • Global variable management
          • Recycle bin management
      • SQL Development
        • Edit and execute SQL statements
        • Perform PL compilation and debugging
        • Edit and export the result set of an SQL statement
        • Execution analysis
        • Generate test data
        • System settings
        • Database objects
          • Table objects
            • Overview
            • Create a table
          • View objects
            • Overview
            • Create a view
            • Manage views
          • Materialized view objects
            • Overview
            • Create a materialized view
            • Manage materialized views
          • Function objects
            • Overview
            • Create a function
            • Manage functions
          • Stored procedure objects
            • Overview
            • Create a stored procedure
            • Manage stored procedures
          • Sequence objects
            • Overview
            • Create a sequence
            • Manage sequences
          • Package objects
            • Overview
            • Create a program package
            • Manage program packages
          • Trigger objects
            • Overview
            • Create a trigger
            • Manage triggers
          • Type objects
            • Overview
            • Create a type
            • Manage types
          • Synonym objects
            • Overview
            • Create a synonym
            • Manage synonyms
      • Import and Export
        • Import schemas and data
        • Export schemas and data
      • Database Change Management
        • User Permission Management
          • Users and roles
          • Automatic authorization
          • User permission management
        • Project collaboration management
        • Risk levels, risk identification rules, and approval processes
        • SQL check specifications
        • SQL window specification
        • Database change management
        • Batch database change management
        • Online schema changes
        • Synchronize shadow tables
        • Schema comparison
      • Data Lifecycle Management
        • Partitioning Plan Management
          • Manage partitioning plans
          • Set partitioning strategies
          • Examples
        • SQL plan task
      • Data Desensitization and Auditing
        • Desensitize data
        • Operation records
      • Notification Management
        • Overview
        • View notification records
        • Manage Notification Channel
          • Create a notification channel
          • View, edit, and delete a notification channel
          • Configure a custom channel
        • Manage notification rules
      • Best Practices
        • Tips for SQL development
        • Explore ODC team workspaces
        • Understanding real-time SQL diagnostics for OceanBase AP
        • OceanBase historical database solutions
        • ODC SQL check for automatic identification of high-risk operations
        • Manage and modify sharded databases and tables via ODC
        • Data masking and control practices
        • Enterprise-level control and collaboration: Safeguard every database change
    • Data Development
      • Overview
      • Workspace management
      • Worksheet management
      • Compute node pool management
      • Workflow management
      • Dashboard management
      • Manage Git repositories
      • SQL development
        • SQL editing and execution
        • Result set editing
        • Execution analysis
        • Database object management
          • Create a table
          • Create a view
          • Create a function
          • Create a stored procedure
        • Session variable management
        • Git integration
      • Sample datasets
      • Data development terms
  • Manage Billing
    • Access billing
    • View monthly bills
    • View payment details
    • View orders
    • Use vouchers for payment
    • View invoices
  • Legal Agreements
    • OceanBase Cloud Services Agreement
    • Service Level Agreement
    • OceanBase Data Processing Addendum
    • Service Level Agreement for OceanBase Cloud Migration Service

Download PDF

Release notes for 2026 Release notes for 2025 Release notes for 2024 Release history Data development module deprecation notice Optimization of Backup and Restore commercialization strategy Cross-AZ data transfer billing (OceanBase Cloud on AWS) Database Proxy pricing update AWS instance pricing adjustment Overview Management mode and scenarios High availability with cross-cloud active-active architecture High availability with cross-cloud primary-standby databases Multi-level caching in shared storage Multi-layer online scaling and on-demand adjustment Deployment modes Storage architecture Product specifications Overview Backup and restore billing SQL audit billing Migrations billing Database proxy billing Binlog service billing Overview of OceanBase Cloud support plans Read-only replica billing Supported database versions Get started with a transactional instance Get started with an analytical instance Get started with a Key-Value instance Overview Overview Create via OceanBase Cloud official website Create via AWS Marketplace Create via GCP Marketplace Create via Huawei Cloud Marketplace Create via Alibaba Cloud Marketplace Create via Azure Marketplace Release an instance Manage tags Manage JVM-dependent services Create a data source Manage data sources Archive data Clean up data Instance recycle bin Overview Core features Create an instance Overview Table overview Export data OceanBase data processing Statistics Materialized views for query acceleration Select a query parallelism level Instance overview Change configuration Modify primary zone Manage parameters Stop and restart instances Release instances Manage tags Archive data Clean up data Use the DBMS_XPLAN package for performance diagnostics Use the GV$SQL_PLAN_MONITOR view for performance analysis Views related to AP performance analysis Performance testing Product integration View instance recycle bin Create an instance Create a tenant Create an account for a database user OBKV HBase data operation 
examples Create an instance OBKV-HBase Overview Create an instance Performance test Overview Connect using a public IP address Data migration and import solutions Data assessment and migration quick start Overview Perform online assessment Perform offline assessment Obtain files for upload Configure PrivateLink Add an IP address to an allowlist Overview Migrations specification Purchase a data migration instance Migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database Migrate data from a MySQL-compatible tenant of OceanBase Database to a MySQL database Migrate data between OceanBase database tenants of the same compatibility mode Migrate data between OceanBase database tenants of different compatibility modes Migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database Migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database Configure a two-way synchronization task Migrate data from an OceanBase database to a Kafka instance

    Connect to OceanBase Cloud by using a Druid connection pool

    Last Updated: 2026-04-07 08:08:33

    This topic describes how to build an application by using a Druid connection pool, OceanBase Connector/J, and OceanBase Cloud. The application can perform basic database operations, including creating tables, inserting data, updating data, deleting data, querying data, and dropping tables.

    You can download the druid-oceanbase-client sample project, which demonstrates how to connect to OceanBase Cloud by using a Druid connection pool in Oracle-compatible mode.

    Prerequisites

    • You have registered an Alibaba Cloud account and created an instance and an Oracle-compatible tenant. For more information, see Create an instance and Create a tenant.

    • You have obtained the connection string of the target Oracle-compatible tenant. For more information, see Obtain the connection string.

    • You have installed JDK 1.8 and Maven.

    • You have installed Eclipse.

      Note

      The code examples in this topic are run in Eclipse IDE for Java Developers 2022-03. You can also use other tools that you prefer to run the code examples.

    Procedure

    Note

    The following steps describe how to compile and run the project on Windows by using Eclipse IDE for Java Developers 2022-03. If you use a different operating system or IDE, the steps may vary slightly.

    Step 1: Import the druid-oceanbase-client project into Eclipse

    1. Start Eclipse and choose File > Open Projects from File System.

    2. In the dialog box that appears, click Directory to select the project directory and then click Finish.

      Note

      When you import a Maven project into Eclipse, Eclipse automatically detects the pom.xml file in the project, downloads the dependency libraries declared in it, and adds them to the project.


    3. View the project.


    Step 2: Modify the database connection information in the druid-oceanbase-client project

    Modify the database connection information in the druid-oceanbase-client/src/main/resources/db.properties file based on the connection string obtained from the prerequisites.

    Sample code:

    ...
    url=jdbc:oceanbase://t5******.********.oceanbase.cloud:1521/test_schema001
    username=test_user
    password=******
    ...
    
    • The connection address is t5******.********.oceanbase.cloud.
    • The access port is 1521.
    • The name of the database to be accessed is test_schema001.
    • The tenant account is test_user.
    • The password is ******.

    Step 3: Run the druid-oceanbase-client project

    1. In the Project Explorer view, locate and expand the druid-oceanbase-client/src/main/java directory.

    2. Right-click the Main.java file and select Run As > Java Application.


    3. View the output results in the Eclipse console window.


    Project code

    Click druid-oceanbase-client to download the project code, which is a compressed file named druid-oceanbase-client.zip.

    After decompressing the file, you will find a folder named druid-oceanbase-client. The directory structure is as follows:

    druid-oceanbase-client
    ├── src
    │   └── main
    │       ├── java
    │       │   └── com
    │       │       └── example
    │       │           └── Main.java
    │       └── resources
    │           └── db.properties
    └── pom.xml
    

    File description:

    • src: the root directory of the source code.
    • main: the main code directory, containing the core logic of the application.
    • java: the directory for Java source code.
    • com: the directory for Java packages.
    • example: the directory for packages of the sample project.
    • Main.java: the main class file, containing logic for creating tables, inserting, deleting, updating, and querying data.
    • resources: the directory for resource files, including configuration files.
    • db.properties: the configuration file for the connection pool, containing relevant database connection parameters.
    • pom.xml: the configuration file for the Maven project, used to manage project dependencies and build settings.

    Introduction to the pom.xml file

    The pom.xml file is a configuration file for Maven projects. It defines project dependencies, plugins, and build rules. Maven is a Java project management tool that can automatically download dependencies, compile, and package projects.

    The code in the pom.xml file in this topic includes the following parts:

    1. File declaration statements.

      This line declares that the file is an XML document, using XML version 1.0 and UTF-8 character encoding.

      Sample code:

      <?xml version="1.0" encoding="UTF-8"?>
      
    2. Configure the namespaces and POM model version.

      1. Use xmlns to specify the POM namespace as http://maven.apache.org/POM/4.0.0.
      2. Use xmlns:xsi to declare the XML Schema instance namespace http://www.w3.org/2001/XMLSchema-instance.
      3. Use xsi:schemaLocation to map the POM namespace http://maven.apache.org/POM/4.0.0 to the location of the POM XSD file, http://maven.apache.org/xsd/maven-4.0.0.xsd.
      4. Use <modelVersion> to specify the POM model version as 4.0.0.

      Sample code:

      <project xmlns="http://maven.apache.org/POM/4.0.0"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
      
       <!-- Other configurations -->
      
      </project>
      
    3. Configure basic information.

      1. Use <groupId> to specify the project's organization as com.example.
      2. Use <artifactId> to specify the project name as druid-oceanbase-client.
      3. Use <version> to specify the project version as 1.0-SNAPSHOT.

      Sample code:

          <groupId>com.example</groupId>
          <artifactId>druid-oceanbase-client</artifactId>
          <version>1.0-SNAPSHOT</version>
      
    4. Configure the properties of the project's source files.

      Specify the Maven compiler plugin as maven-compiler-plugin and set both the source and target Java versions to 8. This means that the project's source code uses Java 8 features, and the compiled bytecode will also be compatible with the Java 8 runtime environment. This configuration ensures that the project can correctly handle Java 8 syntax and features during compilation and runtime.

      Note

      Java 1.8 and Java 8 are different names for the same version.

      Sample code:

          <build>
              <plugins>
                  <plugin>
                      <groupId>org.apache.maven.plugins</groupId>
                      <artifactId>maven-compiler-plugin</artifactId>
                      <configuration>
                          <source>8</source>
                          <target>8</target>
                      </configuration>
                  </plugin>
              </plugins>
          </build>
      
    5. Configure the components that the project depends on.

      1. Add the oceanbase-client dependency library for interacting with the database:

        1. Use <groupId> to specify the dependency's organization as com.oceanbase.
        2. Use <artifactId> to specify the dependency name as oceanbase-client.
        3. Use <version> to specify the dependency version as 2.4.2.

        Note

        This section defines the project's dependency as OceanBase Connector/J V2.4.2. For information about other versions, see OceanBase JDBC driver.

        Sample code:

                <dependency>
                    <groupId>com.oceanbase</groupId>
                    <artifactId>oceanbase-client</artifactId>
                    <version>2.4.2</version>
                </dependency>
        
      2. Add the druid dependency library:

        1. Use <groupId> to specify the dependency's organization as com.alibaba.
        2. Use <artifactId> to specify the dependency name as druid.
        3. Use <version> to specify the dependency version as 1.2.8.

        Sample code:

                <dependency>
                    <groupId>com.alibaba</groupId>
                    <artifactId>druid</artifactId>
                    <version>1.2.8</version>
                </dependency>
        

    Introduction to db.properties

    db.properties is the connection pool configuration file used in this example. It contains the configuration parameters for the connection pool, including the database URL, username, password, and other optional settings.

    The db.properties file in this example primarily includes the following sections:

    1. Configure the database connection parameters.

      1. Specify the class name of the database driver program as com.oceanbase.jdbc.Driver.
      2. Specify the database connection URL, including the host IP, port number, and the schema to be accessed.
      3. Specify the username for the database.
      4. Specify the password for the database.

      Sample code:

      driverClassName=com.oceanbase.jdbc.Driver
      url=jdbc:oceanbase://$host:$port/$schema_name
      username=$user_name
      password=$password
      

      Parameter description:

      • $host: The connection address of the OceanBase Cloud database, obtained from the -h parameter in the connection string.
      • $port: The connection port of the OceanBase Cloud database, obtained from the -P parameter in the connection string.
      • $schema_name: The name of the database to be accessed, obtained from the -D parameter in the connection string.
      • $user_name: The account name, obtained from the -u parameter in the connection string.
      • $password: The account password, obtained from the -p parameter in the connection string.
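As a hypothetical filled-in example (the host, port, schema, and account values below are illustrative placeholders, not real connection details), the four parameters might look like this:

```properties
# Illustrative values only; replace with your own connection details
driverClassName=com.oceanbase.jdbc.Driver
url=jdbc:oceanbase://10.10.10.1:2881/test
username=test_user@test
password=******
```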
    2. Configure other connection pool parameters.

      1. Specify the SQL statement for validating connections as select 1 from dual.
      2. Specify the initial number of connections in the connection pool as 3. This means that 3 initial connections will be created when the connection pool is started.
      3. Specify the maximum number of active connections in the connection pool as 30. This means that the connection pool can have a maximum of 30 active connections at the same time.
      4. Specify whether to print logs for abandoned connections as true. This means that when abandoned connections are recycled, information will be output to the error log. In a test environment, this can be set to true, while in a production environment, it should be set to false to avoid performance issues.
      5. Specify the minimum number of idle connections in the connection pool as 5. This means that when the number of idle connections in the connection pool is less than 5, the connection pool will automatically create new connections.
      6. Specify the maximum wait time for obtaining a connection as 1000 milliseconds. This means that if all connections in the connection pool are occupied and the wait time exceeds 1000 milliseconds, an exception will be thrown when attempting to obtain a connection.
      7. Specify the minimum idle time for a connection as 300000 milliseconds. This means that if a connection is idle for 300000 milliseconds (5 minutes) and has not been used, it will be recycled.
      8. Specify whether to recycle abandoned connections as true. This means that when a connection exceeds the time defined by removeAbandonedTimeout, it will be recycled.
      9. Specify the timeout for abandoned connections as 300 seconds. This means that connections that have not been used for more than 300 seconds (5 minutes) will be recycled.
      10. Specify the interval time for the idle connection recycling thread as 10000 milliseconds. This means that the idle connection recycling thread will execute the idle connection recycling operation every 10000 milliseconds (10 seconds).
      11. Specify whether to validate the availability of a connection when obtaining it as false. Setting this to false can improve performance, but may result in obtaining an unavailable connection.
      12. Specify whether to validate the availability of a connection when returning it as false. Setting this to false can improve performance, but may result in returning an unavailable connection.
      13. Specify whether to validate a connection when it is idle as true. When set to true, the connection pool will periodically execute validationQuery to validate the availability of the connection.
      14. Specify whether to enable the keep-alive feature for long connections as false. Setting this to false means that the keep-alive feature for long connections is disabled.
      15. Specify the idle time threshold for a connection as 60000 milliseconds. This means that if the idle time of a connection exceeds 60000 milliseconds (1 minute), the keep-alive mechanism will check the connection to ensure its availability; any operation on the connection within the threshold resets the idle time. This threshold takes effect only when keepAlive is enabled.

      Sample code:

      validationQuery=select 1 from dual
      initialSize=3
      maxActive=30
      logAbandoned=true
      minIdle=5
      maxWait=1000
      minEvictableIdleTimeMillis=300000
      removeAbandoned=true
      removeAbandonedTimeout=300
      timeBetweenEvictionRunsMillis=10000
      testOnBorrow=false
      testOnReturn=false
      testWhileIdle=true
      keepAlive=false
      keepAliveBetweenTimeMillis=60000
      

    Notice

    The specific configuration of parameters depends on the project requirements and the characteristics of the database. We recommend that you adjust and configure the parameters based on your actual situation.
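For example, a production deployment might turn off the debugging-oriented options that this sample enables for testing. The values below are an illustrative sketch under assumed workload characteristics, not recommendations:

```properties
# Production-leaning sketch (illustrative values only)
logAbandoned=false
# removeAbandoned is mainly a leak-debugging aid; consider disabling it in production
removeAbandoned=false
# Size the pool for the expected concurrency of your application
maxActive=100
maxWait=5000
```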

    Common configuration parameters for the Druid connection pool:

    • url: The URL of the database, including the database type, host name, port number, and database name.
    • username: The username for connecting to the database.
    • password: The password for connecting to the database.
    • driverClassName: The name of the database driver class. If you do not explicitly configure driverClassName, the Druid connection pool automatically identifies the database type (dbType) based on url and selects the corresponding driver class. This automatic identification reduces the configuration workload. However, if url cannot be correctly resolved or a non-standard database driver class is required, you must explicitly configure driverClassName to ensure that the correct driver class is loaded.
    • initialSize: The number of connections created when the connection pool is initialized. When the application starts, the connection pool creates the specified number of connections and adds them to the pool.
    • maxActive: The maximum number of active connections in the connection pool. When the number of active connections reaches this value, subsequent connection requests wait until a connection is released.
    • maxIdle: The maximum number of idle connections in the connection pool (this parameter is deprecated). When the number of idle connections reaches this value, the extra connections are closed.
    • minIdle: The minimum number of idle connections in the connection pool. When the number of idle connections falls below this value, the connection pool creates new connections.
    • maxWait: The maximum time to wait for a connection, in milliseconds. If this value is set to a positive number and the wait exceeds it, an exception is thrown.
    • poolPreparedStatements: Specifies whether to enable the PreparedStatement cache (PSCache) mechanism. If set to true, PreparedStatement objects are cached to improve performance. In this scenario, however, the memory usage of OBProxy may continuously increase, so you must configure and monitor memory usage carefully to avoid memory leaks or overflow.
    • validationQuery: The SQL statement for validating connections. When a connection is taken from the pool, this statement is executed to verify that the connection is valid.
    • timeBetweenEvictionRunsMillis: The interval, in milliseconds, at which the connection pool checks idle connections. During each check, connections whose idle time exceeds minEvictableIdleTimeMillis are closed.
    • minEvictableIdleTimeMillis: The minimum idle time for connections in the pool, in milliseconds. If this value is set to a negative number, idle connections are not recycled.
    • testWhileIdle: Specifies whether to test connections while they are idle. If set to true, validationQuery is executed to verify idle connections.
    • testOnBorrow: Specifies whether to test connections when they are borrowed. If set to true, validationQuery is executed to verify a connection when it is borrowed.
    • testOnReturn: Specifies whether to test connections when they are returned. If set to true, validationQuery is executed to verify a connection when it is returned.
    • filters: The predefined set of filters in the connection pool. These filters preprocess and postprocess connections in a specific order to provide additional features and enhance the performance of the connection pool. Common filters include:
      1. stat: collects performance metrics of the connection pool, such as the number of active connections, request count, and error count.
      2. wall: acts as a SQL firewall that intercepts and blocks unsafe SQL statements to improve database security.
      3. log4j: outputs connection pool logs to log4j for log recording and debugging.
      4. slf4j: outputs connection pool logs to slf4j for log recording and debugging.
      5. config: loads connection pool configuration information from an external configuration file.
      6. encoding: sets the character encoding between the connection pool and the database.
      The connection pool applies the filters in the order specified in the filters property. Separate the names of multiple filters with commas, for example: filters=stat,wall,log4j.
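If this mechanism were applied to the db.properties file in this example, the entry might look as follows; the choice of the stat and wall filters here is illustrative, not part of the sample project:

```properties
# Enable the statistics and SQL-firewall filters (illustrative)
filters=stat,wall
```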

    Main.java code introduction

    The Main.java file is the main program of the sample in this topic. It shows how to interact with a database by using a data source, a connection object, and various database operation methods.

    The Main.java file in this topic contains the following parts:

    1. Import the required classes and interfaces.

      1. Declare the package name of the current code as com.example.
      2. Import the IOException class of Java, which is used to handle input and output exceptions.
      3. Import the InputStream class of Java, which is used to obtain an input stream from a file or other source.
      4. Import the Connection interface of Java, which is used to represent a connection to a database.
      5. Import the ResultSet interface of Java, which is used to represent a dataset of database query results.
      6. Import the SQLException class of Java, which is used to handle SQL exceptions.
      7. Import the Statement interface of Java, which is used to execute SQL statements.
      8. Import the PreparedStatement interface of Java, which is used to execute precompiled SQL statements.
      9. Import the Properties class of Java, which is used to handle property files.
      10. Import the DataSource interface of Java, which is used to manage database connections.
      11. Import the DruidDataSourceFactory class of Alibaba Druid connection pool, which is used to create a Druid data source.

      Sample code:

      package com.example;
      
      import java.io.IOException;
      import java.io.InputStream;
      import java.sql.Connection;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;
      import java.sql.PreparedStatement;
      import java.util.Properties;
      import javax.sql.DataSource;
      import com.alibaba.druid.pool.DruidDataSourceFactory;
      
    2. Create a Main class and define the main method.

      Define a Main class and a main method. The main method is used to demonstrate how to use a connection pool to perform a series of operations on a database. The steps are as follows:

      1. Define a public class named Main as the entry point of the program. The class name must be consistent with the file name.

      2. Define a public static method main as the entry point of the program, which receives command-line parameters.

      3. Use the exception handling mechanism to capture and handle exceptions that may occur.

      4. Call the loadPropertiesFile method to load a property file and return a Properties object.

      5. Call the createDataSource() method to create a data source object based on the configuration in the property file.

      6. Use the try-with-resources statement to obtain a database connection and automatically close the connection after it is used.

        1. Call the createTable() method to create a table.
        2. Call the insertData() method to insert data.
        3. Call the selectData() method to query data.
        4. Call the updateData() method to update data.
        5. Call the selectData() method again to query the updated data.
        6. Call the deleteData() method to delete data.
        7. Call the selectData() method again to query the data after deletion.
        8. Call the dropTable() method to delete the table.

      Sample code:

      public class Main {
      
          public static void main(String[] args) {
              try {
                  Properties properties = loadPropertiesFile();
                  DataSource dataSource = createDataSource(properties);
                  try (Connection conn = dataSource.getConnection()) {
                      // Create table
                      createTable(conn);
                      // Insert data
                      insertData(conn);
                      // Query data
                      selectData(conn);
      
                      // Update data
                      updateData(conn);
                      // Query the updated data
                      selectData(conn);
      
                      // Delete data
                      deleteData(conn);
                      // Query the data after deletion
                      selectData(conn);
      
                      // Drop table
                      dropTable(conn);
                  }
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }
      
          // Define a method for obtaining and using configuration information from the property file
          // Define a method for obtaining a data source object
          // Define a method for creating a table
          // Define a method for inserting data
          // Define a method for updating data
          // Define a method for deleting data
          // Define a method for querying data
          // Define a method for deleting a table
      }
      
    3. Define a method for obtaining and using configuration information from the property file.

      Define a private static method loadPropertiesFile() that is used to load a property file and return a Properties object. The steps are as follows:

      1. Define a private static method loadPropertiesFile() that returns a Properties object and declares that it may throw an IOException exception.
      2. Create a Properties object to store key-value pairs in the property file.
      3. Use the try-with-resources statement to obtain an input stream is of the property file db.properties by using the class loader.
      4. Use the load method to load the properties in the input stream to the properties object.
      5. Return the loaded properties object.

      Sample code:

          private static Properties loadPropertiesFile() throws IOException {
              Properties properties = new Properties();
              try (InputStream is = Main.class.getClassLoader().getResourceAsStream("db.properties")) {
                  properties.load(is);
              }
              return properties;
          }
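Note that getResourceAsStream() returns null when db.properties is not on the classpath, in which case properties.load(is) throws a NullPointerException. A defensive variant (a sketch, not part of the original sample; the resourceName parameter and error message are assumptions) fails with a clearer error:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PropertiesLoader {

    // Load a properties file from the classpath, failing clearly if it is missing.
    static Properties loadPropertiesFile(String resourceName) throws IOException {
        Properties properties = new Properties();
        try (InputStream is = PropertiesLoader.class.getClassLoader().getResourceAsStream(resourceName)) {
            if (is == null) {
                // getResourceAsStream() returned null: the resource is not on the classpath
                throw new IOException(resourceName + " not found on the classpath");
            }
            properties.load(is);
        }
        return properties;
    }

    public static void main(String[] args) {
        try {
            loadPropertiesFile("missing.properties");
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```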
      
    4. Define a method for obtaining a data source object.

      Define a private static method createDataSource() that is used to create a DataSource object based on the configuration in the property file. The DataSource object is used to manage and obtain database connections. The steps are as follows:

      1. Define a private static method createDataSource() that receives a Properties object as a parameter and declares that it may throw an Exception exception.
      2. Call the createDataSource() method of the DruidDataSourceFactory class and pass the properties parameter to return a DataSource object.

      Sample code:

          private static DataSource createDataSource(Properties properties) throws Exception {
              return DruidDataSourceFactory.createDataSource(properties);
          }
      
    5. Define a method for creating a table.

      Define a private static method createTable() that is used to create a data table in a database. The steps are as follows:

      1. Define a private static method createTable() that takes a Connection object as a parameter and declares that it may throw an SQLException.
      2. Use the try-with-resources statement to create a Statement object stmt by calling the createStatement() method of the connection object conn.
      3. Define a string variable sql to store the SQL statement for creating the table.
      4. Use the executeUpdate() method to execute the SQL statement and create the data table.
      5. Print a message indicating that the table was created successfully.

      Sample code:

          private static void createTable(Connection conn) throws SQLException {
              try (Statement stmt = conn.createStatement()) {
                  String sql = "CREATE TABLE test_druid (id NUMBER, name VARCHAR2(20))";
                  stmt.executeUpdate(sql);
                  System.out.println("Table created successfully.");
              }
          }
      
    6. Define a method for inserting data.

      Define a private static method insertData() for inserting data into the database. The steps are as follows:

      1. Define a private static method insertData() that takes a Connection object as a parameter and declares that it may throw an SQLException.

      2. Define a string variable insertDataSql to store the SQL statement for inserting data.

      3. Define an integer variable insertedRows initialized to 0 to record the number of rows inserted.

      4. Use the try-with-resources statement to create a PreparedStatement object insertDataStmt by calling the prepareStatement() method of the connection object conn with the SQL statement for inserting data.

      5. Use a for loop to iterate 5 times, representing the insertion of 5 data records.

        1. Use the setInt() method to set the value of the first parameter to the loop variable i.
        2. Use the setString() method to set the value of the second parameter to the string test_insert concatenated with the value of the loop variable i.
        3. Use the executeUpdate() method to execute the SQL statement for inserting data and accumulate the number of affected rows to the insertedRows variable.
      6. Print a message indicating that the data was inserted successfully, along with the total number of rows inserted.

      7. Return the total number of rows inserted.

      Sample code:

          private static int insertData(Connection conn) throws SQLException {
              String insertDataSql = "INSERT INTO test_druid (id, name) VALUES (?, ?)";
              int insertedRows = 0;
              try (PreparedStatement insertDataStmt = conn.prepareStatement(insertDataSql)) {
                  for (int i = 1; i < 6; i++) {
                      insertDataStmt.setInt(1, i);
                      insertDataStmt.setString(2, "test_insert" + i);
                      insertedRows += insertDataStmt.executeUpdate();
                  }
                  System.out.println("Data inserted successfully. Inserted rows: " + insertedRows);
              }
              return insertedRows;
          }
      
    7. Define a method for updating data.

      Define a private static method updateData() for updating data in the database. The steps are as follows:

      1. Define a private static method updateData() that takes a Connection object as a parameter and declares that it may throw an SQLException.
      2. Use the try-with-resources statement to create a PreparedStatement object pstmt by calling the prepareStatement() method of the connection object conn with the SQL statement for updating data.
      3. Use the setString() method to set the value of the first parameter to the string test_update.
      4. Use the setInt() method to set the value of the second parameter to the integer value 3.
      5. Use the executeUpdate() method to execute the SQL statement for updating data and assign the number of affected rows to the updatedRows variable.
      6. Print a message indicating that the data was updated successfully, along with the total number of rows updated.

      Sample code:

          private static void updateData(Connection conn) throws SQLException {
              try (PreparedStatement pstmt = conn.prepareStatement("UPDATE test_druid SET name = ? WHERE id = ?")) {
                  pstmt.setString(1, "test_update");
                  pstmt.setInt(2, 3);
                  int updatedRows = pstmt.executeUpdate();
                  System.out.println("Data updated successfully. Updated rows: " + updatedRows);
              }
          }
      
    8. Define a method for deleting data.

      Define a private static method deleteData() for deleting data from the database. The steps are as follows:

      1. Define a private static method deleteData() that takes a Connection object as a parameter and declares that it may throw an SQLException.
      2. Use the try-with-resources statement to create a PreparedStatement object pstmt by calling the prepareStatement() method of the connection object conn with the SQL statement for deleting data.
      3. Use the setInt() method to set the value of the first parameter to the integer value 3.
      4. Use the executeUpdate() method to execute the SQL statement for deleting data and assign the number of affected rows to the deletedRows variable.
      5. Print a message indicating that the data was deleted successfully, along with the total number of rows deleted.

      Sample code:

          private static void deleteData(Connection conn) throws SQLException {
              try (PreparedStatement pstmt = conn.prepareStatement("DELETE FROM test_druid WHERE id < ?")) {
                  pstmt.setInt(1, 3);
                  int deletedRows = pstmt.executeUpdate();
                  System.out.println("Data deleted successfully. Deleted rows: " + deletedRows);
              }
          }
      
    9. Define a method for querying data.

      Define a private static method selectData() for querying data from the database. The steps are as follows:

      1. Define a private static method selectData() that takes a Connection object as a parameter and declares that it may throw an SQLException.

      2. Use the try-with-resources statement to create a Statement object stmt by calling the createStatement() method of the connection object conn.

      3. Define a string variable sql to store the SQL statement for querying data.

      4. Use the executeQuery() method to execute the SQL statement for querying data and assign the result set to the resultSet variable.

      5. Use a while loop to traverse each row in the result set.

        1. Use the getInt() method to retrieve the integer value of the id field in the current row and assign it to the id variable.
        2. Use the getString() method to retrieve the string value of the name field in the current row and assign it to the name variable.
        3. Print the values of the id and name fields in the current row.

      Sample code:

          private static void selectData(Connection conn) throws SQLException {
              try (Statement stmt = conn.createStatement()) {
                  String sql = "SELECT * FROM test_druid";
                  ResultSet resultSet = stmt.executeQuery(sql);
                  while (resultSet.next()) {
                      int id = resultSet.getInt("id");
                      String name = resultSet.getString("name");
                      System.out.println("id: " + id + ", name: " + name);
                  }
              }
          }
      
    10. Define a method for dropping the table.

      Define a private static method dropTable() for dropping the table in the database. The steps are as follows:

      1. Define a private static method dropTable() that takes a Connection object as a parameter and declares that it may throw an SQLException.
      2. Use the try-with-resources statement to create a Statement object stmt by calling the createStatement() method of the connection object conn.
      3. Define a string variable sql to store the SQL statement for dropping the table.
      4. Use the executeUpdate() method to execute the SQL statement for dropping the table.
      5. Print a message indicating that the table was dropped successfully.

      Sample code:

          private static void dropTable(Connection conn) throws SQLException {
              try (Statement stmt = conn.createStatement()) {
                  String sql = "DROP TABLE test_druid";
                  stmt.executeUpdate(sql);
                  System.out.println("Table dropped successfully.");
              }
          }
      

    Complete code

    pom.xml
    db.properties
    Main.java
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
    
        <groupId>com.example</groupId>
        <artifactId>druid-oceanbase-client</artifactId>
        <version>1.0-SNAPSHOT</version>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                        <source>8</source>
                        <target>8</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    
        <dependencies>
            <dependency>
                <groupId>com.oceanbase</groupId>
                <artifactId>oceanbase-client</artifactId>
                <version>2.4.2</version>
            </dependency>
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>druid</artifactId>
                <version>1.2.8</version>
            </dependency>
        </dependencies>
    </project>
    
    # Database Configuration
    driverClassName=com.oceanbase.jdbc.Driver
    url=jdbc:oceanbase://$host:$port/$schema_name
    username=$user_name
    password=$password
    
    # Connection Pool Configuration
    # SQL statement used to validate connections. In MySQL mode use "select 1"; in Oracle mode use "select 1 from dual".
    validationQuery=select 1 from dual
    # Initial number of connections created when the pool starts
    initialSize=3
    # Maximum number of active connections in the pool
    maxActive=30
    # Whether to log information when abandoned connections are recycled. Set to true in test environments; set to false in production to avoid the performance overhead.
    logAbandoned=true
    # Minimum number of idle connections kept in the pool
    minIdle=5
    # Maximum time, in milliseconds, to wait for a connection before an exception is thrown
    maxWait=1000
    # Minimum time, in milliseconds, that a connection may sit idle before it is eligible for eviction
    minEvictableIdleTimeMillis=300000
    # Whether to recycle connections that exceed removeAbandonedTimeout
    removeAbandoned=true
    # Timeout, in seconds, after which an unused connection is considered abandoned (currently 5 minutes). Increase this value if business operations may take longer.
    removeAbandonedTimeout=300
    # Interval, in milliseconds, at which the idle-connection eviction (Destroy) thread runs; used together with testWhileIdle
    timeBetweenEvictionRunsMillis=10000
    # Whether to validate a connection when it is borrowed. false improves performance but may hand out an unavailable connection.
    testOnBorrow=false
    # Whether to validate a connection when it is returned to the pool
    testOnReturn=false
    # Whether to validate idle connections. When true, validationQuery is executed for connections whose idle time exceeds timeBetweenEvictionRunsMillis. This ensures safety with little performance cost.
    testWhileIdle=true
    # Whether to enable keep-alive for long connections. Defaults to false; when true, the eviction thread also performs keep-alive checks.
    keepAlive=false
    # Idle time threshold, in milliseconds, beyond which a connection is checked by the keep-alive mechanism (effective only when keepAlive is true)
    keepAliveBetweenTimeMillis=60000
    
    package com.example;
    
    import java.io.IOException;
    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.sql.PreparedStatement;
    import java.util.Properties;
    import javax.sql.DataSource;
    import com.alibaba.druid.pool.DruidDataSourceFactory;
    
    public class Main {
    
        public static void main(String[] args) {
            try {
                Properties properties = loadPropertiesFile();
                DataSource dataSource = createDataSource(properties);
                try (Connection conn = dataSource.getConnection()) {
                    // Create table
                    createTable(conn);
                    // Insert data
                    insertData(conn);
                    // Query data
                    selectData(conn);
    
                    // Update data
                    updateData(conn);
                    // Query the updated data
                    selectData(conn);
    
                    // Delete data
                    deleteData(conn);
                    // Query the data after deletion
                    selectData(conn);
    
                    // Drop table
                    dropTable(conn);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    
        private static Properties loadPropertiesFile() throws IOException {
            Properties properties = new Properties();
            try (InputStream is = Main.class.getClassLoader().getResourceAsStream("db.properties")) {
                properties.load(is);
            }
            return properties;
        }
    
        private static DataSource createDataSource(Properties properties) throws Exception {
            return DruidDataSourceFactory.createDataSource(properties);
        }
    
        private static void createTable(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                String sql = "CREATE TABLE test_druid (id NUMBER, name VARCHAR2(20))";
                stmt.executeUpdate(sql);
                System.out.println("Table created successfully.");
            }
        }
    
        private static int insertData(Connection conn) throws SQLException {
            String insertDataSql = "INSERT INTO test_druid (id, name) VALUES (?, ?)";
            int insertedRows = 0;
            try (PreparedStatement insertDataStmt = conn.prepareStatement(insertDataSql)) {
                for (int i = 1; i < 6; i++) {
                    insertDataStmt.setInt(1, i);
                    insertDataStmt.setString(2, "test_insert" + i);
                    insertedRows += insertDataStmt.executeUpdate();
                }
                System.out.println("Data inserted successfully. Inserted rows: " + insertedRows);
            }
            return insertedRows;
        }
    
        private static void updateData(Connection conn) throws SQLException {
            try (PreparedStatement pstmt = conn.prepareStatement("UPDATE test_druid SET name = ? WHERE id = ?")) {
                pstmt.setString(1, "test_update");
                pstmt.setInt(2, 3);
                int updatedRows = pstmt.executeUpdate();
                System.out.println("Data updated successfully. Updated rows: " + updatedRows);
            }
        }
    
        private static void deleteData(Connection conn) throws SQLException {
            try (PreparedStatement pstmt = conn.prepareStatement("DELETE FROM test_druid WHERE id < ?")) {
                pstmt.setInt(1, 3);
                int deletedRows = pstmt.executeUpdate();
                System.out.println("Data deleted successfully. Deleted rows: " + deletedRows);
            }
        }
    
        private static void selectData(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement();
                 ResultSet resultSet = stmt.executeQuery("SELECT * FROM test_druid")) {
                while (resultSet.next()) {
                    int id = resultSet.getInt("id");
                    String name = resultSet.getString("name");
                    System.out.println("id: " + id + ", name: " + name);
                }
            }
        }
    
        private static void dropTable(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                String sql = "DROP TABLE test_druid";
                stmt.executeUpdate(sql);
                System.out.println("Table dropped successfully.");
            }
        }
    }
    

    References

    For more information about OceanBase Connector/J, see OceanBase JDBC driver.
