OceanBase

A unified distributed database ready for your transactional, analytical, and AI workloads.

DEPLOY YOUR WAY

OceanBase Cloud

The best way to deploy and scale OceanBase

OceanBase Enterprise

Run and manage OceanBase on your own infrastructure

TRY OPEN SOURCE

OceanBase Community Edition

The free, open-source distributed database

OceanBase seekdb

An open-source, AI-native search database

Customer Stories

Real-world success stories from enterprises across diverse industries.

View All
BY USE CASES

Mission-Critical Transactions

Global & Multicloud Applications

Elastic Scaling for Peak Traffic

Real-time Analytics

Active Geo-redundancy

Database Consolidation

Resources

Comprehensive knowledge hub for OceanBase.

Blog

Live Demos

Training & Certification

Documentation

Official technical guides, tutorials, API references, and manuals for all OceanBase products.

View All
PRODUCTS

OceanBase Cloud

OceanBase Database

Tools

Connectors and Middleware

QUICK START

OceanBase Cloud

OceanBase Database

BEST PRACTICES

Practical guides for using OceanBase more effectively

Company

Learn more about OceanBase – our company, partnerships, and trust and security initiatives.

About OceanBase

Partner

Trust Center

Contact Us


All Products
    • Databases
    • OceanBase Database
    • OceanBase Cloud
    • OceanBase Tugraph
    • Interactive Tutorials
    • OceanBase Best Practices
    • Tools
    • OceanBase Cloud Platform
    • OceanBase Migration Service
    • OceanBase Developer Center
    • OceanBase Migration Assessment
    • OceanBase Admin Tool
    • OceanBase Loader and Dumper
    • OceanBase Deployer
    • Kubernetes operator for OceanBase
    • OceanBase Diagnostic Tool
    • OceanBase Binlog Service
    • Connectors and Middleware
    • OceanBase Database Proxy
    • Embedded SQL in C for OceanBase
    • OceanBase Call Interface
    • OceanBase Connector/C
    • OceanBase Connector/J
    • OceanBase Connector/ODBC
    • OceanBase Connector/NET

OceanBase Cloud

  • Product Updates & Announcements
    • What's new
      • Release notes for 2026
      • Release notes for 2025
      • Release notes for 2024
      • Release history
    • Product announcements
      • Data development module deprecation notice
      • Optimization of Backup and Restore commercialization strategy
      • Cross-AZ data transfer billing (OceanBase Cloud on AWS)
      • Database Proxy pricing update
      • AWS instance pricing adjustment
  • Product Introduction
    • Overview
    • Management mode and scenarios
    • Core features
      • High availability with cross-cloud active-active architecture
      • High availability with cross-cloud primary-standby databases
      • Multi-level caching in shared storage
      • Multi-layer online scaling and on-demand adjustment
    • Deployment modes
    • Storage architecture
    • Product specifications
    • Product billing
      • Overview
      • Instance billing
        • Tencent Cloud instance billing
        • Alibaba Cloud instance billing
        • Huawei Cloud instance billing
        • AWS instance billing
        • GCP instance billing
      • Backup and restore billing
      • SQL audit billing
      • Migrations billing
      • Database proxy billing
      • Binlog service billing
      • Overview of OceanBase Cloud support plans
      • Read-only replica billing
    • Supported database versions
  • Get Started
    • Get started with a transactional instance
    • Get started with an analytical instance
    • Get started with a Key-Value instance
  • Work with Transactional Instances
    • Overview
    • Create an instance
      • Overview
      • Create via OceanBase Cloud official website
      • Create via AWS Marketplace
      • Create via GCP Marketplace
      • Create via Huawei Cloud Marketplace
      • Create via Alibaba Cloud Marketplace
      • Create via Azure Marketplace
    • Connect to an instance
      • MySQL compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using Alibaba Cloud VPC
          • Connect using a public IP address
          • Connect using a Huawei Cloud peering connection
        • Connect with clients
          • Connect to OceanBase Cloud by using Client ODC
          • Connect to OceanBase Cloud by using a MySQL client
          • Connect to OceanBase Cloud by using OBClient
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using SpringBoot
            • SpringBatch sample application for connecting to OceanBase Cloud
            • Spring JDBC
            • SpringDataJPA sample application for connecting to OceanBase Cloud
            • Hibernate application development with OceanBase Cloud
            • Sample program for connecting to OceanBase Cloud
            • Connector/J
            • Use TestContainers to connect to and use OceanBase Cloud
          • Python
            • Connect to OceanBase Cloud using mysqlclient
            • Connect to OceanBase Cloud using PyMySQL
            • Use the MySQL-connector-python driver to connect to and use OceanBase Cloud
            • Use SQLAlchemy to connect to an OceanBase Cloud database
            • Connect to an OceanBase Cloud database using Django
            • Connect to an OceanBase Cloud database by using peewee
          • C
            • Use MySQL Connector/C to connect to OceanBase Cloud
          • Go
            • Connect to OceanBase Cloud using the Go-SQL-Driver/MySQL driver
            • Connect to OceanBase Cloud using GORM
          • PHP
            • Use the EXT driver to connect to OceanBase Cloud
            • Connect to OceanBase Cloud by using the MySQLi driver
            • Use the PDO driver to connect to OceanBase Cloud
          • Rust
            • Rust application example for connecting to OceanBase Cloud
            • SeaORM example for connecting to OceanBase Cloud
          • Ruby
            • ActiveRecord sample application for OceanBase Cloud
            • Connect to OceanBase Cloud by using mysql2
            • Connect to OceanBase Cloud by using Sequel
        • Use database connection pool
          • Database connection pool configuration
          • Connect to OceanBase Cloud by using a Tomcat connection pool
          • Connect to OceanBase Cloud by using a C3P0 connection pool
          • Connect to OceanBase Cloud by using a Proxool connection pool
          • Connect to OceanBase Cloud by using a HikariCP connection pool
          • Connect to OceanBase Cloud by using a DBCP connection pool
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
      • Oracle compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using a public IP address
        • Connect with clients
          • Connect to OceanBase Cloud by using OBClient
          • Connect to OceanBase Cloud by using Client ODC
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using OceanBase Connector/J
            • Connect to OceanBase Cloud by using Spring Boot
            • SpringBatch application example for connecting to OceanBase Cloud
            • Connect to OceanBase Cloud using Spring JDBC
            • Connect to OceanBase Cloud by using Spring Data JPA
            • Connect to OceanBase Cloud by using Hibernate
            • Use MyBatis to connect to OceanBase Cloud
            • Use JFinal to connect to OceanBase Cloud
          • Python
            • Python Driver for Oracle Mode
          • C
            • Connect to OceanBase Cloud using OceanBase Connector/C
            • Connect to OceanBase Cloud using OceanBase Connector/ODBC
            • Use SqlSugar to connect to OceanBase Cloud
        • Use database connection pool
          • Database connection pool configuration
          • Sample program that uses a Tomcat connection pool to connect to OceanBase Cloud
          • Connect to OceanBase Cloud by using a C3P0 connection pool
          • Connect to OceanBase Cloud using Proxool connection pool
          • Sample program that uses HikariCP to connect to OceanBase Cloud
          • Use DBCP connection pool to connect to OceanBase Cloud
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
    • Developer guide
      • MySQL compatible mode
        • Plan database objects
          • Create a database
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use the CASE conditional operator in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
            • Use the SELECT ... LOCK IN SHARE MODE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
            • Release a savepoint
          • Commit a transaction
          • Roll back a transaction
      • Oracle compatible mode
        • Plan database objects
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use CASE functions in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
          • Commit a transaction
          • Roll back a transaction
    • Manage instances
      • Manage instances
        • View the instance list
        • Instance overview
        • Stop and restart instances
        • Unit migration
      • Manage tenants
        • Tenant overview
        • Create a tenant
        • Modify tenant specifications
        • Modify tenant names
        • Add an endpoint
        • Resource isolation
          • Overview
          • Manage resource groups
            • Create a resource group
            • View a resource group
            • Edit a resource group
            • Delete a resource group
          • Manage isolation rules
            • Create an isolation rule
            • View isolation rules
            • Edit an isolation rule
            • Delete an isolation rule
        • Modify primary zone
        • Modify the maximum number of connections for a tenant proxy
        • Monitor tenant performance
          • Overview
          • View performance and SQL monitoring details
          • View transaction monitoring details
          • View storage and cache monitoring details
          • View Binlog service monitoring
          • Customize a monitoring dashboard for a tenant
        • Diagnostics
          • Real-time diagnostics
            • SQL diagnostics
              • Top SQL
              • Slow SQL
              • Suspicious SQL
              • High-risk SQL
            • SQL audit
        • Manage tenant parameters
          • Manage tenant parameters
          • Parameters for tenants
          • Parameter template overview
        • Delete a tenant
        • Manage databases and accounts
          • Create accounts
          • Manage accounts
          • Create a database (MySQL compatible mode)
          • Manage databases (MySQL compatible mode)
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor multidimensional metrics of an instance
        • Monitor the performance of hosts in an instance
        • Monitor database proxy
        • Monitor database proxy hosts
        • Monitor cross-cloud network performance
        • Customize a monitoring dashboard for an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Manage instance parameters
        • Manage parameters
        • Parameters for cluster instances
      • Change instance configurations
        • Enable storage auto-scaling
        • View history of configuration changes
        • Change configuration
        • Change configuration temporarily
        • Switch the deployment mode
      • Manage standby instances
        • Overview
        • Create a standby instance
        • Create a cross-cloud standby instance
        • Create a standby instance for an Alibaba Cloud primary instance
        • View details of primary and standby instances
        • Configure global endpoint
        • Enable automatic forwarding for write requests of standby databases
        • Primary-standby instance switchover
        • Initiate failover
        • Detach a standby instance
        • Release a standby instance
      • Release an instance
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert rule template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Backup and restore
        • Overview
        • Backup strategy
        • Initiate a backup immediately
        • Data backup
        • Initiate a restore
        • Data restore
        • Restore data from the instance recycle bin
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • One-click diagnostics
          • Initiate one-click diagnostics
          • View one-click diagnostic report
            • Exceptions
            • Real-time diagnostics
            • Optimization suggestions
            • Capacity management
            • Security management
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Request analysis
            • Request analysis
        • Root cause diagnostics
          • Exception handling
          • Enable system autonomy
        • SQL audit
        • Materialized view analysis
        • Optimization center
          • Optimization suggestions
          • Manage active outlines
          • SQL review
          • View the optimization history
      • Manage tags
      • Manage read-only replicas
        • Overview
        • Instance read-only replicas
          • Add a read-only replica to an instance
          • View read-only replicas of an instance
          • Manage read-only replicas of an instance
          • Delete a read-only replica of an instance
        • Tenant read-only replicas
          • Add a read-only replica to a tenant
          • View read-only replicas of a tenant
          • Manage read-only replicas of a tenant
          • Delete a read-only replica of a tenant
      • Manage JVM-dependent services
    • Data source management
      • Create a data source
      • Manage data sources
      • User privileges
        • User privileges for compatibility assessment
        • User privileges for data migration
        • User privileges for performance assessment
        • User privileges for data archiving
        • User privileges for data cleanup
      • Connect via private network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
        • Private IP address segments
      • Connect via public network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Manage recycle bin
      • Instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance-level recycle bin
        • Tenant-level recycle bin
  • Work with Analytical Instances
    • Overview
    • Core features
    • Create an instance
    • Connect to an instance
      • Overview
      • Get connection string
        • Overview
        • Connect using AWS PrivateLink
        • Connect using a public IP address
      • Connect with clients
        • Connect to OceanBase Cloud by using Client ODC
        • Connect to OceanBase Cloud by using a MySQL client
        • Connect to OceanBase Cloud by using OBClient
      • Connect with drivers
        • Java
          • Connect to OceanBase Cloud by using Spring Boot
          • Connect to OceanBase Cloud by using Spring Batch
          • Connect to OceanBase Cloud by using Spring Data JDBC
          • Connect to OceanBase Cloud by using Spring Data JPA
          • Connect to OceanBase Cloud by using Hibernate
          • Connect to OceanBase Cloud by using MyBatis
          • Connect to OceanBase Cloud using MySQL Connector/J
        • Python
          • Connect to OceanBase Cloud by using mysqlclient
          • Connect to OceanBase Cloud by using PyMySQL
          • Connect to OceanBase Cloud using MySQL Connector/Python
        • C
          • Connect to OceanBase Cloud using MySQL Connector/C
        • Go
          • Connect to OceanBase Cloud using Go-SQL-Driver/MySQL
        • PHP
          • Connect to OceanBase Cloud using PHP
      • Use database connection pool
        • Database connection pool configuration
        • Connect to OceanBase Cloud by using a Tomcat connection pool
        • Connect to OceanBase Cloud by using a C3P0 connection pool
        • Connect to OceanBase Cloud by using a Proxool connection pool
        • Connect to OceanBase Cloud by using a HikariCP connection pool
        • Connect to OceanBase Cloud by using a DBCP connection pool
        • Connect to OceanBase Cloud by using Commons Pool
        • Connect to OceanBase Cloud by using a Druid connection pool
    • Data table design
      • Table overview
      • Best practices
        • Unit 1: Best practices for optimizing storage structures and query performance
        • Unit 2: Best practices for creating special indexes
    • Export data
    • OceanBase data processing
    • Query acceleration
      • Statistics
      • Materialized views for query acceleration
      • Select a query parallelism level
    • Manage instances
      • Instance overview
      • Change configuration
      • Modify primary zone
      • Manage parameters
      • Backup and restore
        • Backup overview
        • Backup strategies
        • Immediate backup
        • Data backup
        • Initiate restore
        • Data restore
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor the performance of hosts in an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Optimization management
            • Manage active outlines
            • View the optimization history
          • Request analysis
            • Request analysis
      • Stop and restart instances
      • Release instances
      • Manage databases and accounts
        • Create and manage accounts
        • Create a database
        • Manage databases
      • Manage tags
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Performance diagnosis and tuning
      • Use the DBMS_XPLAN package for performance diagnostics
      • Use the GV$SQL_PLAN_MONITOR view for performance analysis
      • Views related to AP performance analysis
    • Performance testing
    • Product integration
    • Manage recycle bin
      • View instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance recycle bin
  • Work with Key-Value Instances
    • Try out Key-Value instances
      • Create an instance
      • Create a tenant
      • Create an account for a database user
      • OBKV HBase data operation examples
    • Use Table model
      • Create an instance
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-Table
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
    • Use HBase model
      • OBKV-HBase Overview
      • Create an instance
      • Develop in HBase model
        • Connect to an instance by using the OBKV-HBase client
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-HBase
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
      • Performance test
    • Connect Key-Value instances
      • Overview
      • Connect using a public IP address
  • Migrations
    • Data migration and import solutions
    • Data assessment and migration quick start
    • Assess compatibility
      • Overview
      • Perform online assessment
      • Perform offline assessment
      • Manage compatibility assessment tasks
        • View a compatibility assessment task
        • View and download a compatibility assessment report
        • Stop a compatibility assessment task
        • Delete a compatibility assessment task
      • Obtain files for upload
      • Configure PrivateLink
      • Add an IP address to an allowlist
    • Migrate data
      • Overview
      • Migrations specification
      • Purchase a data migration instance
      • Migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database
      • Migrate data from a MySQL-compatible tenant of OceanBase Database to a MySQL database
      • Migrate data between OceanBase database tenants of the same compatibility mode
      • Migrate data between OceanBase database tenants of different compatibility modes
      • Migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database
      • Migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database
      • Configure a two-way synchronization task
      • Migrate data from an OceanBase database to a Kafka instance
      • Migrate data from a TiDB database to a MySQL-compatible tenant of OceanBase Database
      • Migrate incremental data from a MySQL-compatible tenant of OceanBase Database to a TiDB Database
      • Migrate data from a PostgreSQL database to an OceanBase database
      • Migrate incremental data from an OceanBase Database to a PostgreSQL database
      • Manage data migration tasks
        • View details of a data migration task
        • Rename a data migration task
        • View and modify migration objects
        • View and modify migration parameters
        • Configure alert monitoring
        • Manage data migration tasks by using tags
        • Start, stop, and resume a data migration task
        • Clone a data migration task
        • Terminate and release a data migration task
      • Features
        • Custom DML/DDL configurations
        • DDL synchronization scope
        • Use SQL conditions to filter data
        • Rename a migration object
        • Set an incremental synchronization timestamp
        • Instructions on schema migration
        • Configure and modify matching rules
        • Wildcard rules
        • Import migration objects
        • Download conflict data
        • Change a topic
        • Column filtering
        • Data formats
      • Authorize an Alibaba Cloud account
      • SQL statements for querying table objects
      • Online DDL tools
      • Create a trigger
      • Modify the log level of a self-managed PostgreSQL instance
      • Supported DDL statements for synchronization and their limitations
        • DDL synchronization from Aurora MySQL DB clusters to MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from MySQL-compatible tenants of OceanBase Database to Aurora MySQL DB clusters
        • DDL synchronization between MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle databases to Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle-compatible tenants of OceanBase Database to Oracle databases
        • DDL synchronization between Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from OceanBase databases to Kafka instances
    • Data subscription
      • Create a data subscription task
      • Manage data subscription tasks
        • View details of a data subscription task
        • Configure subscription information
        • Modify the name of a data subscription task
        • View and modify subscription objects
        • View data subscription parameters
        • Set up data subscription alerts
        • Start, stop, and resume data subscription tasks
        • Clone a data subscription task
        • Release a data subscription task
      • Manage private connections for data subscriptions
      • Configure consumer subscription
      • Message formats
    • Data validation
      • Overview
      • Create a data validation task
      • Manage data validation tasks
        • View details of a data validation task
        • View and modify validation objects
        • View and modify validation parameters
        • Manage data validation tasks with tags
        • Start, pause, and resume data validation tasks
        • Clone a data validation task
        • Release a data validation task
      • Features
        • Import validation objects
        • Rename the validation object
        • Filter objects by using SQL conditions
        • Configure the matching rules for the validation object
    • Assess performance
      • Overview
      • Obtain traffic files from a database instance
      • Create a full performance assessment task
      • Create an SQL file parsing task
      • Create an SQL file replay task
      • Manage performance assessment tasks
        • View the details of a performance assessment task
        • View a performance assessment report
        • Retry and stop a performance assessment task
        • Delete a performance assessment task
      • Obtain a database instance
      • Create an access key
    • Import data
      • Import data
      • Direct load
      • Supported file formats and encoding formats for Data Import
      • Sample data introduction
    • Binlog service
      • Overview
      • Purchase the Binlog service
      • Manage the Binlog service
        • View details of the Binlog service
        • Change configuration
        • Modify the auto-scaling strategy for storage space
        • Modify the elasticity strategy for compute units
        • Disable the Binlog service
  • Security
    • OceanBase Cloud account settings
      • Modify login password
      • Multi-factor authentication
      • Manage AccessKeys
      • Time zone settings
      • Manage cloud marketplace accounts
      • Account audit
    • Organizations and projects
      • Overview
      • Manage organization information
      • Project management
        • Manage projects
        • Cross-project bidirectional authorization
        • Subscribe to project messages
      • Manage members
      • Permissions for roles
      • Cost management
        • Overview
        • Cost details
        • Manage cost units
      • Operation audit
    • Database accounts and privileges
      • Account privileges
      • Authorize cloud vendor accounts
      • AWS KMS key management
      • Support access control
    • Security and encryption
      • Set allowlist groups
      • SSL encryption
      • Transparent Data Encryption (TDE)
    • Monitoring dashboard
    • Events
  • SQL Console
    • Overview
    • Access SQL Console
    • SQL editing and execution
    • PL compilation
    • Result set editing
    • Execution analysis
    • Database object management
      • Create a table
      • Create a view
      • Create a function
      • Create a stored procedure
      • Create a program package
      • Create a trigger
      • Create a type
      • Create a sequence
      • Create a synonym
    • Session variable management
    • Functional keys in SQL Console
  • Integrations
    • Overview
    • Schema evolution
      • Liquibase
      • Flyway
    • Data ingestion
      • Canal
      • dbt
      • Debezium
      • Flink
      • Glue
      • Informatica Cloud
      • Kafka
      • Maxwell
      • SeaTunnel
      • DataWorks
      • NiFi
    • SQL development
      • DataGrip
      • DBeaver
      • Navicat
      • TablePlus
    • Orchestration
      • DolphinScheduler
      • Linkis
      • Airflow
    • Visualization
      • Grafana
      • Power BI
      • Quick BI
      • Superset
      • Tableau
    • Observability
      • Datadog
      • Prometheus
    • Database management
      • Bytebase
    • AI
      • LlamaIndex
      • Dify
      • LangChain
      • Tongyi Qianwen
      • OpenAI
      • n8n
      • Trae
      • SpringAI
      • Cline
      • Cursor
      • Continue
      • Toolbox
      • CamelAI
      • Firecrawl
      • Hugging Face
      • Ollama
      • Google Gemini
      • Cloudflare Workers AI
      • Jina AI
      • Augment Code
      • Claude Code
      • Kiro
    • Development tools
      • Cloudflare Workers
      • Vercel
  • Best practices
    • Best practices for achieving high availability through cross-cloud active-active deployment
    • High availability through cross-cloud primary-standby databases (1:1)
    • High availability through cross-cloud primary-standby databases (1:n)
    • High host CPU usage
    • Best practices for read/write splitting in OceanBase Cloud
  • References
    • System architecture
    • System management
    • Database object management
    • Database design and specification constraints
    • SQL reference
    • System views
    • Parameters and system variables
    • Error codes
    • Performance tuning
    • Open API References
      • Overview
      • Service endpoints
      • Using API
      • Open API List
        • Cluster management
          • DescribeInstances
          • DescribeInstance
          • CreateInstance
          • DeleteInstance
          • ModifyInstanceName
          • DescribeNodeOptions
          • StopCluster
          • StartCluster
          • ModifyInstanceSpec
          • DescribeInstanceTopology
          • DescribeReadonlyInstances
          • CreateReadonlyInstance
          • ModifyReadonlyInstanceSpec
          • ModifyReadonlyInstanceDiskSize
          • ModifyReadonlyInstanceNodeNum
          • DeleteReadonlyInstance
          • DescribeInstanceAvailableRoZones
          • DescribeInstanceParameters
          • UpdateInstanceParameters
          • DescribeInstanceParametersHistory
          • ModifyInstanceTagList
          • ModifyInstanceNodeNum
        • Tenant management
          • DescribeTenants
          • DescribeTenant
          • CreateTenants
          • DeleteTenants
          • ModifyTenantName
          • ModifyTenant
          • ModifyTenantUserDescription
          • ModifyTenantUserStatus
          • GetTenantCreateConstraints
          • ModifyTenantPrimaryZone
          • GetTenantCreateCpuConstraints
          • GetTenantCreateMemConstraints
          • GetTenantModifyCpuConstraints
          • GetTenantModifyMemConstraints
          • CreateTenantSecurityIpGroup
          • DescribeTenantSecurityIpGroups
          • ModifyTenantSecurityIpGroup
          • DeleteTenantSecurityIpGroup
          • DescribeTenantPrivateLink
          • DeletePrivatelinkConnection
          • CreatePrivatelinkService
          • ConnectPrivatelinkService
          • AddPrivatelinkServiceUser
          • BatchKillProcessList
          • DescribeProcessStatsComposition
          • DescribeTenantAvailableRoZones
          • DescribeTenantAddressInfo
          • ModifyTenantReadonlyReplica
          • DescribeTenantParameters
          • UpdateTenantParameters
          • DescribeTenantParametersHistory
          • ModifyTenantTagList
        • Tenant user management
          • CreateTenantUser
          • DescribeTenantUsers
          • DeleteTenantUsers
          • ModifyTenantUserPassword
          • ModifyTenantUserRoles
        • Database management
          • CreateDatabase
          • DescribeDatabases
          • DeleteDatabases
          • ModifyDatabaseUserRoles
        • Backup and restore
          • DescribeDataBackupSet
          • DescribeRestorableTenants
          • ModifyBackupStrategy
          • CreateTenantRestoreTask
          • CreateDataBackupTask
          • DescribeOneDataBackupSet
        • Database proxy management
          • CreateTenantAddress
          • CreateTenantSingleTunnelSLBAddress
          • DeleteTenantAddress
          • DescribeTenantAddress
          • ModifyOdpClusterSpec
          • ModifyTenantAddressPort
          • ModifyTenantAddressDomainPrefix
          • ConfirmPrivatelinkConnection
          • DescribeTenantAddressInfo
        • Monitoring management
          • DescribeTenantMetrics
          • DescribeMetricsData
          • DescribeNodeMetrics
        • Diagnostic management
          • DescribeOasTopSQLList
          • DescribeOasAnomalySQLList
          • DescribeOasSlowSQLList
          • DescribeOasSQLText
          • DescribeSqlAudits
          • DescribeOutlineBinding
          • DescribeSampleSqlRawTexts
          • DescribeSQLTuningAdvices
          • DescribeOasSlowSQLSamples
          • DescribeOasSQLTrends
          • DescribeOasSQLPlanGroup
        • Security management
          • CreateSecurityIpGroup
          • DescribeInstanceSSL
          • ModifyInstanceSSL
          • DescribeTenantEncryption
          • ModifyTenantEncryption
          • ModifySecurityIps
          • DeleteSecurityIpGroup
          • DescribeTenantSecurityConfigs
          • DescribeInstanceSecurityConfigs
        • Tag management
          • DescribeTags
          • CreateTags
          • UpdateTag
          • DeleteTag
        • Historical event management
          • DescribeOperationEvents
      • Differences between ApsaraDB for OceanBase APIs and OceanBase Cloud APIs
    • Download OBClient
      • Download OBClient
      • Download OceanBase Connector/J
      • Download client ODC
      • Download OceanBase Connector/ODBC
      • Download OBClient Libs
    • ODC User Guide
      • What is ODC?
        • What is ODC?
        • Limitations
      • Quick Start
        • Client ODC
          • Overview
          • Install Client ODC
          • Use Client ODC
        • Web ODC
          • Overview
          • Use Web ODC
      • Data Source Management
        • Create a data source
        • Data sources and project collaboration
        • Database O&M
          • Session management
          • Global variable management
          • Recycle bin management
      • SQL Development
        • Edit and execute SQL statements
        • Perform PL compilation and debugging
        • Edit and export the result set of an SQL statement
        • Execution analysis
        • Generate test data
        • System settings
        • Database objects
          • Table objects
            • Overview
            • Create a table
          • View objects
            • Overview
            • Create a view
            • Manage views
          • Materialized view objects
            • Overview
            • Create a materialized view
            • Manage materialized views
          • Function objects
            • Overview
            • Create a function
            • Manage functions
          • Stored procedure objects
            • Overview
            • Create a stored procedure
            • Manage stored procedures
          • Sequence objects
            • Overview
            • Create a sequence
            • Manage sequences
          • Package objects
            • Overview
            • Create a program package
            • Manage program packages
          • Trigger objects
            • Overview
            • Create a trigger
            • Manage triggers
          • Type objects
            • Overview
            • Create a type
            • Manage types
          • Synonym objects
            • Overview
            • Create a synonym
            • Manage synonyms
      • Import and Export
        • Import schemas and data
        • Export schemas and data
      • Database Change Management
        • User Permission Management
          • Users and roles
          • Automatic authorization
          • User permission management
        • Project collaboration management
        • Risk levels, risk identification rules, and approval processes
        • SQL check specifications
        • SQL window specification
        • Database change management
        • Batch database change management
        • Online schema changes
        • Synchronize shadow tables
        • Schema comparison
      • Data Lifecycle Management
        • Partitioning Plan Management
          • Manage partitioning plans
          • Set partitioning strategies
          • Examples
        • SQL plan task
      • Data Desensitization and Auditing
        • Desensitize data
        • Operation records
      • Notification Management
        • Overview
        • View notification records
        • Manage Notification Channel
          • Create a notification channel
          • View, edit, and delete a notification channel
          • Configure a custom channel
        • Manage notification rules
      • Best Practices
        • Tips for SQL development
        • Explore ODC team workspaces
        • Understanding real-time SQL diagnostics for OceanBase AP
        • OceanBase historical database solutions
        • ODC SQL check for automatic identification of high-risk operations
        • Manage and modify sharded databases and tables via ODC
        • Data masking and control practices
        • Enterprise-level control and collaboration: Safeguard every database change
    • Data Development
      • Overview
      • Workspace management
      • Worksheet management
      • Compute node pool management
      • Workflow management
      • Dashboard management
      • Manage Git repositories
      • SQL development
        • SQL editing and execution
        • Result set editing
        • Execution analysis
        • Database object management
          • Create a table
          • Create a view
          • Create a function
          • Create a stored procedure
        • Session variable management
        • Git integration
      • Sample datasets
      • Data development terms
  • Manage Billing
    • Access billing
    • View monthly bills
    • View payment details
    • View orders
    • Use vouchers for payment
    • View invoices
  • Legal Agreements
    • OceanBase Cloud Services Agreement
    • Service Level Agreement
    • OceanBase Data Processing Addendum
    • Service Level Agreement for OceanBase Cloud Migration Service

    Connect to OceanBase Cloud by using a Druid connection pool

    Last Updated: 2026-04-07 08:08:33

    This topic describes how to use a Druid connection pool together with MySQL Connector/J to build an application on OceanBase Cloud that performs basic database operations: creating a table, inserting, modifying, deleting, and querying data, and dropping the table.

    Download the sample project: druid-mysql-client (Connect to OceanBase Database by using a Druid connection pool, MySQL compatible mode)

    Prerequisites

    • You have registered an OceanBase Cloud account and have created an instance. For details, refer to Create an instance.

    • You have obtained the connection string of the instance. For more information, see Obtain the connection string.

    • You have installed Java Development Kit (JDK) 1.8 and Maven.

    • You have installed Eclipse.

      Note

      This topic uses Eclipse IDE for Java Developers 2022-03 to run the sample code. You can also choose a suitable tool as needed.

    Procedure

    Note

    The following procedure uses Eclipse IDE for Java Developers 2022-03 to compile and run this project on Windows. If you use another operating system or IDE, the procedure may differ slightly.

    Step 1: Import the druid-mysql-client project to Eclipse

    1. Start Eclipse and choose File > Open Projects from File System.

    2. In the dialog box that appears, click Directory, navigate to the directory where the project is located, and click Finish to import the project.

      Note

      When you use Eclipse to import a Maven project, Eclipse automatically detects the pom.xml file in the project, downloads the required dependency libraries based on the dependencies described in the file, and adds them to the project.


    3. View the project.


    Step 2: Modify the database connection information in the druid-mysql-client project

    Modify the database connection information in the db.properties file in the druid-mysql-client/src/main/resources/ directory based on the connection string obtained in the Prerequisites section.

    Here is an example:

    • The endpoint is t5******.********.oceanbase.cloud.
    • The access port is 3306.
    • The name of the database to be accessed is test.
    • The instance account is test_user001.
    • The password is ******.

    The sample code is as follows:

    ...
    url=jdbc:mysql://t5******.********.oceanbase.cloud:3306/test?useSSL=false
    username=test_user001
    password=******
    ...
    
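    The url line is simply the endpoint, port, and database name assembled into a standard MySQL JDBC URL. The following minimal sketch illustrates the mapping; the helper name and the placeholder endpoint are illustrative only and are not part of the sample project.

```java
public class JdbcUrlDemo {
    // Hypothetical helper: assemble the url value for db.properties from the
    // endpoint, port, and database name obtained in the Prerequisites section.
    static String jdbcUrl(String host, int port, String database) {
        return "jdbc:mysql://" + host + ":" + port + "/" + database + "?useSSL=false";
    }

    public static void main(String[] args) {
        // Placeholder endpoint for illustration only.
        System.out.println(jdbcUrl("t5******.********.oceanbase.cloud", 3306, "test"));
        // prints jdbc:mysql://t5******.********.oceanbase.cloud:3306/test?useSSL=false
    }
}
```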

    Step 3: Run the druid-mysql-client project

    1. In the project navigation view, find and expand the druid-mysql-client/src/main/java directory.

    2. Right-click the Main.java file and choose Run As > Java Application.


    3. In the Console window of Eclipse, view the output results.


    Project code

    Click here to download the project code, which is a package named druid-mysql-client.zip.

    Decompress the package to obtain a folder named druid-mysql-client. The directory structure is as follows:

    druid-mysql-client
    ├── src
    │   └── main
    │       ├── java
    │       │   └── com
    │       │       └── example
    │       │           └── Main.java
    │       └── resources
    │           └── db.properties
    └── pom.xml
    

    The files and directories are described as follows:

    • src: the root directory that stores the source code.
    • main: a directory that stores the main code, including the major logic of the application.
    • java: a directory that stores the Java source code.
    • com: a directory that stores the Java package.
    • example: a directory that stores the packages of the sample project.
    • Main.java: the main class of the sample project, which contains the logic for table creation, data insertion, data deletion, data modification, and data query.
    • resources: a directory that stores resource files, including configuration files.
    • db.properties: the configuration file of the connection pool, which contains relevant database connection parameters.
    • pom.xml: the configuration file of the Maven project, which is used to manage project dependencies and build settings.
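    Before walking through the individual files, the sketch below shows how an application can load db.properties and check the keys that Druid's DruidDataSourceFactory.createDataSource(Properties) consumes. The class name and validation logic are illustrative assumptions, not code from the sample project; only standard-library classes are used so the sketch runs without the Druid jar.

```java
import java.io.StringReader;
import java.util.Properties;

public class DbConfigDemo {
    // Illustrative sketch: parse db.properties-style text and verify the keys
    // that DruidDataSourceFactory.createDataSource(Properties) expects.
    static Properties parse(String text) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(text));
        } catch (java.io.IOException e) {
            throw new IllegalStateException(e);
        }
        for (String key : new String[] {"driverClassName", "url", "username", "password"}) {
            if (props.getProperty(key) == null) {
                throw new IllegalStateException("db.properties is missing key: " + key);
            }
        }
        return props;
    }

    public static void main(String[] args) {
        // The sample project loads the real file from the classpath instead, e.g.:
        // Main.class.getClassLoader().getResourceAsStream("db.properties")
        String sample = String.join("\n",
                "driverClassName=com.mysql.jdbc.Driver",
                "url=jdbc:mysql://localhost:3306/test?useSSL=false",
                "username=test_user001",
                "password=secret");
        Properties props = parse(sample);
        System.out.println(props.getProperty("driverClassName"));
        // prints com.mysql.jdbc.Driver
        // With the Druid jar on the classpath, the pool is then created with:
        // javax.sql.DataSource ds = DruidDataSourceFactory.createDataSource(props);
    }
}
```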

    Code in pom.xml

    pom.xml is the configuration file of the Maven project, which defines the dependencies, plug-ins, and build rules of the project. Maven is a Java project management tool that can automatically download dependencies and compile and package projects.

    Perform the following steps to configure the pom.xml file:

    1. Declare the file.

      Declare the file to be an XML file that uses XML standard 1.0 and the character encoding UTF-8.

      The sample code is as follows:

      <?xml version="1.0" encoding="UTF-8"?>
      
    2. Configure namespaces and the POM model version.

      1. xmlns: the default XML namespace for the POM, which is set to http://maven.apache.org/POM/4.0.0.
      2. xmlns:xsi: the XML namespace for XML elements prefixed with xsi, which is set to http://www.w3.org/2001/XMLSchema-instance.
      3. xsi:schemaLocation: the location of an XML schema definition (XSD) file. The value consists of two parts: the default XML namespace (http://maven.apache.org/POM/4.0.0) and the URI of the XSD file (http://maven.apache.org/xsd/maven-4.0.0.xsd).
      4. <modelVersion>: the POM model version used by the POM file, which is set to 4.0.0.

      The sample code is as follows:

      <project xmlns="http://maven.apache.org/POM/4.0.0"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
      
       <!-- Other configurations -->
      
      </project>
      
    3. Configure basic information.

      1. <groupId>: the ID of the group to which the project belongs, which is set to com.example.
      2. <artifactId>: the name of the project, which is set to druid-mysql-client.
      3. <version>: the project version, which is set to 1.0-SNAPSHOT.

      The sample code is as follows:

          <groupId>com.example</groupId>
          <artifactId>druid-mysql-client</artifactId>
          <version>1.0-SNAPSHOT</version>
      
    4. Configure the attributes of the project source file.

      Specify maven-compiler-plugin as the Maven compiler plug-in, and set both the source and target versions of the compiler to Java 8. This means that the project source code is compiled as Java 8 and the resulting bytecode is compatible with the Java 8 runtime environment, ensuring that Java 8 syntax and features are processed correctly when the project is compiled and run.

      Note

      Java 1.8 and Java 8 are different names for the same version.

      The sample code is as follows:

          <build>
              <plugins>
                  <plugin>
                      <groupId>org.apache.maven.plugins</groupId>
                      <artifactId>maven-compiler-plugin</artifactId>
                      <configuration>
                          <source>8</source>
                          <target>8</target>
                      </configuration>
                  </plugin>
              </plugins>
          </build>
      
    5. Configure the components on which the project depends.

      1. Add the mysql-connector-java dependency library for interactions with the database, and configure the following parameters:

        1. <groupId>: the ID of the group to which the dependency belongs, which is set to mysql.
        2. <artifactId>: the name of the dependency, which is set to mysql-connector-java.
        3. <version>: the version of the dependency, which is set to 5.1.40.

        The sample code is as follows:

                <dependency>
                    <groupId>mysql</groupId>
                    <artifactId>mysql-connector-java</artifactId>
                    <version>5.1.40</version>
                </dependency>
        
      2. Add the druid dependency library and configure the following parameters:

        1. <groupId>: the ID of the group to which the dependency belongs, which is set to com.alibaba.
        2. <artifactId>: the name of the dependency, which is set to druid.
        3. <version>: the version of the dependency, which is set to 1.2.8.

        The sample code is as follows:

                <dependency>
                    <groupId>com.alibaba</groupId>
                    <artifactId>druid</artifactId>
                    <version>1.2.8</version>
                </dependency>
        

    Code in db.properties

    db.properties is a sample configuration file of the connection pool. The configuration file contains the URL, username, and password for connecting to the database, and other optional parameters of the connection pool.

    Perform the following steps to configure the db.properties file:

    1. Configure database connection parameters.

      1. Specify the class name of the database driver as com.mysql.jdbc.Driver.
      2. Specify the URL for connecting to the database, including the host IP address, port number, and schema to be accessed.
      3. Specify the username for connecting to the database.
      4. Specify the password for connecting to the database.

      The sample code is as follows:

      driverClassName=com.mysql.jdbc.Driver
      url=jdbc:mysql://$host:$port/$database_name?useSSL=false
      username=$user_name
      password=$password
      

      The parameters are described as follows:

      • $host: the access address of OceanBase Cloud. The value is sourced from the -h parameter in the connection string.
      • $port: the access port of OceanBase Cloud. The value is sourced from the -P parameter in the connection string.
      • $database_name: the name of the database to be accessed. The value is sourced from the -D parameter in the connection string.
      • $user_name: the account name. The value is sourced from the -u parameter in the connection string.
      • $password: the account password. The value is sourced from the -p parameter in the connection string.
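
      For illustration only, here is how those placeholder values are assembled into the url value. buildUrl is a hypothetical helper, not part of the sample project:

```java
public class UrlExample {
    // Hypothetical helper that fills the $host, $port, and $database_name
    // placeholders of the url value in db.properties.
    static String buildUrl(String host, int port, String databaseName) {
        return "jdbc:mysql://" + host + ":" + port + "/" + databaseName + "?useSSL=false";
    }

    public static void main(String[] args) {
        // Example values; substitute the ones from your own connection string.
        System.out.println(buildUrl("10.0.0.1", 3306, "test"));
        // prints jdbc:mysql://10.0.0.1:3306/test?useSSL=false
    }
}
```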
    2. Configure other connection pool parameters.

      1. Specify select 1 as the SQL statement for connection verification.
      2. Set the number of initial connections in the connection pool to 3. When the connection pool is started, three initial connections are created.
      3. Set the maximum number of active connections allowed in the connection pool to 30.
      4. Set the value of the parameter that specifies whether to record logs for abandoned connections to true. This way, when an abandoned connection is recycled, an error log is recorded. We recommend that you set the value to true in a test environment and to false in a production environment to avoid compromising performance.
      5. Set the minimum number of idle connections in the connection pool to 5. When the number of idle connections is less than 5, the connection pool automatically creates new connections.
      6. Set the maximum waiting time for requesting a connection to 1,000 ms. When the application requests a connection from the connection pool, if all connections are occupied, a timeout exception is thrown when the specified waiting time expires.
      7. Set the minimum duration for which a connection can remain idle in the connection pool to 300,000 ms. A connection that has been idle for 300,000 ms (5 minutes) will be recycled.
      8. Set the value of the parameter that specifies whether to recycle abandoned connections to true. An abandoned connection is recycled after the time specified by removeAbandonedTimeout elapses.
      9. Set the timeout value of an abandoned connection to 300s. An abandoned connection that has not been used in 300s (5 minutes) will be recycled.
      10. Set the interval for scheduling the idle connection recycling thread to 10,000 ms. In other words, the idle connection recycling thread is scheduled every 10,000 ms (10s) to recycle idle connections.
      11. Set the value of the parameter that specifies whether to verify a connection when it is requested to false. This can improve the performance. However, the obtained connection may be unavailable.
      12. Set the value of the parameter that specifies whether to verify a connection when it is returned to the connection pool to false. This can improve the performance. However, the connection returned to the connection pool may be unavailable.
      13. Set the value of the parameter that specifies whether to verify idle connections to true. When the value is set to true, the connection pool periodically executes the statement specified by validationQuery to verify the availability of connections.
      14. Set the value of the parameter that specifies whether to enable connection keepalive to false. The value false specifies to disable connection keepalive.
      15. Set the maximum idle period of connections to 60,000 ms. If the idle period of a connection exceeds 60,000 ms (1 minute), the connection keepalive mechanism will check the connection to ensure its availability. If any operation is performed in the connection during the idle period, the idle period is recounted.

      The sample code is as follows:

      validationQuery=select 1
      initialSize=3
      maxActive=30
      logAbandoned=true
      minIdle=5
      maxWait=1000
      minEvictableIdleTimeMillis=300000
      removeAbandoned=true
      removeAbandonedTimeout=300
      timeBetweenEvictionRunsMillis=10000
      testOnBorrow=false
      testOnReturn=false
      testWhileIdle=true
      keepAlive=false
      keepAliveBetweenTimeMillis=60000
      

    Notice

    The actual parameter configurations depend on the project requirements and database characteristics. We recommend that you adjust and configure the parameters based on the actual situation.
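
    Druid's factory reads this file as standard java.util.Properties key-value pairs, so every value arrives as a plain string before Druid converts it. A minimal stdlib-only sketch (no Druid required) of how a fragment of the file parses:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropertiesParseExample {
    // Parse a properties-format string, the same way db.properties is consumed.
    static Properties parse(String text) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(text));
        return props;
    }

    public static void main(String[] args) throws IOException {
        // A fragment of db.properties; lines starting with # are comments.
        Properties props = parse("# Connection Pool Configuration\n"
                + "validationQuery=select 1\n"
                + "initialSize=3\n"
                + "maxActive=30\n");
        // All values are strings until the connection pool converts them.
        System.out.println(props.getProperty("validationQuery")); // prints select 1
        System.out.println(props.getProperty("maxActive"));       // prints 30
    }
}
```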

    General parameters of a Druid connection pool

    Parameter Description
    url The URL for connecting to the database. The value contains the database type, host name, port number, and database name.
    username The username for connecting to the database.
    password The password for connecting to the database.
    driverClassName The class name of the database driver. If the driverClassName parameter is not explicitly configured, Druid automatically identifies the database type (dbType) based on the value of url and selects the corresponding driverClassName. The automatic identification mechanism can reduce the configuration workload and simplify the configuration process. However, if url cannot be correctly parsed or a nonstandard database driver is required, the driverClassName parameter must be explicitly configured to ensure that the correct driver class is loaded.
    initialSize The number of connections created during the initialization of the connection pool. When the application is started, the connection pool will create the specified number of connections.
    maxActive The maximum number of active connections allowed in the connection pool. When the specified value is reached, subsequent connection requests have to wait until a connection is released.
     maxIdle The maximum number of idle connections allowed in the connection pool. When the specified value is reached, excess connections are closed. This parameter is deprecated in Druid.
    minIdle The minimum number of idle connections retained in the connection pool. When the number of idle connections in the connection pool is lower than the specified value, the connection pool will create new connections.
    maxWait The maximum waiting time for requesting a connection, in ms. If you specify a positive value, it indicates the amount of time to wait. An exception is thrown if no connection is obtained when the specified time expires.
     poolPreparedStatements Specifies whether to enable the PreparedStatement Cache (PSCache) mechanism. If you set the value to true, PreparedStatement objects are cached to improve performance. Note that the memory usage of OceanBase Database Proxy (ODP) may constantly increase in this case; properly configure memory parameters and monitor the memory usage to avoid memory leaks or overflow.
    validationQuery The SQL query statement for verifying connections. When a connection is obtained from the connection pool, this query statement is executed to verify its validity.
     timeBetweenEvictionRunsMillis The interval at which the idle connection eviction thread runs, in ms. At each run, connections that have been idle longer than the value of minEvictableIdleTimeMillis are closed.
    minEvictableIdleTimeMillis The minimum duration for which a connection can remain idle in the connection pool, in ms. A connection that has been idle for a period of time exceeding the specified value will be recycled. If you specify a negative value, idle connections will not be recycled.
    testWhileIdle Specifies whether to verify idle connections. If you set the value to true, the statement specified by validationQuery is executed to verify idle connections.
    testOnBorrow Specifies whether to verify a connection when it is obtained. If you set the value to true, the statement specified by validationQuery is executed to verify connections when connections are requested.
    testOnReturn Specifies whether to verify a connection when it is returned to the connection pool. If you set the value to true, the statement specified by validationQuery is executed to verify connections when connections are returned.
     filters A series of predefined filters in the connection pool. Filters preprocess and postprocess connections in a specific order to provide extra features and enhance the performance of the connection pool. Common filters are described as follows:
     1. stat: collects statistics about the performance metrics of the connection pool, such as the number of active connections, number of requests, and number of errors.
     2. wall: acts as an SQL firewall that blocks insecure SQL statements, thereby improving database security.
     3. log4j: outputs logs of the connection pool to log4j to facilitate logging and debugging.
     4. slf4j: outputs logs of the connection pool to slf4j to facilitate logging and debugging.
     5. config: loads configuration information of the connection pool from an external configuration file.
     6. encoding: sets the character encoding used between the connection pool and the database.
     After you configure these filters in the filters parameter, the connection pool applies them in the specified order. Separate filter names with commas (,), for example, filters=stat,wall,log4j.

    Code in Main.java

    The Main.java file is the main program of the sample application in this topic. It demonstrates how to use the data source, connection objects, and various database operation methods to interact with the database.

    Perform the following steps to configure the Main.java file:

    1. Import the required classes and interfaces.

      1. Declare the name of the package to which the current code belongs as com.example.
      2. Import the Java class IOException for handling input/output exceptions.
      3. Import the Java class InputStream for obtaining input streams from files or other sources.
      4. Import the Java interface Connection for representing connections with the database.
      5. Import the Java interface ResultSet for representing datasets of database query results.
      6. Import the Java class SQLException for handling SQL exceptions.
      7. Import the Java interface Statement for executing SQL statements.
      8. Import the Java interface PreparedStatement for representing precompiled SQL statements.
      9. Import the Java class Properties for processing the configuration file.
      10. Import the Java interface DataSource for managing database connections.
      11. Import the DruidDataSourceFactory class of Alibaba Druid for creating Druid data sources.

      The sample code is as follows:

      package com.example;
      
      import java.io.IOException;
      import java.io.InputStream;
      import java.sql.Connection;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;
      import java.sql.PreparedStatement;
      import java.util.Properties;
      import javax.sql.DataSource;
      import com.alibaba.druid.pool.DruidDataSourceFactory;
      
    2. Create a Main class and define a main method.

      Define a Main class and a main method. The main method demonstrates how to use the connection pool to execute a series of operations in the database. Perform the following steps:

      1. Define a public class named Main as the entry to the application. The class name must be the same as the file name.

      2. Define a public static method main as the entry to the application to receive command-line arguments.

      3. Capture and handle possible exceptions by using the exception handling mechanism.

      4. Call the loadPropertiesFile method to load the configuration file and return a Properties object.

      5. Call the createDataSource() method to create a data source object based on the configuration file.

      6. Use the try-with-resources block to obtain a database connection and automatically close the connection after use.

        1. Call the createTable() method to create a table.
        2. Call the insertData() method to insert data.
        3. Call the selectData() method to query data.
        4. Call the updateData() method to update data.
        5. Call the selectData() method again to query the updated data.
        6. Call the deleteData() method to delete data.
        7. Call the selectData() method again to query data after the deletion.
        8. Call the dropTable() method to drop the table.

      The sample code is as follows:

      public class Main {
      
          public static void main(String[] args) {
              try {
                  Properties properties = loadPropertiesFile();
                  DataSource dataSource = createDataSource(properties);
                  try (Connection conn = dataSource.getConnection()) {
                      // Create a table.
                      createTable(conn);
                      // Insert data.
                      insertData(conn);
                      // Query data.
                      selectData(conn);
      
                      // Update data.
                      updateData(conn);
                      // Query the updated data.
                      selectData(conn);
      
                      // Delete data.
                      deleteData(conn);
                      // Query the data after deletion.
                      selectData(conn);
      
                      // Drop the table.
                      dropTable(conn);
                  }
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }
      
          // Define a method for obtaining and using configurations in the configuration file.
          // Define a method for obtaining a data source object.
          // Define a method for creating tables.
          // Define a method for inserting data.
          // Define a method for updating data.
          // Define a method for deleting data.
          // Define a method for querying data.
          // Define a method for dropping tables.
      }
      
    3. Define a method for obtaining and using configurations in the configuration file.

      Define a private static method loadPropertiesFile() for loading the configuration file and returning a Properties object. Perform the following steps:

      1. Define a private static method loadPropertiesFile() that returns a Properties object, and declare that the method can throw an IOException.
      2. Create a Properties object for storing key-value pairs in the configuration file.
      3. Use the try-with-resources block to obtain an input stream named is for the db.properties configuration file by using the class loader.
      4. Call the load method to load attributes in the input stream to the properties object.
      5. Return the properties object.

      The sample code is as follows:

          private static Properties loadPropertiesFile() throws IOException {
              Properties properties = new Properties();
              try (InputStream is = Main.class.getClassLoader().getResourceAsStream("db.properties")) {
                  properties.load(is);
              }
              return properties;
          }
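
      One caveat worth noting: if db.properties is missing from the classpath, getResourceAsStream returns null and properties.load(is) throws a NullPointerException. A defensive variant (an optional refinement, not part of the original sample) fails with a clearer error instead:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ConfigLoader {
    // Variant of loadPropertiesFile() that reports a missing file explicitly.
    static Properties loadPropertiesFile(String resource) throws IOException {
        Properties properties = new Properties();
        try (InputStream is = ConfigLoader.class.getClassLoader().getResourceAsStream(resource)) {
            if (is == null) {
                throw new FileNotFoundException(resource + " was not found on the classpath");
            }
            properties.load(is);
        }
        return properties;
    }

    public static void main(String[] args) {
        try {
            loadPropertiesFile("missing.properties"); // assumed absent here
        } catch (IOException e) {
            System.out.println("Configuration error: " + e.getMessage());
        }
    }
}
```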
      
    4. Define a method for obtaining a data source object.

      Define a private static method createDataSource() for creating a DataSource object based on the configuration file. The object is used to manage and obtain database connections. Perform the following steps:

      1. Define a private static method createDataSource() that receives a Properties object as a parameter, and declare that the method can throw an exception.
      2. Call the createDataSource() method of the DruidDataSourceFactory class, pass in the properties attribute, and return a DataSource object.

      The sample code is as follows:

          private static DataSource createDataSource(Properties properties) throws Exception {
              return DruidDataSourceFactory.createDataSource(properties);
          }
      
    5. Define a method for creating tables.

      Define a private static method createTable() for creating tables in the database. Perform the following steps:

      1. Define a private static method createTable() that receives a Connection object as a parameter, and declare that the method can throw an SQLException.
      2. Call the createStatement() method of the Connection object conn in the try-with-resources block to create a Statement object named stmt.
      3. Define a string variable sql for storing the table creation statement.
      4. Call the executeUpdate() method to execute the SQL statement to create a data table.
      5. Print a message indicating that the table was created.

      The sample code is as follows:

          private static void createTable(Connection conn) throws SQLException {
              try (Statement stmt = conn.createStatement()) {
                  String sql = "CREATE TABLE test_druid (id INT, name VARCHAR(20))";
                  stmt.executeUpdate(sql);
                  System.out.println("Table created successfully.");
              }
          }
      
    6. Define a method for inserting data.

      Define a private static method insertData() for inserting data into the database. Perform the following steps:

      1. Define a private static method insertData() that receives a Connection object as a parameter, and declare that the method can throw an SQLException.

      2. Define a string variable insertDataSql for storing the data insertion statement.

      3. Define an integer variable insertedRows with the initial value 0, which is used to record the number of inserted rows.

      4. Use the prepareStatement() method of the Connection object conn and the data insertion statement in the try-with-resources block to create a PreparedStatement object named insertDataStmt.

      5. Use a for loop to iterate five times and insert five data records.

        1. Call the setInt() method to set the value of the first parameter to the value of the loop variable i.
        2. Call the setString() method to set the value of the second parameter to the string test_insert concatenated with the value of the loop variable i.
        3. Call the executeUpdate() method to execute the data insertion statement and add the number of operated rows to the value of the insertedRows variable.
      6. Print a message indicating that the data was inserted, with the total number of inserted rows displayed.

      7. Return the total number of inserted rows.

      The sample code is as follows:

          private static int insertData(Connection conn) throws SQLException {
              String insertDataSql = "INSERT INTO test_druid (id, name) VALUES (?, ?) ";
              int insertedRows = 0;
              try (PreparedStatement insertDataStmt = conn.prepareStatement(insertDataSql)) {
                  for (int i = 1; i < 6; i++) {
                      insertDataStmt.setInt(1, i);
                      insertDataStmt.setString(2, "test_insert" + i);
                      insertedRows += insertDataStmt.executeUpdate();
                  }
                  System.out.println("Data inserted successfully. Inserted rows: " + insertedRows);
              }
              return insertedRows;
          }
      
    7. Define a method for updating data.

      Define a private static method updateData() for updating data in the database. Perform the following steps:

      1. Define a private static method updateData() that receives a Connection object as a parameter, and declare that the method can throw an SQLException.
      2. Use the prepareStatement() method of the Connection object conn and the data update statement in the try-with-resources block to create a PreparedStatement object named pstmt.
      3. Call the setString() method to set the value of the first parameter to test_update.
      4. Call the setInt() method to set the value of the second parameter to 3.
      5. Call the executeUpdate() method to execute the data update statement and assign the number of operated rows to the updatedRows variable.
      6. Print a message indicating that the data was updated, with the total number of updated rows displayed.

      The sample code is as follows:

          private static void updateData(Connection conn) throws SQLException {
              try (PreparedStatement pstmt = conn.prepareStatement("UPDATE test_druid SET name = ? WHERE id = ?" )) {
                  pstmt.setString(1, "test_update");
                  pstmt.setInt(2, 3);
                  int updatedRows = pstmt.executeUpdate();
                  System.out.println("Data updated successfully. Updated rows: " + updatedRows);
              }
          }
      
    8. Define a method for deleting data.

      Define a private static method deleteData() for deleting data from the database. Perform the following steps:

      1. Define a private static method deleteData() that receives a Connection object as a parameter, and declare that the method can throw an SQLException.
      2. Use the prepareStatement() method of the Connection object conn and the data deletion statement in the try-with-resources block to create a PreparedStatement object named pstmt.
      3. Call the setInt() method to set the value of the first parameter to 3.
      4. Call the executeUpdate() method to execute the data deletion statement and assign the number of operated rows to the deletedRows variable.
      5. Print a message indicating that the data was deleted, with the total number of deleted rows displayed.

      The sample code is as follows:

          private static void deleteData(Connection conn) throws SQLException {
              try (PreparedStatement pstmt = conn.prepareStatement("DELETE FROM test_druid WHERE id < ?" )) {
                  pstmt.setInt(1, 3);
                  int deletedRows = pstmt.executeUpdate();
                  System.out.println("Data deleted successfully. Deleted rows: " + deletedRows);
              }
          }
      
    9. Define a method for querying data.

      Define a private static method selectData() for querying data from the database. Perform the following steps:

      1. Define a private static method selectData() that receives a Connection object as a parameter, and declare that the method can throw an SQLException.

      2. Call the createStatement() method of the Connection object conn in the try-with-resources block to create a Statement object named stmt.

      3. Define a string variable sql for storing the data query statement.

      4. Call the executeQuery() method to execute the data query statement and assign the returned result set to the resultSet variable.

      5. Use a while loop to traverse each row in the result set.

        1. Call the getInt() method to obtain the integer value of the id field in the current row and assign the obtained value to the id variable.
        2. Call the getString() method to obtain the string value of the name field in the current row and assign the obtained value to the name variable.
        3. Print the values of the id and name fields in the current row.

      The sample code is as follows:

          private static void selectData(Connection conn) throws SQLException {
              try (Statement stmt = conn.createStatement()) {
                  String sql = "SELECT * FROM test_druid";
                  ResultSet resultSet = stmt.executeQuery(sql);
                  while (resultSet.next()) {
                      int id = resultSet.getInt("id");
                      String name = resultSet.getString("name");
                      System.out.println("id: " + id + ", name: " + name);
                  }
              }
          }
      
    10. Define a method for dropping tables.

      Define a private static method dropTable() for dropping tables from the database. Perform the following steps:

      1. Define a private static method dropTable() that receives a Connection object as a parameter, and declare that the method can throw an SQLException.
      2. Call the createStatement() method of the Connection object conn in the try-with-resources block to create a Statement object named stmt.
      3. Define a string variable sql for storing the table dropping statement.
      4. Call the executeUpdate() method to execute the table dropping statement.
      5. Print a message indicating that the table was dropped.

      The sample code is as follows:

          private static void dropTable(Connection conn) throws SQLException {
              try (Statement stmt = conn.createStatement()) {
                  String sql = "DROP TABLE test_druid";
                  stmt.executeUpdate(sql);
                  System.out.println("Table dropped successfully.");
              }
          }
      

    Complete code

    pom.xml
    db.properties
    Main.java
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
    
        <groupId>com.example</groupId>
        <artifactId>druid-mysql-client</artifactId>
        <version>1.0-SNAPSHOT</version>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                        <source>8</source>
                        <target>8</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    
        <dependencies>
            <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
                <version>5.1.40</version>
            </dependency>
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>druid</artifactId>
                <version>1.2.8</version>
            </dependency>
        </dependencies>
    </project>
    
    # Database Configuration
    driverClassName=com.mysql.jdbc.Driver
    url=jdbc:mysql://$host:$port/$database_name?useSSL=false
    username=$user_name
    password=$password
    
    # Connection Pool Configuration
     #The SQL statement used to verify connections. In MySQL mode, use select 1; in Oracle mode, use select 1 from dual.
     validationQuery=select 1
     #The number of connections created when the connection pool is initialized.
     initialSize=3
     #The maximum number of active connections allowed in the connection pool.
     maxActive=30
     #Whether to log an error when an abandoned connection is recycled. We recommend true in a test environment and false in a production environment to avoid compromising performance.
     logAbandoned=true
     #The minimum number of idle connections retained in the connection pool.
     minIdle=5
     #The maximum waiting time for requesting a connection, in milliseconds.
     maxWait=1000
     #The minimum duration for which a connection can remain idle before it is eligible for eviction, in milliseconds (300000 ms = 5 minutes).
     minEvictableIdleTimeMillis=300000
     #Whether to recycle abandoned connections after removeAbandonedTimeout elapses.
     removeAbandoned=true
     #The timeout for abandoned connections, in seconds (300 s = 5 minutes). Increase this value if any business operation takes longer than 5 minutes.
     removeAbandonedTimeout=300
     #The interval at which the idle connection eviction thread runs, in milliseconds; also the reference interval for testWhileIdle.
     timeBetweenEvictionRunsMillis=10000
     #Whether to verify a connection when it is requested. false improves performance, but the obtained connection may be unavailable.
     testOnBorrow=false
     #Whether to verify a connection when it is returned to the connection pool. false improves performance.
     testOnReturn=false
     #Whether to verify idle connections. true is recommended: when a connection has been idle longer than timeBetweenEvictionRunsMillis, validationQuery is executed to check its validity, without affecting performance.
     testWhileIdle=true
     #Whether to enable connection keepalive. The default is false; if set to true, connections are checked in the destroy thread every timeBetweenEvictionRunsMillis.
     keepAlive=false
     #The maximum idle period before the keepalive mechanism checks a connection, in milliseconds. Takes effect only when keepAlive is true.
     keepAliveBetweenTimeMillis=60000
    
    package com.example;
    
    import java.io.IOException;
    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.sql.PreparedStatement;
    import java.util.Properties;
    import javax.sql.DataSource;
    import com.alibaba.druid.pool.DruidDataSourceFactory;
    
    public class Main {
    
        public static void main(String[] args) {
            try {
                Properties properties = loadPropertiesFile();
                DataSource dataSource = createDataSource(properties);
                try (Connection conn = dataSource.getConnection()) {
                    // Create a table.
                    createTable(conn);
                    // Insert data.
                    insertData(conn);
                    // Query data.
                    selectData(conn);
    
                    // Update data.
                    updateData(conn);
                    // Query the updated data.
                    selectData(conn);
    
                    // Delete data.
                    deleteData(conn);
                    // Query the data after deletion.
                    selectData(conn);
    
                    // Drop the table.
                    dropTable(conn);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    
    private static Properties loadPropertiesFile() throws IOException {
        Properties properties = new Properties();
        try (InputStream is = Main.class.getClassLoader().getResourceAsStream("db.properties")) {
            if (is == null) {
                throw new IOException("db.properties not found on the classpath.");
            }
            properties.load(is);
        }
        return properties;
    }
    
        private static DataSource createDataSource(Properties properties) throws Exception {
            return DruidDataSourceFactory.createDataSource(properties);
        }
    
        private static void createTable(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                String sql = "CREATE TABLE test_druid (id INT, name VARCHAR(20))";
                stmt.executeUpdate(sql);
                System.out.println("Table created successfully.");
            }
        }
    
        private static int insertData(Connection conn) throws SQLException {
            String insertDataSql = "INSERT INTO test_druid (id, name) VALUES (?, ?) ";
            int insertedRows = 0;
            try (PreparedStatement insertDataStmt = conn.prepareStatement(insertDataSql)) {
                for (int i = 1; i < 6; i++) {
                    insertDataStmt.setInt(1, i);
                    insertDataStmt.setString(2, "test_insert" + i);
                    insertedRows += insertDataStmt.executeUpdate();
                }
                System.out.println("Data inserted successfully. Inserted rows: " + insertedRows);
            }
            return insertedRows;
        }
    
        private static void updateData(Connection conn) throws SQLException {
            try (PreparedStatement pstmt = conn.prepareStatement("UPDATE test_druid SET name = ? WHERE id = ?" )) {
                pstmt.setString(1, "test_update");
                pstmt.setInt(2, 3);
                int updatedRows = pstmt.executeUpdate();
                System.out.println("Data updated successfully. Updated rows: " + updatedRows);
            }
        }
    
        private static void deleteData(Connection conn) throws SQLException {
            try (PreparedStatement pstmt = conn.prepareStatement("DELETE FROM test_druid WHERE id < ?" )) {
                pstmt.setInt(1, 3);
                int deletedRows = pstmt.executeUpdate();
                System.out.println("Data deleted successfully. Deleted rows: " + deletedRows);
            }
        }
    
    private static void selectData(Connection conn) throws SQLException {
        String sql = "SELECT * FROM test_druid";
        try (Statement stmt = conn.createStatement();
             ResultSet resultSet = stmt.executeQuery(sql)) {
            while (resultSet.next()) {
                int id = resultSet.getInt("id");
                String name = resultSet.getString("name");
                System.out.println("id: " + id + ", name: " + name);
            }
        }
    }
    
        private static void dropTable(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                String sql = "DROP TABLE test_druid";
                stmt.executeUpdate(sql);
                System.out.println("Table dropped successfully.");
            }
        }
    }
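Every method above uses try-with-resources, which closes resources in the reverse order of their declaration, so a ResultSet or Statement is always closed before the Connection that produced it. The standalone sketch below (using a hypothetical Tracked resource rather than JDBC) demonstrates that ordering:

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {

    static final List<String> closeOrder = new ArrayList<>();

    // A stand-in resource that records when it is closed.
    static class Tracked implements AutoCloseable {
        final String name;
        Tracked(String name) { this.name = name; }
        @Override
        public void close() { closeOrder.add(name); }
    }

    public static void main(String[] args) {
        // Declare the "connection" first and the "statement" second,
        // mirroring the structure of the methods above.
        try (Tracked conn = new Tracked("connection");
             Tracked stmt = new Tracked("statement")) {
            // work with the resources here
        }
        // Resources close in reverse declaration order.
        System.out.println(closeOrder);
    }
}
```

This is why the listing never calls close() explicitly: the runtime guarantees cleanup even when an exception is thrown inside the block.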
    

    What is on this page
    Prerequisites
    Procedure
    Step 1: Import the druid-mysql-client project to Eclipse
    Step 2: Modify the database connection information in the druid-mysql-client project
    Step 3: Run the druid-mysql-client project
    Project code
    Code in pom.xml
    Code in db.properties
    Code in Main.java
    Complete code