
OceanBase

A unified distributed database ready for your transactional, analytical, and AI workloads.

DEPLOY YOUR WAY

OceanBase Cloud

The best way to deploy and scale OceanBase

OceanBase Enterprise

Run and manage OceanBase on your own infrastructure

TRY OPEN SOURCE

OceanBase Community Edition

The free, open-source distributed database

OceanBase seekdb

An open-source, AI-native search database

Customer Stories

Real-world success stories from enterprises across diverse industries.

View All
BY USE CASES

Mission-Critical Transactions

Global & Multicloud Applications

Elastic Scaling for Peak Traffic

Real-time Analytics

Active Geo-redundancy

Database Consolidation

Resources

Comprehensive knowledge hub for OceanBase.

Blog

Live Demos

Training & Certification

Documentation

Official technical guides, tutorials, API references, and manuals for all OceanBase products.

View All
PRODUCTS

OceanBase Cloud

OceanBase Database

Tools

Connectors and Middleware

QUICK START

OceanBase Cloud

OceanBase Database

BEST PRACTICES

Practical guides for using OceanBase effectively.

Company

Learn more about OceanBase – our company, partnerships, and trust and security initiatives.

About OceanBase

Partner

Trust Center

Contact Us


All Products
    • Databases
    • OceanBase Database
    • OceanBase Cloud
    • OceanBase TuGraph
    • Interactive Tutorials
    • OceanBase Best Practices
    • Tools
    • OceanBase Cloud Platform
    • OceanBase Migration Service
    • OceanBase Developer Center
    • OceanBase Migration Assessment
    • OceanBase Admin Tool
    • OceanBase Loader and Dumper
    • OceanBase Deployer
    • Kubernetes operator for OceanBase
    • OceanBase Diagnostic Tool
    • OceanBase Binlog Service
    • Connectors and Middleware
    • OceanBase Database Proxy
    • Embedded SQL in C for OceanBase
    • OceanBase Call Interface
    • OceanBase Connector/C
    • OceanBase Connector/J
    • OceanBase Connector/ODBC
    • OceanBase Connector/NET

OceanBase Cloud

  • Product Updates & Announcements
    • What's new
      • Release notes for 2026
      • Release notes for 2025
      • Release notes for 2024
      • Release history
    • Product announcements
      • Optimization of Backup and Restore commercialization strategy
      • Cross-AZ data transfer billing (OceanBase Cloud on AWS)
      • Database Proxy pricing update
      • AWS instance pricing adjustment
  • Product Introduction
    • Overview
    • Management mode and scenarios
    • Core features
      • High availability with cross-cloud active-active architecture
      • High availability with cross-cloud primary-standby databases
      • Multi-level caching in shared storage
      • Multi-layer online scaling and on-demand adjustment
    • Deployment modes
    • Storage architecture
    • Product specifications
    • Product billing
      • Overview
      • Instance billing
        • Tencent Cloud instance billing
        • Alibaba Cloud instance billing
        • Huawei Cloud instance billing
        • AWS instance billing
        • GCP instance billing
      • Backup and restore billing
      • SQL audit billing
      • Migrations billing
      • Database proxy billing
      • Binlog service billing
      • Overview of OceanBase Cloud support plans
      • Read-only replica billing
    • Supported database versions
  • Get Started
    • Get started with a transactional instance
    • Get started with an analytical instance
    • Get started with a Key-Value instance
  • Work with Transactional Instances
    • Overview
    • Create an instance
      • Overview
      • Create via OceanBase Cloud official website
      • Create via AWS Marketplace
      • Create via GCP Marketplace
      • Create via Huawei Cloud Marketplace
      • Create via Alibaba Cloud Marketplace
      • Create via Azure Marketplace
    • Connect to an instance
      • MySQL compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using Alibaba Cloud VPC
          • Connect using a public IP address
          • Connect using a Huawei Cloud peering connection
        • Connect with clients
          • Connect to OceanBase Cloud by using Client ODC
          • Connect to OceanBase Cloud by using a MySQL client
          • Connect to OceanBase Cloud by using OBClient
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using Spring Boot
            • Connect to OceanBase Cloud using Spring Batch
            • Connect to OceanBase Cloud using Spring JDBC
            • Connect to OceanBase Cloud using Spring Data JPA
            • Connect to OceanBase Cloud using Hibernate
            • Sample program for connecting to OceanBase Cloud
            • Connect to OceanBase Cloud using Connector/J
            • Connect to OceanBase Cloud using Testcontainers
          • Python
            • Connect to OceanBase Cloud using mysqlclient
            • Connect to OceanBase Cloud using PyMySQL
            • Connect to OceanBase Cloud using MySQL Connector/Python
            • Connect to OceanBase Cloud using SQLAlchemy
            • Connect to OceanBase Cloud using Django
            • Connect to OceanBase Cloud using peewee
          • C
            • Use MySQL Connector/C to connect to OceanBase Cloud
          • Go
            • Connect to OceanBase Cloud using the Go-SQL-Driver/MySQL driver
            • Connect to OceanBase Cloud using GORM
          • PHP
            • Use the EXT driver to connect to OceanBase Cloud
            • Connect to OceanBase Cloud by using the MySQLi driver
            • Use the PDO driver to connect to OceanBase Cloud
          • Rust
            • Rust application example for connecting to OceanBase Cloud
            • SeaORM example for connecting to OceanBase Cloud
          • Ruby
            • ActiveRecord sample application for OceanBase Cloud
            • Connect to OceanBase Cloud by using mysql2
            • Connect to OceanBase Cloud by using Sequel
        • Use database connection pool
          • Database connection pool configuration
          • Connect to OceanBase Cloud by using a Tomcat connection pool
          • Connect to OceanBase Cloud by using a C3P0 connection pool
          • Connect to OceanBase Cloud by using a Proxool connection pool
          • Connect to OceanBase Cloud by using a HikariCP connection pool
          • Connect to OceanBase Cloud by using a DBCP connection pool
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
      • Oracle compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using a public IP address
        • Connect with clients
          • Connect to OceanBase Cloud by using OBClient
          • Connect to OceanBase Cloud by using Client ODC
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using OceanBase Connector/J
            • Connect to OceanBase Cloud by using Spring Boot
            • Connect to OceanBase Cloud by using Spring Batch
            • Connect to OceanBase Cloud using Spring JDBC
            • Connect to OceanBase Cloud by using Spring Data JPA
            • Connect to OceanBase Cloud by using Hibernate
            • Use MyBatis to connect to OceanBase Cloud
            • Use JFinal to connect to OceanBase Cloud
          • Python
            • Python Driver for Oracle Mode
          • C
            • Connect to OceanBase Cloud using OceanBase Connector/C
            • Connect to OceanBase Cloud using OceanBase Connector/ODBC
            • Use SqlSugar to connect to OceanBase Cloud
        • Use database connection pool
          • Database connection pool configuration
          • Connect to OceanBase Cloud by using a Tomcat connection pool
          • Connect to OceanBase Cloud by using a C3P0 connection pool
          • Connect to OceanBase Cloud by using a Proxool connection pool
          • Connect to OceanBase Cloud by using a HikariCP connection pool
          • Connect to OceanBase Cloud by using a DBCP connection pool
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
    • Developer guide
      • MySQL compatible mode
        • Plan database objects
          • Create a database
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use the CASE conditional operator in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
            • Use the SELECT ... LOCK IN SHARE MODE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
            • Release a savepoint
          • Commit a transaction
          • Roll back a transaction
      • Oracle compatible mode
        • Plan database objects
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use CASE functions in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
          • Commit a transaction
          • Roll back a transaction
    • Manage instances
      • Manage instances
        • View the instance list
        • Instance overview
        • Stop and restart instances
        • Unit migration
      • Manage tenants
        • Tenant overview
        • Create a tenant
        • Modify tenant specifications
        • Modify tenant names
        • Add an endpoint
        • Resource isolation
          • Overview
          • Manage resource groups
            • Create a resource group
            • View a resource group
            • Edit a resource group
            • Delete a resource group
          • Manage isolation rules
            • Create an isolation rule
            • View isolation rules
            • Edit an isolation rule
            • Delete an isolation rule
        • Modify primary zone
        • Modify the maximum number of connections for a tenant proxy
        • Monitor tenant performance
          • Overview
          • View performance and SQL monitoring details
          • View transaction monitoring details
          • View storage and cache monitoring details
          • View Binlog service monitoring
          • Customize a monitoring dashboard for a tenant
        • Diagnostics
          • Real-time diagnostics
            • SQL diagnostics
              • Top SQL
              • Slow SQL
              • Suspicious SQL
              • High-risk SQL
            • SQL audit
        • Manage tenant parameters
          • Manage tenant parameters
          • Parameters for tenants
          • Parameter template overview
        • Delete a tenant
        • Manage databases and accounts
          • Create accounts
          • Manage accounts
          • Create a database (MySQL compatible mode)
          • Manage databases (MySQL compatible mode)
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor multidimensional metrics of an instance
        • Monitor the performance of hosts in an instance
        • Monitor database proxy
        • Monitor database proxy hosts
        • Monitor cross-cloud network performance
        • Customize a monitoring dashboard for an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Manage instance parameters
        • Manage parameters
        • Parameters for cluster instances
      • Change instance configurations
        • Enable storage auto-scaling
        • View history of configuration changes
        • Change configuration
        • Change configuration temporarily
        • Switch the deployment mode
      • Manage standby instances
        • Overview
        • Create a standby instance
        • Create a cross-cloud standby instance
        • Create a standby instance for an Alibaba Cloud primary instance
        • View details of primary and standby instances
        • Configure global endpoint
        • Enable automatic forwarding for write requests of standby databases
        • Primary-standby instance switchover
        • Initiate failover
        • Detach a standby instance
        • Release a standby instance
      • Release an instance
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Backup and restore
        • Overview
        • Backup strategy
        • Initiate a backup immediately
        • Data backup
        • Initiate a restore
        • Data restore
        • Restore data from the instance recycle bin
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • One-click diagnostics
          • Initiate one-click diagnostics
          • View one-click diagnostic report
            • Exceptions
            • Real-time diagnostics
            • Optimization suggestions
            • Capacity management
            • Security management
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Request analysis
            • Request analysis
        • Root cause diagnostics
          • Exception handling
          • Enable system autonomy
        • SQL audit
        • Materialized view analysis
        • Optimization center
          • Optimization suggestions
          • Manage active outlines
          • SQL review
          • View the optimization history
      • Manage tags
      • Manage read-only replicas
        • Overview
        • Instance read-only replicas
          • Add a read-only replica to an instance
          • View read-only replicas of an instance
          • Manage read-only replicas of an instance
          • Delete a read-only replica of an instance
        • Tenant read-only replicas
          • Add a read-only replica to a tenant
          • View read-only replicas of a tenant
          • Manage read-only replicas of a tenant
          • Delete a read-only replica of a tenant
      • Manage JVM-dependent services
    • Data source management
      • Create a data source
      • Manage data sources
      • User privileges
        • User privileges for compatibility assessment
        • User privileges for data migration
        • User privileges for performance assessment
        • User privileges for data archiving
        • User privileges for data cleanup
      • Connect via private network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
        • Private IP address segments
      • Connect via public network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Manage recycle bin
      • Instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance-level recycle bin
        • Tenant-level recycle bin
  • Work with Analytical Instances
    • Overview
    • Core features
    • Create an instance
    • Connect to an instance
      • Overview
      • Get connection string
        • Overview
        • Connect using AWS PrivateLink
        • Connect using a public IP address
      • Connect with clients
        • Connect to OceanBase Cloud by using Client ODC
        • Connect to OceanBase Cloud by using a MySQL client
        • Connect to OceanBase Cloud by using OBClient
      • Connect with drivers
        • Java
          • Connect to OceanBase Cloud by using Spring Boot
          • Connect to OceanBase Cloud by using Spring Batch
          • Connect to OceanBase Cloud by using Spring Data JDBC
          • Connect to OceanBase Cloud by using Spring Data JPA
          • Connect to OceanBase Cloud by using Hibernate
          • Connect to OceanBase Cloud by using MyBatis
          • Connect to OceanBase Cloud using MySQL Connector/J
        • Python
          • Connect to OceanBase Cloud by using mysqlclient
          • Connect to OceanBase Cloud by using PyMySQL
          • Connect to OceanBase Cloud using MySQL Connector/Python
        • C
          • Connect to OceanBase Cloud using MySQL Connector/C
        • Go
          • Connect to OceanBase Cloud using Go-SQL-Driver/MySQL
        • PHP
          • Connect to OceanBase Cloud using PHP
      • Use database connection pool
        • Database connection pool configuration
        • Connect to OceanBase Cloud by using a Tomcat connection pool
        • Connect to OceanBase Cloud by using a C3P0 connection pool
        • Connect to OceanBase Cloud by using a Proxool connection pool
        • Connect to OceanBase Cloud by using a HikariCP connection pool
        • Connect to OceanBase Cloud by using a DBCP connection pool
        • Connect to OceanBase Cloud by using Commons Pool
        • Connect to OceanBase Cloud by using a Druid connection pool
    • Data table design
      • Table overview
      • Best practices
        • Unit 1: Best practices for optimizing storage structures and query performance
        • Unit 2: Best practices for creating special indexes
    • Export data
    • OceanBase data processing
    • Query acceleration
      • Statistics
      • Materialized views for query acceleration
      • Select a query parallelism level
    • Manage instances
      • Instance overview
      • Change configuration
      • Modify primary zone
      • Manage parameters
      • Backup and restore
        • Backup overview
        • Backup strategies
        • Immediate backup
        • Data backup
        • Initiate restore
        • Data restore
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor the performance of hosts in an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Optimization management
            • Manage active outlines
            • View the optimization history
          • Request analysis
            • Request analysis
      • Stop and restart instances
      • Release instances
      • Manage databases and accounts
        • Create and manage accounts
        • Create a database
        • Manage databases
      • Manage tags
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Performance diagnosis and tuning
      • Use the DBMS_XPLAN package for performance diagnostics
      • Use the GV$SQL_PLAN_MONITOR view for performance analysis
      • Views related to AP performance analysis
    • Performance testing
    • Product integration
    • Manage recycle bin
      • View instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance recycle bin
  • Work with Key-Value Instances
    • Try out Key-Value instances
      • Create an instance
      • Create a tenant
      • Create an account for a database user
      • OBKV HBase data operation examples
    • Use Table model
      • Create an instance
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-Table
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
    • Use HBase model
      • OBKV-HBase Overview
      • Create an instance
      • Develop in HBase model
        • Connect to an instance by using the OBKV-HBase client
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-HBase
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
      • Performance test
    • Connect Key-Value instances
      • Overview
      • Connect using a public IP address
  • Migrations
    • Data migration and import solutions
    • Data assessment and migration quick start
    • Assess compatibility
      • Overview
      • Perform online assessment
      • Perform offline assessment
      • Manage compatibility assessment tasks
        • View a compatibility assessment task
        • View and download a compatibility assessment report
        • Stop a compatibility assessment task
        • Delete a compatibility assessment task
      • Obtain files for upload
      • Configure PrivateLink
      • Add an IP address to an allowlist
    • Migrate data
      • Overview
      • Migrations specification
      • Purchase a data migration instance
      • Migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database
      • Migrate data from a MySQL-compatible tenant of OceanBase Database to a MySQL database
      • Migrate data between OceanBase database tenants of the same compatibility mode
      • Migrate data between OceanBase database tenants of different compatibility modes
      • Migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database
      • Migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database
      • Configure a two-way synchronization task
      • Migrate data from an OceanBase database to a Kafka instance
      • Migrate data from a TiDB database to a MySQL-compatible tenant of OceanBase Database
      • Migrate incremental data from a MySQL-compatible tenant of OceanBase Database to a TiDB Database
      • Migrate data from a PostgreSQL database to an OceanBase database
      • Migrate incremental data from an OceanBase Database to a PostgreSQL database
      • Manage data migration tasks
        • View details of a data migration task
        • Rename a data migration task
        • View and modify migration objects
        • View and modify migration parameters
        • Configure alert monitoring
        • Manage data migration tasks by using tags
        • Start, stop, and resume a data migration task
        • Clone a data migration task
        • Terminate and release a data migration task
      • Features
        • Custom DML/DDL configurations
        • DDL synchronization scope
        • Use SQL conditions to filter data
        • Rename a migration object
        • Set an incremental synchronization timestamp
        • Instructions on schema migration
        • Configure and modify matching rules
        • Wildcard rules
        • Import migration objects
        • Download conflict data
        • Change a topic
        • Column filtering
        • Data formats
      • Authorize an Alibaba Cloud account
      • SQL statements for querying table objects
      • Online DDL tools
      • Create a trigger
      • Modify the log level of a self-managed PostgreSQL instance
      • Supported DDL statements for synchronization and their limitations
        • DDL synchronization from Aurora MySQL DB clusters to MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from MySQL-compatible tenants of OceanBase Database to Aurora MySQL DB clusters
        • DDL synchronization between MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle databases to Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle-compatible tenants of OceanBase Database to Oracle databases
        • DDL synchronization between Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from OceanBase databases to Kafka instances
    • Data subscription
      • Create a data subscription task
      • Manage data subscription tasks
        • View details of a data subscription task
        • Configure subscription information
        • Modify the name of a data subscription task
        • View and modify subscription objects
        • View data subscription parameters
        • Set up data subscription alerts
        • Start, stop, and resume data subscription tasks
        • Clone a data subscription task
        • Release a data subscription task
      • Manage private connections for data subscriptions
      • Configure consumer subscription
      • Message formats
    • Data validation
      • Overview
      • Create a data validation task
      • Manage data validation tasks
        • View details of a data validation task
        • View and modify validation objects
        • View and modify validation parameters
        • Manage data validation tasks with tags
        • Start, pause, and resume data validation tasks
        • Clone a data validation task
        • Release a data validation task
      • Features
        • Import validation objects
        • Rename the validation object
        • Filter objects by using SQL conditions
        • Configure the matching rules for the validation object
    • Assess performance
      • Overview
      • Obtain traffic files from a database instance
      • Create a full performance assessment task
      • Create an SQL file parsing task
      • Create an SQL file replay task
      • Manage performance assessment tasks
        • View the details of a performance assessment task
        • View a performance assessment report
        • Retry and stop a performance assessment task
        • Delete a performance assessment task
      • Obtain a database instance
      • Create an access key
    • Import data
      • Import data
      • Direct load
      • Supported file formats and encoding formats for Data Import
      • Sample data introduction
    • Binlog service
      • Overview
      • Purchase the Binlog service
      • Manage Binlog Service
        • View details of the Binlog service
        • Change configuration
        • Modify the auto-scaling strategy for storage space
        • Modify the elasticity strategy for compute units
        • Disable the Binlog service
  • Security
    • OceanBase Cloud account settings
      • Modify login password
      • Multi-factor authentication
      • Manage AccessKeys
      • Time zone settings
      • Manage cloud marketplace accounts
      • Account audit
    • Organizations and projects
      • Overview
      • Manage organization information
      • Project management
        • Manage projects
        • Cross-project bidirectional authorization
        • Subscribe to project messages
      • Manage members
      • Permissions for roles
      • Cost management
        • Overview
        • Cost details
        • Manage cost units
      • Operation audit
    • Database accounts and privileges
      • Account privileges
      • Authorize cloud vendor accounts
      • AWS KMS key management
      • Support access control
    • Security and encryption
      • Set allowlist groups
      • SSL encryption
      • Transparent Data Encryption (TDE)
    • Monitoring dashboard
    • Events
  • SQL Console
    • Overview
    • Access SQL Console
    • SQL editing and execution
    • PL compilation
    • Result set editing
    • Execution analysis
    • Database object management
      • Create a table
      • Create a view
      • Create a function
      • Create a stored procedure
      • Create a program package
      • Create a trigger
      • Create a type
      • Create a sequence
      • Create a synonym
    • Session variable management
    • Functional keys in SQL Console
  • Integrations
    • Overview
    • Schema evolution
      • Liquibase
      • Flyway
    • Data ingestion
      • Canal
      • dbt
      • Debezium
      • Flink
      • Glue
      • Informatica Cloud
      • Kafka
      • Maxwell
      • SeaTunnel
      • DataWorks
      • NiFi
    • SQL development
      • DataGrip
      • DBeaver
      • Navicat
      • TablePlus
    • Orchestration
      • DolphinScheduler
      • Linkis
      • Airflow
    • Visualization
      • Grafana
      • Power BI
      • Quick BI
      • Superset
      • Tableau
    • Observability
      • Datadog
      • Prometheus
    • Database management
      • Bytebase
    • AI
      • LlamaIndex
      • Dify
      • LangChain
      • Tongyi Qianwen
      • OpenAI
      • n8n
      • Trae
      • SpringAI
      • Cline
      • Cursor
      • Continue
      • Toolbox
      • CamelAI
      • Firecrawl
      • Hugging Face
      • Ollama
      • Google Gemini
      • Cloudflare Workers AI
      • Jina AI
      • Augment Code
      • Claude Code
      • Kiro
    • Development tools
      • Cloudflare Workers
      • Vercel
  • Best practices
    • Best practices for achieving high availability through cross-cloud active-active deployment
    • High availability through cross-cloud primary-standby databases (1:1)
    • High availability through cross-cloud primary-standby databases (1:n)
    • High host CPU usage
    • Best practices for read/write splitting in OceanBase Cloud
  • References
    • System architecture
    • System management
    • Database object management
    • Database design and specification constraints
    • SQL reference
    • System views
    • Parameters and system variables
    • Error codes
    • Performance tuning
    • Open API References
      • Overview
      • Service endpoints
      • Using API
      • Open API List
        • Cluster management
          • DescribeInstances
          • DescribeInstance
          • CreateInstance
          • DeleteInstance
          • ModifyInstanceName
          • DescribeNodeOptions
          • StopCluster
          • StartCluster
          • ModifyInstanceSpec
          • DescribeInstanceTopology
          • DescribeReadonlyInstances
          • CreateReadonlyInstance
          • ModifyReadonlyInstanceSpec
          • ModifyReadonlyInstanceDiskSize
          • ModifyReadonlyInstanceNodeNum
          • DeleteReadonlyInstance
          • DescribeInstanceAvailableRoZones
          • DescribeInstanceparameters
          • UpdateInstanceParameters
          • DescribeInstanceparametersHistory
          • ModifyInstanceTagList
          • ModifyInstanceNodeNum
        • Tenant management
          • DescribeTenants
          • DescribeTenant
          • CreateTenants
          • DeleteTenants
          • ModifyTenantName
          • ModifyTenant
          • ModifyTenantUserDescription
          • ModifyTenantUserStatus
          • GetTenantCreateConstraints
          • ModifyTenantPrimaryZone
          • GetTenantCreateCpuConstraints
          • GetTenantCreateMemConstraints
          • GetTenantModifyCpuConstraints
          • GetTenantModifyMemConstraints
          • CreateTenantSecurityIpGroup
          • DescribeTenantSecurityIpGroups
          • ModifyTenantSecurityIpGroup
          • DeleteTenantSecurityIpGroup
          • DescribeTenantPrivateLink
          • DeletePrivatelinkConnection
          • CreatePrivatelinkService
          • ConnectPrivatelinkService
          • AddPrivatelinkServiceUser
          • BatchKillProcessList
          • DescribeProcessStatsComposition
          • DescribeTenantAvailableRoZones
          • DescribeTenantAddressInfo
          • ModifyTenantReadonlyReplica
          • DescribeTenantParameters
          • UpdateTenantParameters
          • DescribeTenantParametersHistory
          • ModifyTenantTagList
        • Tenant user management
          • CreateTenantUser
          • DescribeTenantUsers
          • DeleteTenantUsers
          • ModifyTenantUserPassword
          • ModifyTenantUserRoles
        • Database management
          • CreateDatabase
          • DescribeDatabases
          • DeleteDatabases
          • ModifyDatabaseUserRoles
        • Backup and restore
          • DescribeDataBackupSet
          • DescribeRestorableTenants
          • ModifyBackupStrategy
          • CreateTenantRestoreTask
          • CreateDataBackupTask
          • DescribeOneDataBackupSet
        • Database proxy management
          • CreateTenantAddress
          • CreateTenantSingleTunnelSLBAddress
          • DeleteTenantAddress
          • DescribeTenantAddress
          • ModifyOdpClusterSpec
          • ModifyTenantAddressPort
          • ModifyTenantAddressDomainPrefix
          • ConfirmPrivatelinkConnection
          • DescribeTenantAddressInfo
        • Monitoring management
          • DescribeTenantMetrics
          • DescribeMetricsData
          • DescribeNodeMetrics
        • Diagnostic management
          • DescribeOasTopSQLList
          • DescribeOasAnomalySQLList
          • DescribeOasSlowSQLList
          • DescribeOasSQLText
          • DescribeSqlAudits
          • DescribeOutlineBinding
          • DescribeSampleSqlRawTexts
          • DescribeSQLTuningAdvices
          • DescribeOasSlowSQLSamples
          • DescribeOasSQLTrends
          • DescribeOasSQLPlanGroup
        • Security management
          • CreateSecurityIpGroup
          • DescribeInstanceSSL
          • ModifyInstanceSSL
          • DescribeTenantEncryption
          • ModifyTenantEncryption
          • ModifySecurityIps
          • DeleteSecurityIpGroup
          • DescribeTenantSecurityConfigs
          • DescribeInstanceSecurityConfigs
        • Tag management
          • DescribeTags
          • CreateTags
          • UpdateTag
          • DeleteTag
        • Historical event management
          • DescribeOperationEvents
      • Differences between ApsaraDB for OceanBase APIs and OceanBase Cloud APIs
    • Download OBClient
      • Download OBClient
      • Download OceanBase Connector/J
      • Download client ODC
      • Download OceanBase Connector/ODBC
      • Download OBClient Libs
    • ODC User Guide
      • What is ODC?
        • What is ODC?
        • Limitations
      • Quick Start
        • Client ODC
          • Overview
          • Install Client ODC
          • Use Client ODC
        • Web ODC
          • Overview
          • Use Web ODC
      • Data Source Management
        • Create a data source
        • Data sources and project collaboration
        • Database O&M
          • Session management
          • Global variable management
          • Recycle bin management
      • SQL Development
        • Edit and execute SQL statements
        • Perform PL compilation and debugging
        • Edit and export the result set of an SQL statement
        • Execution analysis
        • Generate test data
        • System settings
        • Database objects
          • Table objects
            • Overview
            • Create a table
          • View objects
            • Overview
            • Create a view
            • Manage views
          • Materialized view objects
            • Overview
            • Create a materialized view
            • Manage materialized views
          • Function objects
            • Overview
            • Create a function
            • Manage functions
          • Stored procedure objects
            • Overview
            • Create a stored procedure
            • Manage stored procedures
          • Sequence objects
            • Overview
            • Create a sequence
            • Manage sequences
          • Package objects
            • Overview
            • Create a program package
            • Manage program packages
          • Trigger objects
            • Overview
            • Create a trigger
            • Manage triggers
          • Type objects
            • Overview
            • Create a type
            • Manage types
          • Synonym objects
            • Overview
            • Create a synonym
            • Manage synonyms
      • Import and Export
        • Import schemas and data
        • Export schemas and data
      • Database Change Management
        • User Permission Management
          • Users and roles
          • Automatic authorization
          • User permission management
        • Project collaboration management
        • Risk levels, risk identification rules, and approval processes
        • SQL check specifications
        • SQL window specification
        • Database change management
        • Batch database change management
        • Online schema changes
        • Synchronize shadow tables
        • Schema comparison
      • Data Lifecycle Management
        • Partitioning Plan Management
          • Manage partitioning plans
          • Set partitioning strategies
          • Examples
        • SQL plan task
      • Data Desensitization and Auditing
        • Desensitize data
        • Operation records
      • Notification Management
        • Overview
        • View notification records
        • Manage Notification Channel
          • Create a notification channel
          • View, edit, and delete a notification channel
          • Configure a custom channel
        • Manage notification rules
      • Best Practices
        • Tips for SQL development
        • Explore ODC team workspaces
        • Understanding real-time SQL diagnostics for OceanBase AP
        • OceanBase historical database solutions
        • ODC SQL check for automatic identification of high-risk operations
        • Manage and modify sharded databases and tables via ODC
        • Data masking and control practices
        • Enterprise-level control and collaboration: Safeguard every database change
    • Data Development
      • Overview
      • Workspace management
      • Worksheet management
      • Compute node pool management
      • Workflow management
      • Dashboard management
      • Manage Git repositories
      • SQL development
        • SQL editing and execution
        • Result set editing
        • Execution analysis
        • Database object management
          • Create a table
          • Create a view
          • Create a function
          • Create a stored procedure
        • Session variable management
        • Git integration
      • Sample datasets
      • Data development terms
  • Manage Billing
    • Access billing
    • View monthly bills
    • View payment details
    • View orders
    • Use vouchers for payment
    • View invoices
  • Legal Agreements
    • OceanBase Cloud Services Agreement
    • Service Level Agreement
    • OceanBase Data Processing Addendum
    • Service Level Agreement for OceanBase Cloud Migration Service


    Connect to OceanBase Cloud by using a Proxool connection pool

    Last Updated: 2026-04-07 08:08:33

    This topic describes how to use a Proxool connection pool, MySQL Connector/J, and OceanBase Cloud to build an application for basic database operations, such as table creation, data insertion, data deletion, data modification, and data query.

    Download the proxool-mysql-client sample project. Related topic: Connect to OceanBase Database by using a Proxool connection pool (MySQL compatible mode).

    Prerequisites

    • You have registered an OceanBase Cloud account and have created an instance. For details, refer to Create an instance.

    • You have obtained the connection string of the instance. For more information, see Obtain the connection string.

    • You have installed Java Development Kit (JDK) 1.8 and Maven.

    • You have installed IntelliJ IDEA.

      Note

      This topic uses IntelliJ IDEA Community Edition 2021.3.2 to run the sample code. You can also choose a suitable tool as needed.

    Procedure

    Note

    The following procedure uses IntelliJ IDEA Community Edition 2021.3.2 to compile and run this project on Windows. If you use another operating system or compiler, the procedure may differ slightly.

    Step 1: Import the proxool-mysql-client project to IntelliJ IDEA

    1. Start IntelliJ IDEA.

    2. On the welcome page, click Open, navigate to the directory where the project is located, select the root directory of the project, and click OK.


    3. IntelliJ IDEA automatically detects the project type and loads the project.

      Note

      When you use IntelliJ IDEA to import a Maven project, IntelliJ IDEA automatically detects the pom.xml file in the project, downloads the required dependency libraries based on the dependencies described in the file, and adds them to the project.


    4. (Optional) Manually import the unparsed dependencies.

      If the dependencies in the pom.xml file are automatically imported to the project, ignore this step.

      If the Sync pane of IntelliJ IDEA shows that the proxool-cglib and proxool dependencies cannot be resolved, add them manually. The .jar files of these dependencies are located in the lib folder in the root directory of the proxool-mysql-client project. Perform the following steps to add these files to the project:

      1. In IntelliJ IDEA, choose File > Project Structure.
      2. In the left-side pane, click Modules.
      3. In the right-side pane, click the Dependencies tab. In the upper-right corner of this tab, click the plus sign (+) and select JARs or directories.
      4. In the dialog box that appears, navigate to the lib directory where the .jar files are stored, select the .jar files, and click OK.
      5. The added .jar files appear in the list on the Dependencies tab.
      6. Click Apply or OK to save the changes.


    Step 2: Modify the database connection information in the proxool-mysql-client project

    Modify the database connection information in the db.properties file in the proxool-mysql-client/src/main/resources/ directory based on the connection string obtained in the "Prerequisites" section.

    Here is an example:

    • The endpoint is t5******.********.oceanbase.cloud.
    • The access port is 3306.
    • The name of the database to be accessed is test.
    • The instance account is test_user.
    • The password is ******.

    The sample code is as follows:

    ...
    jdbc-1.proxool.driver-url=jdbc:mysql://t5******.********.oceanbase.cloud:3306/test?useSSL=false
    jdbc-1.user=test_user
    jdbc-1.password=******
    ...
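The lines elided above (`...`) are where Proxool's pool-level settings usually live. The following is a hedged sketch using Proxool 0.9.1's standard property names; the alias, driver class, and pool sizes are illustrative assumptions, not values taken from the sample project:

```properties
# Hypothetical Proxool pool settings -- adjust to your project's needs.
jdbc-1.proxool.alias=obcloud
jdbc-1.proxool.driver-class=com.mysql.jdbc.Driver
jdbc-1.proxool.maximum-connection-count=10
jdbc-1.proxool.minimum-connection-count=2
jdbc-1.proxool.house-keeping-test-sql=SELECT 1
```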
    

    Step 3: Run the proxool-mysql-client project

    1. In the navigation pane of the project, find and expand the src/main/java/com.example directory.

    2. Right-click the Main file and choose Run 'Main.main()'.

    3. IntelliJ IDEA automatically compiles and runs the project and displays the output in the Run panel.


    4. You can also execute the following SQL statement in OceanBase Client (OBClient) to view the results:

      obclient [(none)]> SELECT * FROM test.test_proxool;
      

      The return result is as follows:

      +------+---------------+
      | c1   | c2            |
      +------+---------------+
      |    6 | test_update   |
      |    7 | test_insert7  |
      |    8 | test_insert8  |
      |    9 | test_insert9  |
      |   10 | test_insert10 |
      +------+---------------+
      5 rows in set
      

    Project code

    Download the project code, which is a package named proxool-mysql-client.zip.

    Decompress the package to obtain a folder named proxool-mysql-client. The directory structure is as follows:

    proxool-mysql-client
    ├── lib
    │    ├── proxool-0.9.1.jar
    │    └── proxool-cglib.jar
    ├── src
    │   └── main
    │       ├── java
    │       │   └── com
    │       │       └── example
    │       │           └── Main.java
    │       └── resources
    │           └── db.properties
    └── pom.xml
    

    The files and directories are described as follows:

    • lib: a directory that stores the dependency libraries required by the project.
      • proxool-0.9.1.jar: the library file of the Proxool connection pool.
      • proxool-cglib.jar: the CGLib library file used by the Proxool connection pool.
    • src: the root directory that stores the source code.
      • main: a directory that stores the main code, including the major logic of the application.
        • java: a directory that stores the Java source code.
          • com: a directory that stores the Java package.
            • example: a directory that stores the packages of the sample project.
              • Main.java: the main class of the sample, which contains the table creation, data insertion, data deletion, data modification, and data query logic.
        • resources: a directory that stores resource files, including configuration files.
          • db.properties: the configuration file of the connection pool, which contains the database connection parameters.
    • pom.xml: the configuration file of the Maven project, which is used to manage project dependencies and build settings.
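To make the relationship between db.properties and Main.java concrete: with Proxool's property-file configuration, each `jdbc-<n>` block typically defines a pool whose name comes from its `proxool.alias` key, and application code then opens connections through the `proxool.<alias>` JDBC URL. The following self-contained sketch (all property values hypothetical, not taken from the sample project) illustrates that alias-to-URL mapping:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ProxoolConfigSketch {
    // Hypothetical db.properties content mirroring the sample project's layout.
    static final String SAMPLE =
            "jdbc-1.proxool.alias=obcloud\n"
            + "jdbc-1.proxool.driver-url=jdbc:mysql://example-host:3306/test?useSSL=false\n"
            + "jdbc-1.user=test_user\n"
            + "jdbc-1.password=secret\n";

    // Derives the JDBC URL an application would pass to
    // DriverManager.getConnection(...) once Proxool has registered the pool.
    static String poolUrl() {
        Properties props = new Properties();
        try {
            props.load(new StringReader(SAMPLE));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return "proxool." + props.getProperty("jdbc-1.proxool.alias");
    }

    public static void main(String[] args) {
        System.out.println(poolUrl()); // prints: proxool.obcloud
    }
}
```

In the real project, Proxool performs this registration itself when the property file is loaded; the sketch only demonstrates the naming convention, without requiring a database.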

    Code in pom.xml

    pom.xml is the configuration file of the Maven project, which defines the dependencies, plug-ins, and build rules of the project. Maven is a Java project management tool that can automatically download dependencies and compile and package projects.

    Perform the following steps to configure the pom.xml file:

    1. Declare the file.

      Declare the file as an XML document that uses XML version 1.0 and UTF-8 character encoding.

      The sample code is as follows:

      <?xml version="1.0" encoding="UTF-8"?>
      
    2. Configure namespaces and the POM model version.

      1. xmlns: the default XML namespace for the POM, which is set to http://maven.apache.org/POM/4.0.0.
      2. xmlns:xsi: the XML namespace for XML elements prefixed with xsi, which is set to http://www.w3.org/2001/XMLSchema-instance.
      3. xsi:schemaLocation: the location of an XML schema definition (XSD) file. The value consists of two parts: the default XML namespace (http://maven.apache.org/POM/4.0.0) and the URI of the XSD file (http://maven.apache.org/xsd/maven-4.0.0.xsd).
      4. <modelVersion>: the POM model version used by the POM file, which is set to 4.0.0.

      The sample code is as follows:

      <project xmlns="http://maven.apache.org/POM/4.0.0"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
      
       <!-- Other configurations -->
      
      </project>
      
    3. Configure basic information.

      1. <groupId>: the ID of the group to which the project belongs, which is set to com.example.
      2. <artifactId>: the name of the project, which is set to proxool-mysql-client.
      3. <version>: the project version, which is set to 1.0-SNAPSHOT.

      The sample code is as follows:

          <groupId>com.example</groupId>
          <artifactId>proxool-mysql-client</artifactId>
          <version>1.0-SNAPSHOT</version>
      
    4. Configure the attributes of the project source file.

      Specify maven-compiler-plugin as the compiler plug-in of Maven, and set the source code version and target code version of the compiler to Java 8. This means that the project source code is compiled by using Java 8 and the compiled bytecode is compatible with the Java 8 runtime environment. This ensures that Java 8 syntax and characteristics can be correctly processed during the compilation and running of the project.

      Note

      Java 1.8 and Java 8 are different names for the same version.

      The sample code is as follows:

          <build>
              <plugins>
                  <plugin>
                      <groupId>org.apache.maven.plugins</groupId>
                      <artifactId>maven-compiler-plugin</artifactId>
                      <configuration>
                          <source>8</source>
                          <target>8</target>
                      </configuration>
                  </plugin>
              </plugins>
          </build>
      
    5. Configure the components on which the project depends.

      Define the components on which the project depends by using <dependency>.

      1. Add the mysql-connector-java library for connecting to and operating the database and configure the following parameters:

        1. <groupId>: the ID of the group to which the dependency belongs, which is set to mysql.
        2. <artifactId>: the name of the dependency, which is set to mysql-connector-java.
        3. <version>: the version of the dependency, which is set to 5.1.47.

        The sample code is as follows:

                <dependency>
                    <groupId>mysql</groupId>
                    <artifactId>mysql-connector-java</artifactId>
                    <version>5.1.47</version>
                </dependency>
        
      2. Add the proxool-cglib dependency library, which is a CGLib library for the Proxool connection pool, and configure the following parameters:

        1. <groupId>: the ID of the group to which the dependency belongs, which is set to proxool.
        2. <artifactId>: the name of the dependency, which is set to proxool-cglib.
        3. <version>: the version of the dependency, which is set to 0.9.1.

        The sample code is as follows:

                <dependency>
                    <groupId>proxool</groupId>
                    <artifactId>proxool-cglib</artifactId>
                    <version>0.9.1</version>
                </dependency>
        
      3. Add the proxool dependency library, which is the core library of the Proxool connection pool, and configure the following parameters:

        1. <groupId>: the ID of the group to which the dependency belongs, which is set to proxool.
        2. <artifactId>: the name of the dependency, which is set to proxool.
        3. <version>: the version of the dependency, which is set to 0.9.1.

        The sample code is as follows:

                <dependency>
                    <groupId>proxool</groupId>
                    <artifactId>proxool</artifactId>
                    <version>0.9.1</version>
                </dependency>
        
      4. Add the commons-logging dependency library, which is a general log library for recording logs in the application, and configure the following parameters:

        1. <groupId>: the ID of the group to which the dependency belongs, which is set to commons-logging.
        2. <artifactId>: the name of the dependency, which is set to commons-logging.
        3. <version>: the version of the dependency, which is set to 1.2.

        The sample code is as follows:

                <dependency>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                    <version>1.2</version>
                </dependency>
        

    Code in db.properties

    db.properties is a sample configuration file of the connection pool.

    Note

    When you use the .properties file to configure the Proxool connection pool, observe the following rules:

    1. Each connection pool is identified by a unique custom name prefixed with jdbc.
    2. Proxool attributes are prefixed with proxool.. You can configure the Proxool connection pool based on these attributes.
    3. Attributes that are not prefixed with jdbc will be ignored and not used by the Proxool connection pool.
    4. Attributes that are not prefixed with proxool. are passed to the actual database connections, namely, to the database driver.
    For more information about how to configure the Proxool connection pool, see Configuration.
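
    The rules above can be illustrated with a hypothetical pool named jdbc-0 (the names and values here are for illustration only and are not part of the sample project):

    ```properties
    # "jdbc-0" names the pool; the "proxool." prefix marks Proxool attributes.
    jdbc-0.proxool.alias=DEMO
    jdbc-0.proxool.driver-class=com.mysql.jdbc.Driver
    # No "proxool." prefix: this attribute is passed through to the database driver.
    jdbc-0.user=demo_user
    # No "jdbc" prefix: this line is ignored by Proxool.
    some.other.setting=ignored
    ```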

    The db.properties file in this topic is a sample configuration file used to configure the connection pool attributes of a data source named jdbc-1. Perform the following steps to configure the db.properties file:

    1. Set the alias of the data source to TEST.

      The sample code is as follows:

      jdbc-1.proxool.alias=TEST
      
    2. Configure database connection parameters.

      1. Specify the driver class name, which is set to com.mysql.jdbc.Driver.
      2. Specify the URL for connecting to the database, including the host IP address, port number, database to be accessed, and additional connection attributes.
      3. Specify the username for connecting to the database.
      4. Specify the password for connecting to the database.

      The sample code is as follows:

      jdbc-1.proxool.driver-class=com.mysql.jdbc.Driver
      jdbc-1.proxool.driver-url=jdbc:mysql://$host:$port/$database_name?useSSL=false
      jdbc-1.user=$user_name
      jdbc-1.password=$password
      

      The parameters are described as follows:

      • $host: the access address of OceanBase Cloud. The value is sourced from the -h parameter in the connection string.
      • $port: the access port of OceanBase Cloud. The value is sourced from the -P parameter in the connection string.
      • $database_name: the name of the database to be accessed. The value is sourced from the -D parameter in the connection string.
      • $user_name: the account name. The value is sourced from the -u parameter in the connection string.
      • $password: the account password. The value is sourced from the -p parameter in the connection string.
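
      For example, with hypothetical values filled in (these values are for illustration only; replace them with your actual connection information):

      ```properties
      jdbc-1.proxool.driver-class=com.mysql.jdbc.Driver
      jdbc-1.proxool.driver-url=jdbc:mysql://10.0.0.1:3306/test?useSSL=false
      jdbc-1.user=test_user
      jdbc-1.password=******
      ```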
    3. Configure other parameters of the Proxool connection pool.

      1. Set the maximum number of connections in the connection pool to 8.
      2. Set the minimum number of connections in the connection pool to 5.
      3. Set the prototype count (the number of spare connections to keep available) to 4.
      4. Enable the Verbose mode to show more log information.
      5. Set the statistics collection cycles of the connection pool to 10s, 1 minute, and 1 day.
      6. Set the log level of the statistics to ERROR.

      The sample code is as follows:

      jdbc-1.proxool.maximum-connection-count=8
      jdbc-1.proxool.minimum-connection-count=5
      jdbc-1.proxool.prototype-count=4
      jdbc-1.proxool.verbose=true
      jdbc-1.proxool.statistics=10s,1m,1d
      jdbc-1.proxool.statistics-log-level=error
      

    Notice

    The actual parameter configurations depend on the project requirements and database characteristics. We recommend that you adjust and configure the parameters based on the actual situation. For more information about parameters of the Proxool connection pool, see Properties.

    General parameters

    Parameter Default value Description
    alias N/A The alias of the connection pool. You can identify a connection pool by using an alias. This is helpful when multiple connection pools exist.
    driver-class N/A The class name of the database driver.
    driver-url N/A The URL for connecting to the database, including the host IP address, port number, schema to be accessed, and optional database driver parameters.
    user N/A The username for connecting to the database. This attribute is not prefixed with proxool. and is passed through to the database driver.
    password N/A The password for connecting to the database.
    maximum-connection-count 15 The maximum number of connections allowed in the connection pool. The default value is 15, which indicates that at most 15 connections can be created in the connection pool.
    minimum-connection-count 5 The minimum number of connections in the connection pool. The default value is 5, which indicates that the connection pool contains at least five connections.
    prototype-count 0 The number of prototype connections in the connection pool. The default value is 0, which indicates that the connection pool will not actively create extra connections.
    verbose false Specifies whether to enable the Verbose mode for the connection pool. The default value is false, which indicates the Quiet mode.
    When verbose is set to true, the connection pool returns more detailed information to facilitate debugging and monitoring for developers. The information can include the status of the connection pool, the creation and release of connections, and the usage of connections.
    Enabling the Verbose mode can help developers better understand the running status of the connection pool and check whether connections are properly allocated and recycled. This is very helpful in troubleshooting connection leaks and performance issues, as well as in system tuning.
    In a production environment, we recommend that you do not set verbose to true. This is because in Verbose mode, a large amount of information is generated, which can compromise system performance and affect the log file size. We recommend that you set verbose to false in general cases and temporarily enable the Verbose mode for debugging and monitoring when necessary.
    statistics null The sampling cycles of usage statistics of the connection pool. You can specify multiple comma-separated values in the format of time + unit. For example, 10s,15m indicates that statistics are sampled every 10 seconds and every 15 minutes. Supported units are s (seconds), m (minutes), h (hours), and d (days). The default value is null, which specifies to disable statistics collection.
    When the statistics parameter is specified, statistics of the connection pool, such as the number of active connections, number of idle connections, and number of connection requests, are periodically collected. The sampling cycles determine the granularity and sampling rate of statistics.
    statistics-log-level null The log level of statistics, namely, the trace type of log statistics. Supported log levels are DEBUG, INFO, WARN, ERROR, and FATAL. The default value is null, which specifies not to record statistics logs.
    When the statistics-log-level parameter is specified, the connection pool records generated statistics at the specified log level. The statistics can include the status of the connection pool, the creation and release of connections, and the usage of connections.
    test-after-use N/A Specifies whether to verify a connection after it is closed. If you set the value to true and specify house-keeping-test-sql, each connection is verified when it is closed, namely returned to the connection pool. A connection that fails the verification will be abandoned.
    When a connection is no longer required, it is released and returned to the connection pool. test-after-use is specified so that connections returned to the connection pool are verified, thereby ensuring the availability and validity of the connections. Generally, connections are verified by using the SQL statement specified by house-keeping-test-sql.
    After the test-after-use feature is enabled, unavailable connections can be detected and removed from the connection pool in a timely manner, thereby preventing the application from obtaining an invalid connection. This can also improve the stability and reliability of the application.
    To use the test-after-use feature, you must configure the house-keeping-test-sql parameter, which specifies the SQL statement for verifying connections. This way, the connection pool can verify connections based on the rule defined in house-keeping-test-sql.
    house-keeping-test-sql N/A The SQL statement for verifying idle connections in the connection pool. When the housekeeping thread of the connection pool detects idle connections, it executes this SQL statement to verify them. The verification SQL statement should execute quickly, for example, a statement that queries the current date. If this parameter is not specified, connections are not verified. The SELECT CURRENT_DATE or SELECT 1 statement can be used in the MySQL compatible mode. The SELECT sysdate FROM DUAL or SELECT 1 FROM DUAL statement can be used in the Oracle compatible mode.
    trace false Specifies whether to record each SQL call in logs. If you set the value to true, each SQL call is recorded in logs at the DEBUG level, and the execution time is also recorded. You can also register with ConnectionListener to obtain the information. For more information about ConnectionListener, see ProxoolFacade. The default value is false.
    After the trace feature is enabled, a large number of logs can be generated, especially in the case of concurrent or frequent SQL calls. In a production environment, we recommend that you do not enable the trace feature to avoid generating excessive logs and compromising system performance.
    maximum-connection-lifetime 4 hours The maximum lifetime of a connection, namely the longest time for which a connection can exist before it is destroyed, in milliseconds. The default value is 4 hours (14,400,000 ms).
    The lifetime of a connection refers to the duration from when the connection is created to when it is destroyed. You can configure the maximum-connection-lifetime parameter to limit the maximum time for which a connection can exist in the connection pool. This prevents connections from lingering unused for a long time and occupying resources.
    maximum-active-time 5 minutes The maximum active period of a thread. When the housekeeping thread of the connection pool detects a thread whose active period exceeds the specified value, it will terminate the thread. Therefore, you must specify a value greater than the longest response time expected. The default value is 5 minutes.
    The housekeeping thread will terminate excess available connections, such as connections that are not in use and connections that have been active for a period of time longer than the specified value of this parameter. The number of connections retained must be equal to or greater than the value of minimum-connection-count. The housekeeping thread periodically checks connections at an interval specified by house-keeping-sleep-time.
    maximum-new-connections N/A The maximum number of new connections established at a time. This parameter has been deprecated. We recommend that you use the simultaneous-build-throttle parameter instead.
    simultaneous-build-throttle 10 The maximum number of connections that can be simultaneously established at any time, namely, the maximum number of new connections that are established but are unavailable. The establishment of a connection can involve multiple threads, for example, when a connection is established as needed. In addition, it takes time for an established connection to become available. Therefore, a mechanism is required to prevent a large number of threads from deciding to establish connections at the same time.
    The simultaneous-build-throttle parameter aims to limit the number of new connections established at the same time, so as to control the concurrency of the connection pool. When the maximum number of concurrent connections is reached, threads that request new connections will be blocked until a connection is available or the specified timeout value is reached.
    You can configure the simultaneous-build-throttle parameter to balance the concurrency and the resource consumption of the connection pool. The default value is 10, which indicates that at most 10 connections can be established at the same time.
    overload-without-refusal-lifetime 60 seconds A value for determining the status of the connection pool. If the connection pool has refused a connection request within the specified time, the connection pool is considered overloaded. The default value is 60 seconds.
    test-before-use N/A Specifies whether to verify each connection before it is provided. If you set the value to true, each connection is verified by using the SQL statement specified by house-keeping-test-sql before it is provided to the application. If a connection fails the verification, it is abandoned. The connection pool selects another available connection. If all connections fail the verification, a new connection is created. If the new connection fails the verification, an SQLException is thrown.
    For a MySQL database, the autoReconnect parameter must be added to the connection parameters and be set to true. Otherwise, reconnection is not supported even if test-before-use is set to true.
    fatal-sql-exception null A feature for detecting and handling SQLExceptions. The value is a list of message segments separated with commas (,). When an SQLException occurs, its message is compared against these message segments. If the message of the exception contains any specified segment (case-sensitive), the exception is considered fatal and the connection is abandoned. In either case, the SQLException is rethrown so that the user knows what happened. You can also configure another exception to be thrown. For more information, see the description of the fatal-sql-exception-wrapper-class parameter. The default value is null.
    If the fatal-sql-exception-wrapper-class parameter is specified, you can configure a substitute exception class to be thrown. This allows you to define the methods for handling SQLExceptions.
    fatal-sql-exception-wrapper-class null The wrapper class for fatal SQLExceptions. If the fatal-sql-exception parameter is specified, after a fatal SQLException occurs, the default behavior is to discard the connection that caused the fatal SQLException and throw the original exception to the user. By using the fatal-sql-exception-wrapper-class parameter, you can wrap the SQLException in any exception inherited from SQLException or RuntimeException. If you do not want to build an exception class, you can use the FatalSQLException or FatalRuntimeException class provided by Proxool. To use these classes, set fatal-sql-exception-wrapper-class to org.logicalcobwebs.proxool.FatalSQLException or org.logicalcobwebs.proxool.FatalRuntimeException. The default value is null, which indicates that fatal SQLExceptions are not wrapped.
    The wrapper class must be a subclass of SQLException or RuntimeException.
    house-keeping-sleep-time 30s The sleeping time of the housekeeping thread of the connection pool. The housekeeping thread checks the status of all connections and determines whether to destroy or create connections. The default value is 30s, which indicates that the housekeeping thread executes the maintenance task every 30s.
    injectable-connection-interface N/A Allows Proxool to implement the methods defined in the delegate Connection object.
    injectable-statement-interface N/A Allows Proxool to implement the methods defined in the delegate Statement object.
    injectable-prepared-statement-interface N/A Allows Proxool to implement the methods defined in the delegate PreparedStatement object.
    injectable-callable-statement-interface N/A Allows Proxool to implement the methods defined in the delegate CallableStatement object.
    jndi-name N/A The registered name of the connection pool in Java Naming and Directory Interface (JNDI).
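
    For example, connection verification can be enabled by combining the test-before-use, test-after-use, and house-keeping-test-sql parameters described above. The following is a sketch for the MySQL compatible mode; adjust the values based on your project requirements:

    ```properties
    # Verify connections with a fast statement (MySQL compatible mode).
    jdbc-1.proxool.house-keeping-test-sql=SELECT 1
    # Verify each connection before it is handed to the application.
    jdbc-1.proxool.test-before-use=true
    # Verify each connection when it is returned to the pool.
    jdbc-1.proxool.test-after-use=true
    ```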

    Code in Main.java

    The Main.java file is a part of the sample application. It demonstrates the process of obtaining a database connection from the Proxool connection pool, executing a series of database operations, such as table creation, data insertion, data deletion, data modification, and data query, and returning the query result.

    Perform the following steps to configure the Main.java file:

    1. Import the required classes and interfaces.

      Define the package where the code resides and import relevant Proxool and JDBC classes. These classes are used to configure and manage the database connection pool and execute SQL statements. The Proxool connection pool can improve the database performance and reliability. Perform the following steps:

      1. Define the package where the code resides as com.example. This package stores the current Java classes.
      2. Import the Proxool configuration class org.logicalcobwebs.proxool.configuration.PropertyConfigurator.
      3. Import the input stream class java.io.InputStream for reading configuration files.
      4. Import the JDBC Connection class java.sql.Connection.
      5. Import the JDBC DriverManager class java.sql.DriverManager.
      6. Import the JDBC ResultSet class java.sql.ResultSet.
      7. Import the JDBC Statement class java.sql.Statement.
      8. Import the Properties class java.util.Properties for loading configuration files.

      The sample code is as follows:

      package com.example;
      
      import org.logicalcobwebs.proxool.configuration.PropertyConfigurator;
      import java.io.InputStream;
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;
      import java.util.Properties;
      
    2. Define class names and methods.

      Define an entry method for the Java application. Obtain the database connection information from the configuration file. After establishing a database connection by using the Proxool driver, call the defined methods in sequence to execute DDL, DML, and query statements. Catch possible exceptions and print the stack trace. Perform the following steps:

      1. Define a public class named Main.

        1. Define a private static constant named DB_PROPERTIES_FILE to indicate the path where the database configuration file is located. This constant can be referenced in code to load and read the configuration file.

        2. Define a public static method named main, which is used as the execution start point of the application.

          1. Capture code blocks with possible exceptions.

            1. Create a Properties object for reading attribute values from the configuration file.
            2. Use the class loader of the Main class to obtain the input stream of the configuration file.
            3. Use the input stream to load the attributes in the configuration file to the Properties object.
            4. Configure the connection pool based on the loaded attribute values.
            5. Dynamically load the Proxool database driver.
            6. Establish a database connection by using the Proxool driver.
            7. Create a Statement object.
            8. Call the defined method executeDDLStatements() to execute a DDL statement to create a table.
            9. Call the defined method executeDMLStatements() to execute DML statements to insert, update, and delete data.
            10. Call the defined method executeQueryStatements() to execute a query statement to obtain data.
          2. Catch possible exceptions and print the stack trace.

      2. Define methods for creating tables, executing DML statements, and querying data.

      The sample code is as follows:

      public class Main {
          private static final String DB_PROPERTIES_FILE = "/db.properties";
      
          public static void main(String[] args) {
              try {
                  Properties properties = new Properties();
                  InputStream is = Main.class.getResourceAsStream(DB_PROPERTIES_FILE);
                  properties.load(is);
                  PropertyConfigurator.configure(properties);
      
                  Class.forName("org.logicalcobwebs.proxool.ProxoolDriver");
                  try (Connection conn = DriverManager.getConnection("proxool.TEST");
                      Statement stmt = conn.createStatement()) {
                      executeDDLStatements(stmt);
                      executeDMLStatements(stmt);
                      executeQueryStatements(stmt);
                  }
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }
      
          // Define a method for creating tables.
          // Define a method for executing DML statements.
          // Define a method for querying data.
      }
      
    3. Define a method for creating tables.

      Define a private static method executeDDLStatements() for executing DDL statements, including table creation statements. Perform the following steps:

      1. Define a private static method executeDDLStatements(). The method receives a Statement object as parameters and can throw an exception.
      2. Call the execute() method to execute an SQL statement to create a table named test_proxool. The table has two columns: c1 and c2, which are respectively of the INT and VARCHAR(32) types.

      The sample code is as follows:

          private static void executeDDLStatements(Statement stmt) throws Exception {
              stmt.execute("CREATE TABLE test_proxool (c1 INT, c2 VARCHAR(32))");
          }
      
    4. Define a method for executing DML statements.

      Define a private static method executeDMLStatements() for executing DML statements to insert, delete, and update data. Perform the following steps:

      1. Define a private static method executeDMLStatements(). The method receives a Statement object as parameters. If an exception occurs during the execution, the method throws an exception.
      2. Use a for loop to iterate 10 times. In each iteration, call the execute() method to execute an INSERT statement that inserts the value of the i variable and a related string value into the test_proxool table.
      3. Execute a DELETE statement to delete rows whose c1 column values are smaller than or equal to 5 from the test_proxool table.
      4. Execute an UPDATE statement to update the c2 column values of the rows in the test_proxool table whose c1 column values are 6 to test_update.

      The sample code is as follows:

          private static void executeDMLStatements(Statement stmt) throws Exception {
              for (int i = 1; i <= 10; i++) {
                  stmt.execute("INSERT INTO test_proxool VALUES (" + i + ",'test_insert" + i + "')");
              }
              stmt.execute("DELETE FROM test_proxool WHERE c1 <= 5");
              stmt.execute("UPDATE test_proxool SET c2 = 'test_update' WHERE c1 = 6");
          }
      
    5. Define a method for querying data.

      Define a private static method executeQueryStatements() for executing the SELECT statement and processing the result. Perform the following steps:

      1. Define a private static method executeQueryStatements(). The method receives a Statement object as parameters. If an exception occurs during the execution, the method throws an exception.
      2. Call the executeQuery() method to execute a SELECT statement and store the result in a ResultSet object named rs. Here, all data in the test_proxool table is queried. Use a try-with-resources block to ensure that the ResultSet object is automatically closed after use.
      3. Use a while loop with the next() method to iterate over the rows in the ResultSet object named rs. The rs.next() method moves the cursor to the next row in the result set and returns true if a next row exists, or false otherwise. While rs.next() returns true, the code in the loop processes the data in the current row. After all rows are processed, rs.next() returns false and the loop ends.
      4. Call the getInt() and getString() methods to obtain the values of the specified columns in the current row and print them to the console. Here, the values of the c1 and c2 columns are printed: the getInt() method obtains integer values and the getString() method obtains string values.

      The sample code is as follows:

          private static void executeQueryStatements(Statement stmt) throws Exception {
              try (ResultSet rs = stmt.executeQuery("SELECT * FROM test_proxool")) {
                  while (rs.next()) {
                      System.out.println(rs.getInt("c1") + "   " + rs.getString("c2"));
                  }
              }
          }
      
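    The sample code leaves connection pool shutdown to the JVM. If you need to release the pool's connections explicitly before the application exits, Proxool provides the ProxoolFacade class for this purpose. The following is a minimal sketch, not part of the sample project, and it assumes the ProxoolFacade.shutdown(int) method of your Proxool version:

    ```java
    import org.logicalcobwebs.proxool.ProxoolFacade;

    public class ShutdownExample {
        public static void main(String[] args) {
            // ... run the application logic first, then shut down all Proxool
            // pools so that pooled connections are closed before the JVM exits.
            // The argument is the delay, in milliseconds, granted to active
            // connections before they are forcibly closed.
            ProxoolFacade.shutdown(0);
        }
    }
    ```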

    Complete code

    pom.xml
    db.properties
    Main.java
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
    
        <groupId>com.example</groupId>
        <artifactId>proxool-mysql-client</artifactId>
        <version>1.0-SNAPSHOT</version>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                        <source>8</source>
                        <target>8</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    
        <dependencies>
            <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
                <version>5.1.47</version>
            </dependency>
            <dependency>
                <groupId>proxool</groupId>
                <artifactId>proxool-cglib</artifactId>
                <version>0.9.1</version>
            </dependency>
            <dependency>
                <groupId>proxool</groupId>
                <artifactId>proxool</artifactId>
                <version>0.9.1</version>
            </dependency>
            <dependency>
                <groupId>commons-logging</groupId>
                <artifactId>commons-logging</artifactId>
                <version>1.2</version>
            </dependency>
        </dependencies>
    </project>
    
    #alias: the alias of the data source
    jdbc-1.proxool.alias=TEST
    #driver-class: driver name
    jdbc-1.proxool.driver-class=com.mysql.jdbc.Driver
    #driver-url: url connection string, username and password must be determined
    jdbc-1.proxool.driver-url=jdbc:mysql://$host:$port/$database_name?useSSL=false
    jdbc-1.user=$user_name
    jdbc-1.password=$password
    #maximum-connection-count: the maximum number of connections in the pool. Default: 15
    jdbc-1.proxool.maximum-connection-count=8
    #minimum-connection-count: the minimum number of connections in the pool. Default: 5
    jdbc-1.proxool.minimum-connection-count=5
    #prototype-count: the number of spare (available) connections the pool tries to keep ready. If the number of available connections falls below this value, new connections are built, provided the maximum connection count is not exceeded. For example, with 3 active connections, 2 available connections, and a prototype-count of 4, the pool tries to open 2 more connections. Note the difference from minimum-connection-count, which counts active connections as well; prototype-count counts only spare connections
    jdbc-1.proxool.prototype-count=4
    #verbose: whether to log verbose information. Boolean value
    jdbc-1.proxool.verbose=true
    #statistics: the granularity of connection pool usage statistics, for example "10s,1m,1d" (every 10 seconds, every minute, every day)
    jdbc-1.proxool.statistics=10s,1m,1d
    #statistics-log-level: the log level used for statistics output. 'ERROR' or 'INFO'
    jdbc-1.proxool.statistics-log-level=error
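To make the prototype-count arithmetic concrete, here is a small, database-free sketch. The class and method names are illustrative only and are not part of the Proxool API; it simply models the sizing rule described in the comments above:

```java
public class PoolSizingSketch {
    // How many new connections the pool would try to open: it tops up the
    // spare (available) connections to prototypeCount, without letting the
    // total number of connections exceed maximumConnectionCount.
    static int connectionsToBuild(int active, int available,
                                  int prototypeCount, int maximumConnectionCount) {
        int wanted = Math.max(0, prototypeCount - available);
        int headroom = Math.max(0, maximumConnectionCount - (active + available));
        return Math.min(wanted, headroom);
    }

    public static void main(String[] args) {
        // The example from the comment above: 3 active, 2 available,
        // prototype-count 4, maximum-connection-count 8
        System.out.println(connectionsToBuild(3, 2, 4, 8)); // prints 2
    }
}
```

With 3 active and 2 available connections, the pool is 2 short of the prototype-count of 4, and the maximum of 8 leaves enough headroom, so 2 new connections are built.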
    
    package com.example;
    
    import org.logicalcobwebs.proxool.configuration.PropertyConfigurator;
    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;
    
    public class Main {
        private static final String DB_PROPERTIES_FILE = "/db.properties";
    
        public static void main(String[] args) {
            try {
                Properties properties = new Properties();
                // Use try-with-resources so the properties stream is always closed
                try (InputStream is = Main.class.getResourceAsStream(DB_PROPERTIES_FILE)) {
                    properties.load(is);
                }
                PropertyConfigurator.configure(properties);
    
                Class.forName("org.logicalcobwebs.proxool.ProxoolDriver");
                try (Connection conn = DriverManager.getConnection("proxool.TEST");
                    Statement stmt = conn.createStatement()) {
                    executeDDLStatements(stmt);
                    executeDMLStatements(stmt);
                    executeQueryStatements(stmt);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    
        private static void executeDDLStatements(Statement stmt) throws Exception {
            stmt.execute("CREATE TABLE test_proxool (c1 INT, c2 VARCHAR(32))");
        }
    
        private static void executeDMLStatements(Statement stmt) throws Exception {
            for (int i = 1; i <= 10; i++) {
                stmt.execute("INSERT INTO test_proxool VALUES (" + i + ",'test_insert" + i + "')");
            }
            stmt.execute("DELETE FROM test_proxool WHERE c1 <= 5");
            stmt.execute("UPDATE test_proxool SET c2 = 'test_update' WHERE c1 = 6");
        }
    
        private static void executeQueryStatements(Statement stmt) throws Exception {
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM test_proxool")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("c1") + "   " + rs.getString("c2"));
                }
            }
        }
    }
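As a sanity check on the DML above: rows 1 through 5 are deleted and row 6's c2 is rewritten, so the final query should return rows 6 through 10. This expected outcome can be sketched in plain Java with no database required; the table below is simulated with a LinkedHashMap keyed by c1 (the class name DmlSketch is illustrative, not part of the project):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DmlSketch {
    // Simulate the effect of the INSERT/DELETE/UPDATE statements in Main.java
    // on an in-memory "table" keyed by c1. Purely illustrative; no JDBC involved.
    static Map<Integer, String> simulate() {
        Map<Integer, String> table = new LinkedHashMap<>();
        for (int i = 1; i <= 10; i++) {          // INSERT rows 1..10
            table.put(i, "test_insert" + i);
        }
        table.keySet().removeIf(c1 -> c1 <= 5);  // DELETE ... WHERE c1 <= 5
        table.put(6, "test_update");             // UPDATE ... WHERE c1 = 6
        return table;
    }

    public static void main(String[] args) {
        simulate().forEach((c1, c2) -> System.out.println(c1 + "   " + c2));
    }
}
```

Running the real project should print the same five rows: 6 with `test_update`, and 7 through 10 with their original `test_insert` values.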
    

    References

    • For more information about MySQL Connector/J, see Overview of MySQL Connector/J.
    • For more information about the Proxool connection pool, see Introduction for Users.
