
OceanBase

A unified distributed database ready for your transactional, analytical, and AI workloads.

DEPLOY YOUR WAY

OceanBase Cloud

The best way to deploy and scale OceanBase

OceanBase Enterprise

Run and manage OceanBase on your own infrastructure

TRY OPEN SOURCE

OceanBase Community Edition

The free, open-source distributed database

OceanBase seekdb

The open-source, AI-native search database

Customer Stories

Real-world success stories from enterprises across diverse industries.

View All
BY USE CASES

Mission-Critical Transactions

Global & Multicloud Applications

Elastic Scaling for Peak Traffic

Real-time Analytics

Active Geo-redundancy

Database Consolidation

Resources

Comprehensive knowledge hub for OceanBase.

Blog

Live Demos

Training & Certification

Documentation

Official technical guides, tutorials, API references, and manuals for all OceanBase products.

View All
PRODUCTS

OceanBase Cloud

OceanBase Database

Tools

Connectors and Middleware

QUICK START

OceanBase Cloud

OceanBase Database

BEST PRACTICES

Practical guides for using OceanBase more effectively

Company

Learn more about OceanBase – our company, partnerships, and trust and security initiatives.

About OceanBase

Partner

Trust Center

Contact Us


All Products
    • Databases
    • OceanBase Database
    • OceanBase Cloud
    • OceanBase TuGraph
    • Interactive Tutorials
    • OceanBase Best Practices
    • Tools
    • OceanBase Cloud Platform
    • OceanBase Migration Service
    • OceanBase Developer Center
    • OceanBase Migration Assessment
    • OceanBase Admin Tool
    • OceanBase Loader and Dumper
    • OceanBase Deployer
    • Kubernetes operator for OceanBase
    • OceanBase Diagnostic Tool
    • OceanBase Binlog Service
    • Connectors and Middleware
    • OceanBase Database Proxy
    • Embedded SQL in C for OceanBase
    • OceanBase Call Interface
    • OceanBase Connector/C
    • OceanBase Connector/J
    • OceanBase Connector/ODBC
    • OceanBase Connector/NET

OceanBase Cloud

  • Product Updates & Announcements
    • What's new
      • Release notes for 2026
      • Release notes for 2025
      • Release notes for 2024
      • Release history
    • Product announcements
      • Data development module deprecation notice
      • Optimization of Backup and Restore commercialization strategy
      • Cross-AZ data transfer billing (OceanBase Cloud on AWS)
      • Database Proxy pricing update
      • AWS instance pricing adjustment
  • Product Introduction
    • Overview
    • Management mode and scenarios
    • Core features
      • High availability with cross-cloud active-active architecture
      • High availability with cross-cloud primary-standby databases
      • Multi-level caching in shared storage
      • Multi-layer online scaling and on-demand adjustment
    • Deployment modes
    • Storage architecture
    • Product specifications
    • Product billing
      • Overview
      • Instance billing
        • Tencent Cloud instance billing
        • Alibaba Cloud instance billing
        • Huawei Cloud instance billing
        • AWS instance billing
        • GCP instance billing
      • Backup and restore billing
      • SQL audit billing
      • Migrations billing
      • Database proxy billing
      • Binlog service billing
      • Overview of OceanBase Cloud support plans
      • Read-only replica billing
    • Supported database versions
  • Get Started
    • Get started with a transactional instance
    • Get started with an analytical instance
    • Get started with a Key-Value instance
  • Work with Transactional Instances
    • Overview
    • Create an instance
      • Overview
      • Create via OceanBase Cloud official website
      • Create via AWS Marketplace
      • Create via GCP Marketplace
      • Create via Huawei Cloud Marketplace
      • Create via Alibaba Cloud Marketplace
      • Create via Azure Marketplace
    • Connect to an instance
      • MySQL compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using Alibaba Cloud VPC
          • Connect using a public IP address
          • Connect using a Huawei Cloud peering connection
        • Connect with clients
          • Connect to OceanBase Cloud by using Client ODC
          • Connect to OceanBase Cloud by using a MySQL client
          • Connect to OceanBase Cloud by using OBClient
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using SpringBoot
            • SpringBatch sample application for connecting to OceanBase Cloud
            • spring-jdbc
            • SpringDataJPA sample application for connecting to OceanBase Cloud
            • Hibernate application development with OceanBase Cloud
            • Sample program for connecting to OceanBase Cloud
            • connector-j
            • Use TestContainers to connect to and use OceanBase Cloud
          • Python
            • Connect to OceanBase Cloud using mysqlclient
            • Connect to OceanBase Cloud using PyMySQL
            • Use the MySQL-connector-python driver to connect to and use OceanBase Cloud
            • Use SQLAlchemy to connect to an OceanBase Cloud database
            • Connect to an OceanBase Cloud database using Django
            • Connect to an OceanBase Cloud database by using peewee
          • C
            • Use MySQL Connector/C to connect to OceanBase Cloud
          • Go
            • Connect to OceanBase Cloud using the Go-SQL-Driver/MySQL driver
            • Connect to OceanBase Cloud using GORM
          • PHP
            • Use the EXT driver to connect to OceanBase Cloud
            • Connect to OceanBase Cloud by using the MySQLi driver
            • Use the PDO driver to connect to OceanBase Cloud
          • Rust
            • Rust application example for connecting to OceanBase Cloud
            • SeaORM example for connecting to OceanBase Cloud
          • Ruby
            • ActiveRecord sample application for OceanBase Cloud
            • Connect to OceanBase Cloud by using mysql2
            • Connect to OceanBase Cloud by using Sequel
        • Use database connection pool
          • Database connection pool configuration
          • Connect to OceanBase Cloud by using a Tomcat connection pool
          • Connect to OceanBase Cloud by using a C3P0 connection pool
          • Connect to OceanBase Cloud by using a Proxool connection pool
          • Connect to OceanBase Cloud by using a HikariCP connection pool
          • Connect to OceanBase Cloud by using a DBCP connection pool
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
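Because MySQL compatible mode speaks the MySQL wire protocol, any of the standard drivers listed above work unchanged. A minimal sketch, assuming PyMySQL and placeholder credentials; the `user@tenant#cluster` username format follows the database-proxy convention, so check the connection string shown in the console for your instance:

```python
def ob_mysql_user(user, tenant, cluster=None):
    """Build the username OceanBase expects when connecting through the
    database proxy: user@tenant, optionally suffixed with #cluster.
    (Format assumption; verify against your console's connection string.)"""
    name = f"{user}@{tenant}"
    return f"{name}#{cluster}" if cluster else name

def connect_oceanbase(host, user, password, database, port=3306):
    """Open a connection with a stock MySQL driver.  All arguments are
    placeholders taken from the OceanBase Cloud console."""
    import pymysql  # third-party driver: pip install PyMySQL
    return pymysql.connect(host=host, port=port, user=user,
                           password=password, database=database,
                           charset="utf8mb4")
```

The same parameters carry over to the connection-pool libraries above; only the pool wraps the driver, the endpoint and username stay identical.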
      • Oracle compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using a public IP address
        • Connect with clients
          • Connect to OceanBase Cloud by using OBClient
          • Connect to OceanBase Cloud by using Client ODC
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using OceanBase Connector/J
            • Connect to OceanBase Cloud by using Spring Boot
            • SpringBatch application example for connecting to OceanBase Cloud
            • Connect to OceanBase Cloud using Spring JDBC
            • Connect to OceanBase Cloud by using Spring Data JPA
            • Connect to OceanBase Cloud by using Hibernate
            • Use MyBatis to connect to OceanBase Cloud
            • Use JFinal to connect to OceanBase Cloud
          • Python
            • Python Driver for Oracle Mode
          • C
            • Connect to OceanBase Cloud using OceanBase Connector/C
            • Connect to OceanBase Cloud using OceanBase Connector/ODBC
            • Use SqlSugar to connect to OceanBase Cloud
        • Use database connection pool
          • Database connection pool configuration
          • Sample program that uses a Tomcat connection pool to connect to OceanBase Cloud
          • C3P0 connection pool connects to OceanBase Cloud
          • Connect to OceanBase Cloud using Proxool connection pool
          • Sample program that uses HikariCP to connect to OceanBase Cloud
          • Use DBCP connection pool to connect to OceanBase Cloud
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
    • Developer guide
      • MySQL compatible mode
        • Plan database objects
          • Create a database
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
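The batch-generation topic above boils down to issuing multi-row INSERTs instead of one statement per row. A hedged sketch with an illustrative table; real code should use driver-side parameter binding rather than `repr` for untrusted values:

```python
def batch_insert_sql(table, rows):
    """Build one multi-row INSERT from a list of dicts sharing the same
    keys.  Table and column names are illustrative, not from the docs."""
    cols = ", ".join(rows[0])
    values = ", ".join(
        "(" + ", ".join(repr(v) for v in row.values()) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({cols}) VALUES {values}"
```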
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use the CASE conditional operator in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
            • Use the SELECT ... LOCK IN SHARE MODE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
            • Release a savepoint
          • Commit a transaction
          • Roll back a transaction
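The transaction topics above (start, savepoint, commit, rollback) compose into a familiar pattern. A sketch of the statement sequence for a funds transfer against a hypothetical `accounts` table; on a failure after the debit you would issue `ROLLBACK TO SAVEPOINT debited` instead of continuing to `COMMIT`:

```python
def transfer_statements(src, dst, amount):
    """Statement sequence for a transfer guarded by a savepoint
    (MySQL compatible mode).  Table and ids are illustrative."""
    return [
        "BEGIN",
        f"UPDATE accounts SET balance = balance - {amount} WHERE id = {src}",
        "SAVEPOINT debited",  # partial-rollback point if the credit fails
        f"UPDATE accounts SET balance = balance + {amount} WHERE id = {dst}",
        "RELEASE SAVEPOINT debited",
        "COMMIT",
    ]
```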
      • Oracle compatible mode
        • Plan database objects
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use CASE functions in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
          • Commit a transaction
          • Roll back a transaction
    • Manage instances
      • Manage instances
        • View the instance list
        • Instance overview
        • Stop and restart instances
        • Unit migration
      • Manage tenants
        • Tenant overview
        • Create a tenant
        • Modify tenant specifications
        • Modify tenant names
        • Add an endpoint
        • Resource isolation
          • Overview
          • Manage resource groups
            • Create a resource group
            • View a resource group
            • Edit a resource group
            • Delete a resource group
          • Manage isolation rules
            • Create an isolation rule
            • View isolation rules
            • Edit an isolation rule
            • Delete an isolation rule
        • Modify primary zone
        • Modify the maximum number of connections for a tenant proxy
        • Monitor tenant performance
          • Overview
          • View performance and SQL monitoring details
          • View transaction monitoring details
          • View storage and cache monitoring details
          • View Binlog service monitoring
          • Customize a monitoring dashboard for a tenant
        • Diagnostics
          • Real-time diagnostics
            • SQL diagnostics
              • Top SQL
              • Slow SQL
              • Suspicious SQL
              • High-risk SQL
            • SQL audit
        • Manage tenant parameters
          • Manage tenant parameters
          • Parameters for tenants
          • Parameter template overview
        • Delete a tenant
        • Manage databases and accounts
          • Create accounts
          • Manage accounts
          • Create a database (MySQL compatible mode)
          • Manage databases (MySQL compatible mode)
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor multidimensional metrics of an instance
        • Monitor the performance of hosts in an instance
        • Monitor database proxy
        • Monitor database proxy hosts
        • Monitor cross-cloud network performance
        • Customize a monitoring dashboard for an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Manage instance parameters
        • Manage parameters
        • Parameters for cluster instances
      • Change instance configurations
        • Enable storage auto-scaling
        • View history of configuration changes
        • Change configuration
        • Change configuration temporarily
        • Switch the deployment mode
      • Manage standby instances
        • Overview
        • Create a standby instance
        • Create a cross-cloud standby instance
        • Create a standby instance for an Alibaba Cloud primary instance
        • View details of primary and standby instances
        • Configure global endpoint
        • Enable automatic forwarding of write requests for standby databases
        • Primary-standby instance switchover
        • Initiate failover
        • Detach a standby instance
        • Release a standby instance
      • Release an instance
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Backup and restore
        • Overview
        • Backup strategy
        • Initiate a backup immediately
        • Data backup
        • Initiate a restore
        • Data restore
        • Restore data from the instance recycle bin
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • One-click diagnostics
          • Initiate one-click diagnostics
          • View one-click diagnostic report
            • Exceptions
            • Real-time diagnostics
            • Optimization suggestions
            • Capacity management
            • Security management
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Request analysis
            • Request analysis
        • Root cause diagnostics
          • Exception handling
          • Enable system autonomy
        • SQL audit
        • Materialized view analysis
        • Optimization center
          • Optimization suggestions
          • Manage active outlines
          • SQL review
          • View the optimization history
      • Manage tags
      • Manage read-only replicas
        • Overview
        • Instance read-only replicas
          • Add a read-only replica to an instance
          • View read-only replicas of an instance
          • Manage read-only replicas of an instance
          • Delete a read-only replica of an instance
        • Tenant read-only replicas
          • Add a read-only replica to a tenant
          • View read-only replicas of a tenant
          • Manage read-only replicas of a tenant
          • Delete a read-only replica of a tenant
      • Manage JVM-dependent services
    • Data source management
      • Create a data source
      • Manage data sources
      • User privileges
        • User privileges for compatibility assessment
        • User privileges for data migration
        • User privileges for performance assessment
        • User privileges for data archiving
        • User privileges for data cleanup
      • Connect via private network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
        • Private IP address segments
      • Connect via public network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Manage recycle bin
      • Instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance-level recycle bin
        • Tenant-level recycle bin
  • Work with Analytical Instances
    • Overview
    • Core features
    • Create an instance
    • Connect to an instance
      • Overview
      • Get connection string
        • Overview
        • Connect using AWS PrivateLink
        • Connect using a public IP address
      • Connect with clients
        • Connect to OceanBase Cloud by using Client ODC
        • Connect to OceanBase Cloud by using a MySQL client
        • Connect to OceanBase Cloud by using OBClient
      • Connect with drivers
        • Java
          • Connect to OceanBase Cloud by using Spring Boot
          • Connect to OceanBase Cloud by using Spring Batch
          • Connect to OceanBase Cloud by using Spring Data JDBC
          • Connect to OceanBase Cloud by using Spring Data JPA
          • Connect to OceanBase Cloud by using Hibernate
          • Connect to OceanBase Cloud by using MyBatis
          • Connect to OceanBase Cloud using MySQL Connector/J
        • Python
          • Connect to OceanBase Cloud by using mysqlclient
          • Connect to OceanBase Cloud by using PyMySQL
          • Connect to OceanBase Cloud using MySQL Connector/Python
        • C
          • Connect to OceanBase Cloud using MySQL Connector/C
        • Go
          • Connect to OceanBase Cloud using Go-SQL-Driver/MySQL
        • PHP
          • Connect to OceanBase Cloud using PHP
      • Use database connection pool
        • Database connection pool configuration
        • Connect to OceanBase Cloud by using a Tomcat connection pool
        • Connect to OceanBase Cloud by using a C3P0 connection pool
        • Connect to OceanBase Cloud by using a Proxool connection pool
        • Connect to OceanBase Cloud by using a HikariCP connection pool
        • Connect to OceanBase Cloud by using a DBCP connection pool
        • Connect to OceanBase Cloud by using Commons Pool
        • Connect to OceanBase Cloud by using a Druid connection pool
    • Data table design
      • Table overview
      • Best practices
        • Unit 1: Best practices for optimizing storage structures and query performance
        • Unit 2: Best practices for creating special indexes
    • Export data
    • OceanBase data processing
    • Query acceleration
      • Statistics
      • Materialized views for query acceleration
      • Select a query parallelism level
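Per the query-acceleration topics above, the parallelism level can be chosen per statement with OceanBase's `PARALLEL` hint. A minimal helper, assuming the hint syntax from OceanBase's SQL tuning documentation; the right degree of parallelism depends on the tenant's CPU allocation:

```python
def with_parallel_hint(select_sql, dop):
    """Inject the PARALLEL hint into a SELECT to set its degree of
    parallelism (DOP).  select_sql is a caller-supplied query."""
    head, rest = select_sql.split(None, 1)
    if head.upper() != "SELECT":
        raise ValueError("expected a SELECT statement")
    return f"SELECT /*+ PARALLEL({dop}) */ {rest}"
```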
    • Manage instances
      • Instance overview
      • Change configuration
      • Modify primary zone
      • Manage parameters
      • Backup and restore
        • Backup overview
        • Backup strategies
        • Immediate backup
        • Data backup
        • Initiate restore
        • Data restore
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor the performance of hosts in an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Optimization management
            • Manage active outlines
            • View the optimization history
          • Request analysis
            • Request analysis
      • Stop and restart instances
      • Release instances
      • Manage databases and accounts
        • Create and manage accounts
        • Create a database
        • Manage databases
      • Manage tags
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Performance diagnosis and tuning
      • Use the DBMS_XPLAN package for performance diagnostics
      • Use the GV$SQL_PLAN_MONITOR view for performance analysis
      • Views related to AP performance analysis
    • Performance testing
    • Product integration
    • Manage recycle bin
      • View instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance recycle bin
  • Work with Key-Value Instances
    • Try out Key-Value instances
      • Create an instance
      • Create a tenant
      • Create an account for a database user
      • OBKV HBase data operation examples
    • Use Table model
      • Create an instance
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-Table
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
    • Use HBase model
      • OBKV-HBase Overview
      • Create an instance
      • Develop in HBase model
        • Connect to an instance by using the OBKV-HBase client
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-HBase
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle Bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
      • Performance test
    • Connect Key-Value instances
      • Overview
      • Connect using a public IP address
  • Migrations
    • Data migration and import solutions
    • Data assessment and migration quick start
    • Assess compatibility
      • Overview
      • Perform online assessment
      • Perform offline assessment
      • Manage compatibility assessment tasks
        • View a compatibility assessment task
        • View and download a compatibility assessment report
        • Stop a compatibility assessment task
        • Delete a compatibility assessment task
      • Obtain files for upload
      • Configure PrivateLink
      • Add an IP address to an allowlist
    • Migrate data
      • Overview
      • Migrations specification
      • Purchase a data migration instance
      • Migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database
      • Migrate data from a MySQL-compatible tenant of OceanBase Database to a MySQL database
      • Migrate data between OceanBase database tenants of the same compatibility mode
      • Migrate data between OceanBase database tenants of different compatibility modes
      • Migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database
      • Migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database
      • Configure a two-way synchronization task
      • Migrate data from an OceanBase database to a Kafka instance
      • Migrate data from a TiDB database to a MySQL-compatible tenant of OceanBase Database
      • Migrate incremental data from a MySQL-compatible tenant of OceanBase Database to a TiDB Database
      • Migrate data from a PostgreSQL database to an OceanBase database
      • Migrate incremental data from an OceanBase Database to a PostgreSQL database
      • Manage data migration tasks
        • View details of a data migration task
        • Rename a data migration task
        • View and modify migration objects
        • View and modify migration parameters
        • Configure alert monitoring
        • Manage data migration tasks by using tags
        • Start, stop, and resume a data migration task
        • Clone a data migration task
        • Terminate and release a data migration task
      • Features
        • Custom DML/DDL configurations
        • DDL synchronization scope
        • Use SQL conditions to filter data
        • Rename a migration object
        • Set an incremental synchronization timestamp
        • Instructions on schema migration
        • Configure and modify matching rules
        • Wildcard rules
        • Import migration objects
        • Download conflict data
        • Change a topic
        • Column filtering
        • Data formats
      • Authorize an Alibaba Cloud account
      • SQL statements for querying table objects
      • Online DDL tools
      • Create a trigger
      • Modify the log level of a self-managed PostgreSQL instance
      • Supported DDL statements for synchronization and their limitations
        • DDL synchronization from Aurora MySQL DB clusters to MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from MySQL-compatible tenants of OceanBase Database to Aurora MySQL DB clusters
        • DDL synchronization between MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle databases to Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle-compatible tenants of OceanBase Database to Oracle databases
        • DDL synchronization between Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from OceanBase databases to Kafka instances
    • Data subscription
      • Create a data subscription task
      • Manage data subscription tasks
        • View details of a data subscription task
        • Configure subscription information
        • Modify the name of a data subscription task
        • View and modify subscription objects
        • View data subscription parameters
        • Set up data subscription alerts
        • Start, stop, and resume data subscription tasks
        • Clone a data subscription task
        • Release a data subscription task
      • Manage private connections for data subscriptions
      • Configure consumer subscription
      • Message formats
    • Data validation
      • Overview
      • Create a data validation task
      • Manage data validation tasks
        • View details of a data validation task
        • View and modify validation objects
        • View and modify validation parameters
        • Manage data validation tasks with tags
        • Start, pause, and resume data validation tasks
        • Clone a data validation task
        • Release a data validation task
      • Features
        • Import validation objects
        • Rename the validation object
        • Filter objects by using SQL conditions
        • Configure the matching rules for the validation object
    • Assess performance
      • Overview
      • Obtain traffic files from a database instance
      • Create a full performance assessment task
      • Create an SQL file parsing task
      • Create an SQL file replay task
      • Manage performance assessment tasks
        • View the details of a performance assessment task
        • View a performance assessment report
        • Retry and stop a performance assessment task
        • Delete a performance assessment task
      • Obtain a database instance
      • Create an access key
    • Import data
      • Import data
      • Direct load
      • Supported file formats and encoding formats for Data Import
      • Sample data introduction
    • Binlog service
      • Overview
      • Purchase the Binlog service
      • Manage Binlog Service
        • View details of the Binlog service
        • Change configuration
        • Modify the auto-scaling strategy for storage space
        • Modify the elasticity strategy for compute units
        • Disable the Binlog service
  • Security
    • OceanBase Cloud account settings
      • Modify login password
      • Multi-factor authentication
      • Manage AccessKeys
      • Time zone settings
      • Manage cloud marketplace accounts
      • Account audit
    • Organizations and projects
      • Overview
      • Manage organization information
      • Project management
        • Manage projects
        • Cross-project bidirectional authorization
        • Subscribe to project messages
      • Manage members
      • Permissions for roles
      • Cost management
        • Overview
        • Cost details
        • Manage cost units
      • Operation audit
    • Database accounts and privileges
      • Account privileges
      • Authorize cloud vendor accounts
      • AWS KMS key management
      • Support access control
    • Security and encryption
      • Set allowlist groups
      • SSL encryption
      • Transparent Data Encryption (TDE)
    • Monitoring dashboard
    • Events
  • SQL Console
    • Overview
    • Access SQL Console
    • SQL editing and execution
    • PL compilation
    • Result set editing
    • Execution analysis
    • Database object management
      • Create a table
      • Create a view
      • Create a function
      • Create a stored procedure
      • Create a program package
      • Create a trigger
      • Create a type
      • Create a sequence
      • Create a synonym
    • Session variable management
    • Functional keys in SQL Console
  • Integrations
    • Overview
    • Schema evolution
      • Liquibase
      • Flyway
    • Data ingestion
      • Canal
      • dbt
      • Debezium
      • Flink
      • Glue
      • Informatica Cloud
      • Kafka
      • Maxwell
      • SeaTunnel
      • DataWorks
      • NiFi
    • SQL development
      • DataGrip
      • DBeaver
      • Navicat
      • TablePlus
    • Orchestration
      • DolphinScheduler
      • Linkis
      • Airflow
    • Visualization
      • Grafana
      • Power BI
      • Quick BI
      • Superset
      • Tableau
    • Observability
      • Datadog
      • Prometheus
    • Database management
      • Bytebase
    • AI
      • LlamaIndex
      • Dify
      • LangChain
      • Tongyi Qianwen
      • OpenAI
      • n8n
      • Trae
      • SpringAI
      • Cline
      • Cursor
      • Continue
      • Toolbox
      • CamelAI
      • Firecrawl
      • Hugging Face
      • Ollama
      • Google Gemini
      • Cloudflare Workers AI
      • Jina AI
      • Augment Code
      • Claude Code
      • Kiro
    • Development tools
      • Cloudflare Workers
      • Vercel
  • Best practices
    • Best practices for achieving high availability through cross-cloud active-active deployment
    • High availability through cross-cloud primary-standby databases (1:1)
    • High availability through cross-cloud primary-standby databases (1:n)
    • High host CPU usage
    • Best practices for read/write splitting in OceanBase Cloud
  • References
    • System architecture
    • System management
    • Database object management
    • Database design and specification constraints
    • SQL reference
    • System views
    • Parameters and system variables
    • Error codes
    • Performance tuning
    • Open API References
      • Overview
      • Service endpoints
      • Using API
      • Open APIs
        • Cluster management
          • DescribeInstances
          • DescribeInstance
          • CreateInstance
          • DeleteInstance
          • ModifyInstanceName
          • describe-node-options
          • StopCluster
          • StartCluster
          • ModifyInstanceSpec
          • DescribeInstanceTopology
          • DescribeReadonlyInstances
          • CreateReadonlyInstance
          • ModifyReadonlyInstanceSpec
          • ModifyReadonlyInstanceDiskSize
          • ModifyReadonlyInstanceNodeNum
          • DeleteReadonlyInstance
          • DescribeInstanceAvailableRoZones
          • DescribeInstanceParameters
          • UpdateInstanceParameters
          • DescribeInstanceParametersHistory
          • ModifyInstanceTagList
          • ModifyInstanceNodeNum
        • Tenant management
          • DescribeTenants
          • DescribeTenant
          • CreateTenants
          • DeleteTenants
          • ModifyTenantName
          • ModifyTenant
          • ModifyTenantUserDescription
          • ModifyTenantUserStatus
          • GetTenantCreateConstraints
          • ModifyTenantPrimaryZone
          • GetTenantCreateCpuConstraints
          • GetTenantCreateMemConstraints
          • GetTenantModifyCpuConstraints
          • GetTenantModifyMemConstraints
          • CreateTenantSecurityIpGroup
          • DescribeTenantSecurityIpGroups
          • ModifyTenantSecurityIpGroup
          • DeleteTenantSecurityIpGroup
          • DescribeTenantPrivateLink
          • DeletePrivatelinkConnection
          • CreatePrivatelinkService
          • ConnectPrivatelinkService
          • AddPrivatelinkServiceUser
          • BatchKillProcessList
          • DescribeProcessStatsComposition
          • DescribeTenantAvailableRoZones
          • DescribeTenantAddressInfo
          • ModifyTenantReadonlyReplica
          • DescribeTenantParameters
          • UpdateTenantParameters
          • DescribeTenantParametersHistory
          • ModifyTenantTagList
        • Tenant user management
          • CreateTenantUser
          • DescribeTenantUsers
          • DeleteTenantUsers
          • ModifyTenantUserPassword
          • ModifyTenantUserRoles
        • Database management
          • CreateDatabase
          • DescribeDatabases
          • DeleteDatabases
          • ModifyDatabaseUserRoles
        • Backup and restore
          • DescribeDataBackupSet
          • DescribeRestorableTenants
          • ModifyBackupStrategy
          • CreateTenantRestoreTask
          • CreateDataBackupTask
          • DescribeOneDataBackupSet
        • Database proxy management
          • CreateTenantAddress
          • CreateTenantSingleTunnelSLBAddress
          • DeleteTenantAddress
          • DescribeTenantAddress
          • ModifyOdpClusterSpec
          • ModifyTenantAddressPort
          • ModifyTenantAddressDomainPrefix
          • ConfirmPrivatelinkConnection
          • DescribeTenantAddressInfo
        • Monitoring management
          • DescribeTenantMetrics
          • DescribeMetricsData
          • DescribeNodeMetrics
        • Diagnostic management
          • DescribeOasTopSQLList
          • DescribeOasAnomalySQLList
          • DescribeOasSlowSQLList
          • DescribeOasSQLText
          • DescribeSqlAudits
          • DescribeOutlineBinding
          • DescribeSampleSqlRawTexts
          • DescribeSQLTuningAdvices
          • DescribeOasSlowSQLSamples
          • DescribeOasSQLTrends
          • DescribeOasSQLPlanGroup
        • Security management
          • CreateSecurityIpGroup
          • DescribeInstanceSSL
          • ModifyInstanceSSL
          • DescribeTenantEncryption
          • ModifyTenantEncryption
          • ModifySecurityIps
          • DeleteSecurityIpGroup
          • DescribeTenantSecurityConfigs
          • DescribeInstanceSecurityConfigs
        • Tag management
          • DescribeTags
          • CreateTags
          • UpdateTag
          • DeleteTag
        • Historical event management
          • DescribeOperationEvents
      • Differences between ApsaraDB for OceanBase APIs and OceanBase Cloud APIs
    • Download OBClient
      • Download OBClient
      • Download OceanBase Connector/J
      • Download client ODC
      • Download OceanBase Connector/ODBC
      • Download OBClient Libs
    • Metrics References
      • Cluster database
      • Cluster hosts
      • Binlog service
      • Cross-cloud network channel connection
      • Performance and SQL
      • Transactions
      • Storage and caching
      • Proxy database
      • Proxy host
    • ODC User Guide
      • What is ODC?
        • What is ODC?
        • Limitations
      • Quick Start
        • Client ODC
          • Overview
          • Install Client ODC
          • Use Client ODC
        • Web ODC
          • Overview
          • Use Web ODC
      • Data Source Management
        • Create a data source
        • Data sources and project collaboration
        • Database O&M
          • Session management
          • Global variable management
          • Recycle bin management
      • SQL Development
        • Edit and execute SQL statements
        • Perform PL compilation and debugging
        • Edit and export the result set of an SQL statement
        • Execution analysis
        • Generate test data
        • System settings
        • Database objects
          • Table objects
            • Overview
            • Create a table
          • View objects
            • Overview
            • Create a view
            • Manage views
          • Materialized view objects
            • Overview
            • Create a materialized view
            • Manage materialized views
          • Function objects
            • Overview
            • Create a function
            • Manage functions
          • Stored procedure objects
            • Overview
            • Create a stored procedure
            • Manage stored procedures
          • Sequence objects
            • Overview
            • Create a sequence
            • Manage sequences
          • Package objects
            • Overview
            • Create a program package
            • Manage program packages
          • Trigger objects
            • Overview
            • Create a trigger
            • Manage triggers
          • Type objects
            • Overview
            • Create a type
            • Manage types
          • Synonym objects
            • Overview
            • Create a synonym
            • Manage synonyms
      • Import and Export
        • Import schemas and data
        • Export schemas and data
      • Database Change Management
        • User Permission Management
          • Users and roles
          • Automatic authorization
          • User permission management
        • Project collaboration management
        • Risk levels, risk identification rules, and approval processes
        • SQL check specifications
        • SQL window specification
        • Database change management
        • Batch database change management
        • Online schema changes
        • Synchronize shadow tables
        • Schema comparison
      • Data Lifecycle Management
        • Partitioning Plan Management
          • Manage partitioning plans
          • Set partitioning strategies
          • Examples
        • SQL plan task
      • Data Desensitization and Auditing
        • Desensitize data
        • Operation records
      • Notification Management
        • Overview
        • View notification records
        • Manage Notification Channel
          • Create a notification channel
          • View, edit, and delete a notification channel
          • Configure a custom channel
        • Manage notification rules
      • Best Practices
        • Tips for SQL development
        • Explore ODC team workspaces
        • Understanding real-time SQL diagnostics for OceanBase AP
        • OceanBase historical database solutions
        • ODC SQL check for automatic identification of high-risk operations
        • Manage and modify sharded databases and tables via ODC
        • Data masking and control practices
        • Enterprise-level control and collaboration: Safeguard every database change
    • Data Development
      • Overview
      • Workspace management
      • Worksheet management
      • Compute node pool management
      • Workflow management
      • Dashboard management
      • Manage Git repositories
      • SQL development
        • SQL editing and execution
        • Result set editing
        • Execution analysis
        • Database object management
          • Create a table
          • Create a view
          • Create a function
          • Create a stored procedure
        • Session variable management
        • Git integration
      • Sample datasets
      • Data development terms
  • Manage Billing
    • Access billing
    • View monthly bills
    • View payment details
    • View orders
    • Use vouchers for payment
    • View invoices
  • Legal Agreements
    • OceanBase Cloud Services Agreement
    • Service Level Agreement
    • OceanBase Data Processing Addendum
    • Service Level Agreement for OceanBase Cloud Migration Service


    Connect to OceanBase Cloud using Proxool connection pool

    Last Updated: 2026-04-07 08:08:33

    This topic describes how to build an application that uses the Proxool connection pool and OceanBase Connector/J to connect to OceanBase Cloud. The application can perform basic database operations, such as creating tables and inserting, deleting, updating, and querying data.

    Download the proxool-oceanbase-client sample project (Oracle-compatible mode).

    Prerequisites

    • You have registered an OceanBase Cloud account and created an instance and an Oracle-compatible tenant. For more information, see Create an instance and Create a tenant.

    • You have obtained the connection string of the target Oracle-compatible tenant. For more information, see Obtain the connection string.

    • You have installed JDK 1.8 and Maven.

    • You have installed IntelliJ IDEA.

      Note

      The code examples in this topic are run in IntelliJ IDEA Community Edition 2021.3.2. You can also use any other tool you prefer to run them.

    Procedure

    Note

    The following steps describe how to compile and run the proxool-oceanbase-client project on Windows by using IntelliJ IDEA Community Edition 2021.3.2. If you use a different operating system or IDE, the steps may vary.

    Step 1: Import the proxool-oceanbase-client project into IntelliJ IDEA

    1. Start IntelliJ IDEA.

    2. On the welcome page, click Open. Navigate to the project directory, select the root directory of the project, and click OK.


    3. IntelliJ IDEA automatically detects the project type and loads the project.

      Note

      When you import a Maven project, IntelliJ IDEA automatically detects the pom.xml file, downloads the dependency libraries declared in it, and adds them to the project.


    4. (Optional) Manually import unresolved dependencies.

      If all the dependencies in the pom.xml file are automatically imported to the project, you can skip this step.

      The Sync window of IntelliJ IDEA shows that the proxool-cglib and proxool dependencies are unresolved. The corresponding JAR files are stored in the lib folder under the root directory of the proxool-oceanbase-client project. Perform the following steps to add them to the project:

      1. In IntelliJ IDEA, choose File > Project Structure.
      2. In the left-side navigation pane, choose Modules.
      3. In the right-side pane, select the Dependencies tab. Click the + icon and choose JARs or directories.
      4. In the dialog box that appears, navigate to the lib directory that stores the jar files, select the jar files, and click OK.
      5. On the Dependencies tab, you can see the newly added jar files in the list.
      6. Click Apply or OK to save the changes.


    Step 2: Modify the database connection information in the proxool-oceanbase-client project

    Modify the database connection information in the proxool-oceanbase-client/src/main/resources/db.properties file based on the connection string information obtained in the prerequisites.

    Here is an example:

    ...
    jdbc-1.proxool.driver-url=jdbc:oceanbase://t5******.********.oceanbase.cloud:1521/test_schema001
    jdbc-1.user=test_user
    jdbc-1.password=******
    ...
    
    • The connection address is t5******.********.oceanbase.cloud.
    • The access port is 1521.
    • The name of the database to be accessed is test_schema001.
    • The tenant account is test_user.
    • The password is ******.
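    Proxool reads both the delegate-driver settings and its own pool-tuning options from this same file. The following is a fuller sketch of what db.properties can contain; the alias, driver class, pool sizes, and keep-alive SQL shown here are illustrative assumptions, not values taken from the sample project:

```properties
# Pool alias: connections are later requested as "proxool.jdbc-1" (assumed alias)
jdbc-1.proxool.alias=jdbc-1
# OceanBase Connector/J driver class (assumption; check the sample project's value)
jdbc-1.proxool.driver-class=com.oceanbase.jdbc.Driver
jdbc-1.proxool.driver-url=jdbc:oceanbase://t5******.********.oceanbase.cloud:1521/test_schema001
# Properties without the "proxool" infix are delegated to the underlying driver
jdbc-1.user=test_user
jdbc-1.password=******
# Pool tuning (illustrative values)
jdbc-1.proxool.minimum-connection-count=2
jdbc-1.proxool.maximum-connection-count=10
# Statement the house-keeping thread runs to validate idle connections (Oracle mode)
jdbc-1.proxool.house-keeping-test-sql=SELECT 1 FROM DUAL
```

    Keeping the pool-tuning keys in the same file as the credentials means the pool can be reconfigured without recompiling the application.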

    Step 3: Run the proxool-oceanbase-client project

    1. In the project navigation pane, find and expand the src/main/java/com.example directory.

    2. Right-click the Main file and choose Run 'Main.main()'.

    3. IntelliJ IDEA automatically compiles and runs the project, and displays the output result in the Run panel.


    4. You can also execute the following SQL statement in OceanBase Client (OBClient) to view the result.

      obclient [TEST_USER001]> SELECT * FROM test_schema001.test_proxool;
      

      The return result is as follows:

      +------+---------------+
      | C1   | C2            |
      +------+---------------+
      |    6 | test_update   |
      |    7 | test_insert7  |
      |    8 | test_insert8  |
      |    9 | test_insert9  |
      |   10 | test_insert10 |
      +------+---------------+
      5 rows in set
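      To open the OBClient session used above, connect with the MySQL-style flags that OBClient accepts. The host below is the masked address from db.properties; the exact user format (for example, whether a tenant name must be appended to the user) depends on your instance's connection string, so verify it in the console before connecting:

```shell
obclient -h t5******.********.oceanbase.cloud -P 1521 -u test_user -p
```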
      

    Project code introduction

    Click proxool-oceanbase-client to download the project code, which is a compressed file named proxool-oceanbase-client.zip.

    After decompressing it, you will find a folder named proxool-oceanbase-client. The directory structure is as follows:

    proxool-oceanbase-client
    ├── lib
    │    ├── proxool-0.9.1.jar
    │    └── proxool-cglib.jar
    ├── src
    │   └── main
    │       ├── java
    │       │   └── com
    │       │       └── example
    │       │           └── Main.java
    │       └── resources
    │           └── db.properties
    └── pom.xml
    

    File description:

    • lib: stores the dependency library files required by the project.
      • proxool-0.9.1.jar: the Proxool connection pool library file.
      • proxool-cglib.jar: the CGLIB library file that supports the Proxool connection pool.
    • src: the root directory for source code.
      • main: the main code directory, containing the core logic of the application.
        • java: the Java source code directory.
          • com: the Java package directory.
            • example: the package directory for the sample project.
              • Main.java: the main class file, containing the logic for creating tables and inserting, deleting, updating, and querying data.
        • resources: the resource file directory, containing configuration files.
          • db.properties: the connection pool configuration file, containing the database connection parameters.
    • pom.xml: the Maven project configuration file, used to manage project dependencies and build settings.
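    Main.java follows the standard Proxool usage pattern: register the Proxool driver class, load db.properties through Proxool's PropertyConfigurator, and then request pooled connections with the proxool.<alias> URL prefix. The following is a minimal sketch of that pattern, not the sample project's actual code; the alias jdbc-1, the table name, and the column layout are illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.logicalcobwebs.proxool.configuration.PropertyConfigurator;

public class Main {
    public static void main(String[] args) throws Exception {
        // Register the Proxool JDBC driver.
        Class.forName("org.logicalcobwebs.proxool.ProxoolDriver");
        // Build the connection pool from the settings in db.properties.
        PropertyConfigurator.configure("src/main/resources/db.properties");
        // "proxool." + alias returns a connection from the pool.
        try (Connection conn = DriverManager.getConnection("proxool.jdbc-1");
             Statement stmt = conn.createStatement()) {
            stmt.execute("INSERT INTO test_proxool VALUES (11, 'test_insert11')");
            try (ResultSet rs = stmt.executeQuery("SELECT C1, C2 FROM test_proxool")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("C1") + " | " + rs.getString("C2"));
                }
            }
        }
    }
}
```

    Closing the Connection returns it to the pool instead of tearing down the underlying session, which is the main benefit of pooling over opening a raw JDBC connection per request.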

    Code of pom.xml

    The pom.xml file is a configuration file for Maven projects. It defines the dependencies, plugins, and build rules of the project. Maven is a Java project management tool that can automatically download dependencies, compile, and package projects.

    The code in this topic's pom.xml file mainly includes the following parts:

    1. The file declaration statement.

      This statement declares that the file is an XML file, the XML version is 1.0, and the character encoding is UTF-8.

      Sample code:

      <?xml version="1.0" encoding="UTF-8"?>
      
    2. The POM namespace and POM model version.

      1. The xmlns attribute declares the POM namespace, http://maven.apache.org/POM/4.0.0.
      2. The xmlns:xsi attribute declares the XML Schema instance namespace, http://www.w3.org/2001/XMLSchema-instance.
      3. The xsi:schemaLocation attribute maps the POM namespace http://maven.apache.org/POM/4.0.0 to the location of its XSD file, http://maven.apache.org/xsd/maven-4.0.0.xsd.
      4. The <modelVersion> element specifies that the POM file uses POM model version 4.0.0.

      Sample code:

      <project xmlns="http://maven.apache.org/POM/4.0.0"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
      
       <!-- Other configurations -->
      
      </project>
      
    3. Basic information.

      1. The <groupId> element specifies the project's group ID as com.example.
      2. The <artifactId> element specifies the project's name as proxool-oceanbase-client.
      3. The <version> element specifies the project's version as 1.0-SNAPSHOT.

      Sample code:

          <groupId>com.example</groupId>
          <artifactId>proxool-oceanbase-client</artifactId>
          <version>1.0-SNAPSHOT</version>
      
    4. Project source file attributes.

      The maven-compiler-plugin is configured for Maven, with both the source and target Java versions set to 8. This means the project's source code uses Java 8 features, and the compiled bytecode is compatible with the Java 8 runtime environment. This setting ensures that Java 8 syntax and features are handled correctly at both compile time and runtime.

      Note

      Java 1.8 and Java 8 are different names for the same version.

      Sample code:

          <build>
              <plugins>
                  <plugin>
                      <groupId>org.apache.maven.plugins</groupId>
                      <artifactId>maven-compiler-plugin</artifactId>
                      <configuration>
                          <source>8</source>
                          <target>8</target>
                      </configuration>
                  </plugin>
              </plugins>
          </build>
      
    5. Project dependencies.

      1. The oceanbase-client dependency library is added to connect to and operate on the database:

        1. The <groupId> element specifies the dependency's group ID as com.oceanbase.
        2. The <artifactId> element specifies the dependency's name as oceanbase-client.
        3. The <version> element specifies the dependency's version as 2.4.2.

        Note

        This part of the code defines the project's dependency on OceanBase Connector/J V2.4.2. For information about other versions, see OceanBase JDBC Driver.

        Sample code:

                <dependency>
                    <groupId>com.oceanbase</groupId>
                    <artifactId>oceanbase-client</artifactId>
                    <version>2.4.2</version>
                </dependency>
        
      2. The proxool-cglib dependency library is added to support the CGLib library for the Proxool connection pool:

        1. The <groupId> element specifies the dependency's group ID as proxool.
        2. The <artifactId> element specifies the dependency's name as proxool-cglib.
        3. The <version> element specifies the dependency's version as 0.9.1.

        Sample code:

                <dependency>
                    <groupId>proxool</groupId>
                    <artifactId>proxool-cglib</artifactId>
                    <version>0.9.1</version>
                </dependency>
        
      3. The proxool dependency library is added as the core library for the Proxool connection pool:

        1. The <groupId> element specifies the dependency's group ID as proxool.
        2. The <artifactId> element specifies the dependency's name as proxool.
        3. The <version> element specifies the dependency's version as 0.9.1.

        Sample code:

                <dependency>
                    <groupId>proxool</groupId>
                    <artifactId>proxool</artifactId>
                    <version>0.9.1</version>
                </dependency>
        
      4. The commons-logging dependency library is added as a general-purpose logging library for logging in applications:

        1. The <groupId> element specifies the dependency's group ID as commons-logging.
        2. The <artifactId> element specifies the dependency's name as commons-logging.
        3. The <version> element specifies the dependency's version as 1.2.

        Sample code:

                <dependency>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                    <version>1.2</version>
                </dependency>
        

    db.properties code introduction

    db.properties is the connection pool configuration file used in the example in this topic. It contains the configuration attributes of the connection pool.

    Note

    When you configure the Proxool connection pool by using a .properties file, you must follow these rules:

    1. Identify each connection pool by a custom name that starts with jdbc. You can choose this name freely, as long as it uniquely identifies the pool.
    2. Prefix attributes of the Proxool connection pool itself with proxool..
    3. Attributes whose names do not start with the jdbc prefix are ignored by Proxool.
    4. Attributes without the proxool. prefix are delegated, that is, passed through to the underlying database driver.

    For more information about how to configure the Proxool connection pool, see Configuration.
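    The naming rules above can be illustrated with a short sketch. The pool name jdbc-1 matches this topic's example; the user, password, and unrelated.setting values below are hypothetical:

    ```properties
    # The custom pool name must start with "jdbc"; it identifies this pool.
    # Attributes with the proxool. prefix configure the pool itself:
    jdbc-1.proxool.alias=TEST
    jdbc-1.proxool.maximum-connection-count=8
    # Attributes without the proxool. prefix are delegated to the JDBC driver:
    jdbc-1.user=app_user
    jdbc-1.password=app_password
    # Lines that do not start with the jdbc prefix are ignored by Proxool:
    unrelated.setting=ignored
    ```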

    The db.properties file in this topic is an example of a properties file used to configure the connection pool attributes of a data source named jdbc-1. It mainly contains the following parts:

    1. Sets the alias of the data source to TEST.

      Sample code:

      jdbc-1.proxool.alias=TEST
      
    2. Configures the database connection parameters.

      1. Sets the class name of the driver to com.oceanbase.jdbc.Driver, which is the class name of the OceanBase Database JDBC driver.
      2. Sets the database connection URL, including the host IP address, port number, and schema to be accessed.
      3. Sets the username of the database.
      4. Sets the password of the database.

      Sample code:

      jdbc-1.proxool.driver-class=com.oceanbase.jdbc.Driver
      jdbc-1.proxool.driver-url=jdbc:oceanbase://$host:$port/$schema_name
      jdbc-1.user=$user_name
      jdbc-1.password=$password
      

      Parameter description:

      • $host: the connection address of the OceanBase database, specified by the -h parameter in the connection string.
      • $port: the connection port of the OceanBase database, specified by the -P parameter in the connection string.
      • $schema_name: the name of the database (schema) to be accessed, specified by the -D parameter in the connection string.
      • $user_name: the username, specified by the -u parameter in the connection string.
      • $password: the password, specified by the -p parameter in the connection string.
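      For example, with purely illustrative values (host 10.0.0.5, port 2881, schema test, user test_user — substitute your own connection details), the four lines might read:

      ```properties
      jdbc-1.proxool.driver-class=com.oceanbase.jdbc.Driver
      jdbc-1.proxool.driver-url=jdbc:oceanbase://10.0.0.5:2881/test
      jdbc-1.user=test_user
      jdbc-1.password=pa55w0rd
      ```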
    3. Configures other Proxool connection pool parameters.

      1. Sets the maximum number of connections in the connection pool to 8.
      2. Sets the minimum number of connections in the connection pool to 5.
      3. Sets the prototype count, that is, the number of spare connections the pool keeps available, to 4.
      4. Enables verbose mode for the connection pool to log more detailed information.
      5. Sets the statistics sampling intervals of the connection pool to 10 seconds, 1 minute, and 1 day.
      6. Sets the log level for connection pool statistics to error.

      Sample code:

      jdbc-1.proxool.maximum-connection-count=8
      jdbc-1.proxool.minimum-connection-count=5
      jdbc-1.proxool.prototype-count=4
      jdbc-1.proxool.verbose=true
      jdbc-1.proxool.statistics=10s,1m,1d
      jdbc-1.proxool.statistics-log-level=error
      

    Notice

    The specific attribute (parameter) configurations depend on the project requirements and the characteristics of the database. We recommend that you adjust and configure the parameters based on your actual situation. For more information about Proxool connection pool parameters, see Properties.

    Common configuration parameters:

    • alias (default: N/A): Specifies the alias of the connection pool. This is useful for identifying one pool among several.
    • driver-class (default: N/A): Specifies the class name of the database driver.
    • driver-url (default: N/A): Specifies the database connection URL, including the host IP address, port number, schema to be accessed, and optional database driver parameters.
    • user (default: N/A): Specifies the database username. It is written without the proxool. prefix (jdbc-1.user in this topic's example) because it is delegated to the driver.
    • password (default: N/A): Specifies the database password.
    • maximum-connection-count (default: 15): Specifies the maximum number of connections in the connection pool; the pool creates at most this many connections.
    • minimum-connection-count (default: 5): Specifies the minimum number of connections; the pool always maintains at least this many.
    • prototype-count (default: 0): Specifies the number of spare (prototype) connections the pool tries to keep available. With the default of 0, the pool does not proactively build extra connections.
    • verbose (default: false): Specifies whether the pool produces detailed output; the default of false means quiet mode. When set to true, the pool outputs extra information for debugging and monitoring, such as pool status, connection creation and release, and connection usage. This helps developers verify that connection allocation and recycling behave as expected, and is very useful for troubleshooting connection leaks, performance issues, and tuning. In a production environment, keeping verbose at false is generally recommended, because verbose output is voluminous and may affect system performance and log file size; enable it only temporarily when debugging or monitoring.
    • statistics (default: null): Specifies the sampling intervals for pool usage statistics as a comma-separated list of durations, such as 10s,15m (sample every 10 seconds and every 15 minutes). Valid units are s (seconds), m (minutes), h (hours), and d (days). With the default of null, no statistics are collected. When set, the pool periodically samples figures such as the number of active connections, idle connections, and connection requests; the intervals determine the granularity and frequency of the statistics.
    • statistics-log-level (default: null): Specifies the log level at which statistics are recorded. Valid levels are DEBUG, INFO, WARN, ERROR, and FATAL. With the default of null, no statistics logs are written.
    • test-after-use (default: N/A): Specifies whether to test each connection after it is used. If set to true and house-keeping-test-sql is defined, every connection is tested when it is closed (that is, returned to the pool); connections that fail the test are discarded. Testing on return lets the pool promptly detect and remove broken connections, so the application does not receive invalid connections, which improves stability and reliability. Note that this feature requires house-keeping-test-sql to be set, because that attribute defines the SQL statement used for the test.
    • house-keeping-test-sql (default: N/A): Specifies the SQL statement used to test idle connections in the pool. When the housekeeping thread finds an idle connection, it runs this statement against it. The statement should execute very quickly, such as checking the current date. If this attribute is not defined, no connection testing is performed. In MySQL-compatible mode you can use SELECT CURRENT_DATE or SELECT 1; in Oracle-compatible mode, SELECT sysdate FROM DUAL or SELECT 1 FROM DUAL.
    • trace (default: false): Specifies whether each SQL call is logged. When set to true, every SQL call is recorded at the DEBUG level together with its execution time; you can also register a ConnectionListener (see ProxoolFacade) to receive this information. Tracing can generate a large volume of log output, especially under high concurrency with frequent SQL calls, so use it cautiously in production to avoid excessive logs and unnecessary performance impact.
    • maximum-connection-lifetime (default: 4 hours): Specifies the maximum lifetime of a connection in milliseconds, that is, the longest a connection may exist, from creation to destruction, before the pool destroys it. Limiting connection lifetime prevents resource leaks and connections lingering in the pool for too long.
    • maximum-active-time (default: 5 minutes): Specifies the maximum active time for a thread. If the housekeeping thread of the connection pool detects that a thread has been active for longer than this setting, it terminates the thread, so ensure this value is greater than your slowest expected response time. The housekeeping thread runs periodically, as configured by house-keeping-sleep-time, and trims connections that have been idle for too long down toward minimum-connection-count.
    • maximum-new-connections (default: N/A): Specifies the maximum number of new connections the pool may build simultaneously. This attribute is deprecated; use simultaneous-build-throttle instead.
    • simultaneous-build-throttle (default: 10): Specifies the maximum number of connections the pool may be building at any one time, that is, the upper limit on new connections that have been started but are not yet available. Because connections may be built on demand from multiple threads, and there is a delay between deciding to build a connection and the connection becoming available, this limit prevents a burst of threads from all triggering new connections at once. When the limit is reached, further requests for new connections block until a connection becomes available or the configured timeout elapses. A suitable value balances the pool's concurrency against its resource consumption; the default of 10 means at most 10 connections can be under construction simultaneously.
    • overload-without-refusal-lifetime (default: 60 seconds): Helps determine the status of the connection pool. If a connection request has been refused within this threshold (in milliseconds), the pool is considered overloaded.
    • test-before-use (default: N/A): Specifies whether to test each connection before it is provided to the application. If set to true, each connection is tested with the statement defined by house-keeping-test-sql before being served; a connection that fails the test is discarded and another available connection is chosen. If every pooled connection fails the test, a new connection is created; if that connection also fails, an SQLException is thrown. Note that for MySQL databases you must also add autoReconnect=true to the connection parameters; otherwise reconnection will not work even with test-before-use set to true.
    • fatal-sql-exception (default: null): Specifies a comma-separated list of message fragments used to detect fatal SQL exceptions. When an SQLException occurs, its message is compared against these fragments; if it contains any of them (case-sensitive), the exception is treated as fatal and the connection is discarded. In any case the exception is rethrown, so the caller is informed of what occurred. You can also configure a different exception to be thrown (see fatal-sql-exception-wrapper-class), which lets you customize how SQL exceptions are handled.
    • fatal-sql-exception-wrapper-class (default: null): Specifies a wrapper class for fatal SQL exceptions. By default (null), the original SQLException is thrown to the caller unwrapped. With this property you can wrap the SQLException in another exception of your choosing, as long as that class extends SQLException or RuntimeException. If you do not want to write your own, Proxool provides two ready-made classes: set this property to org.logicalcobwebs.proxool.FatalSQLException or org.logicalcobwebs.proxool.FatalRuntimeException as needed.
    • house-keeping-sleep-time (default: 30 seconds): Specifies how long the housekeeping (maintenance) thread sleeps between runs. This thread checks the status of all connections and decides whether to destroy or create connections; with the default, it performs its maintenance tasks every 30 seconds.
    • injectable-connection-interface (default: N/A): Allows Proxool to implement the methods defined in the delegated Connection object.
    • injectable-statement-interface (default: N/A): Allows Proxool to implement the methods defined in the delegated Statement object.
    • injectable-prepared-statement-interface (default: N/A): Allows Proxool to implement the methods defined in the delegated PreparedStatement object.
    • injectable-callable-statement-interface (default: N/A): Allows Proxool to implement the methods defined in the delegated CallableStatement object.
    • jndi-name (default: N/A): Specifies the name under which the connection pool is registered in JNDI (Java Naming and Directory Interface).
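    Tying several of these parameters together, here is a sketch (values illustrative, not prescriptive) of a pool configured to validate its connections. SELECT 1 assumes MySQL-compatible mode, and the two durations are given in milliseconds (30 seconds and 4 hours):

    ```properties
    jdbc-1.proxool.house-keeping-test-sql=SELECT 1
    jdbc-1.proxool.test-before-use=true
    jdbc-1.proxool.test-after-use=true
    jdbc-1.proxool.house-keeping-sleep-time=30000
    jdbc-1.proxool.maximum-connection-lifetime=14400000
    jdbc-1.proxool.simultaneous-build-throttle=10
    ```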

    Introduction to Main.java

    The Main.java file is part of the sample program, which demonstrates how to obtain a database connection through a Proxool connection pool and perform a series of database operations, including creating tables, inserting data, deleting data, updating data, querying data, and printing the query results.

    The code in the Main.java file in this topic mainly includes the following parts:

    1. Import the required classes and interfaces.

      Define the package where the code is located and import the classes and interfaces related to Proxool and JDBC. These classes are used to implement the configuration and management of the database connection pool and to execute SQL statements. By using the Proxool connection pool, you can improve the performance and reliability of database operations. The specific steps are as follows:

      1. Define the package where the code is located as com.example, which is used to store the current Java class.
      2. Import the org.logicalcobwebs.proxool.configuration.PropertyConfigurator class, which is used to configure Proxool.
      3. Import the java.io.InputStream class, which is used to read the configuration file.
      4. Import the java.sql.Connection class, which represents a database connection.
      5. Import the java.sql.DriverManager class, which is used to obtain a database connection.
      6. Import the java.sql.ResultSet class, which is used to obtain query results.
      7. Import the java.sql.Statement class, which is used to execute SQL statements.
      8. Import the java.util.Properties class, which is used to load and read the configuration file.

      Sample code:

      package com.example;
      
      import org.logicalcobwebs.proxool.configuration.PropertyConfigurator;
      import java.io.InputStream;
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;
      import java.util.Properties;
      
    2. Define the class name and method.

      Define the entry method of the Java program. In this method, the database connection information is obtained by reading the configuration file. After the database connection is established through the Proxool driver, the defined methods are called in sequence to execute DDL statements, DML statements, and query statements, and any exceptions that occur are caught and printed. The specific steps are as follows:

      1. Define a public class named Main.

        1. Define a private static constant named DB_PROPERTIES_FILE, which indicates the path of the database configuration (property) file. This constant can be referenced in the code to load and read the property file.

        2. Define a public static method main, which is the starting point of the program.

          1. Define a code block for capturing exceptions that may occur.

            1. Create a Properties object to read the properties in the configuration file.
            2. Use the class loader of the Main class to obtain the input stream of the configuration file.
            3. Load the configuration file by using the loaded input stream and load the properties into the Properties object.
            4. Configure the connection pool by using the loaded properties.
            5. Dynamically load the Proxool database driver.
            6. Establish a database connection by using the Proxool driver.
            7. Create a Statement object.
            8. Call the defined method executeDDLStatements() to execute DDL statements, which are statements for creating tables.
            9. Call the defined method executeDMLStatements() to execute DML statements, which are statements for inserting, updating, and deleting data.
            10. Call the defined method executeQueryStatements() to execute query statements and obtain data.
          2. Capture and print the exception information that may occur.

      2. Define methods for creating tables, executing DML statements, and querying data.

      Sample code:

      public class Main {
          private static final String DB_PROPERTIES_FILE = "/db.properties";
      
          public static void main(String[] args) {
              try {
                  Properties properties = new Properties();
                  InputStream is = Main.class.getResourceAsStream(DB_PROPERTIES_FILE);
                  properties.load(is);
                  PropertyConfigurator.configure(properties);
      
                  Class.forName("org.logicalcobwebs.proxool.ProxoolDriver");
                  try (Connection conn = DriverManager.getConnection("proxool.TEST");
                      Statement stmt = conn.createStatement()) {
                      executeDDLStatements(stmt);
                      executeDMLStatements(stmt);
                      executeQueryStatements(stmt);
                  }
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }
      
          // Define the method for creating tables.
          // Define the method for executing DML statements.
          // Define the method for querying data.
      }
      
    3. Define the method for creating tables.

      Define a private static method executeDDLStatements() to execute DDL (data definition language) statements, including statements for creating tables. The specific steps are as follows:

      1. Define a private static method executeDDLStatements() that receives a Statement object as a parameter and may throw an Exception exception.
      2. Use the execute() method to execute SQL statements and create a table named test_proxool, which has two columns, c1 and c2, of the NUMBER and VARCHAR2(32) types, respectively.

      Sample code:

          private static void executeDDLStatements(Statement stmt) throws Exception {
              stmt.execute("CREATE TABLE test_proxool (c1 NUMBER, c2 VARCHAR2(32))");
          }
      
    4. Define the method for executing DML statements.

      Define a private static method executeDMLStatements() to execute DML (data manipulation language) statements, including statements for inserting, deleting, and updating data. The specific steps are as follows:

      1. Define a private static method executeDMLStatements() that receives a Statement object as a parameter and throws an Exception exception if an exception occurs during execution.
      2. Use a for loop to iterate from 1 to 10. In the loop, use the execute() method to execute SQL insert statements and insert the variable i and the corresponding string value into the test_proxool table.
      3. Execute an SQL delete statement to delete rows from the test_proxool table where the value of the c1 column is less than or equal to 5.
      4. Execute an SQL update statement to update the value of the c2 column to test_update for rows in the test_proxool table where the value of the c1 column is 6.

      Sample code:

          private static void executeDMLStatements(Statement stmt) throws Exception {
              for (int i = 1; i <= 10; i++) {
                  stmt.execute("INSERT INTO test_proxool VALUES (" + i + ",'test_insert" + i + "')");
              }
              stmt.execute("DELETE FROM test_proxool WHERE c1 <= 5");
              stmt.execute("UPDATE test_proxool SET c2 = 'test_update' WHERE c1 = 6");
          }
      
    5. Define a method for querying data.

      Define a private static method executeQueryStatements() to execute SELECT queries and process the results. The steps are as follows:

      1. Define a private static method executeQueryStatements() that receives a Statement object as a parameter. If an exception occurs during execution, the method will throw an Exception.
      2. Use the executeQuery() method to execute the SELECT query statement and store the results in a ResultSet object rs. In this case, the query returns all data from the test_proxool table. Use the try-with-resources statement to ensure that the ResultSet is automatically closed after it is no longer needed.
      3. Use a while loop with the next() method to iterate over the rows in the ResultSet object rs. Each call to rs.next() advances the cursor to the next row and returns true if a row is available, or false once all rows have been consumed. The loop body therefore processes one row per iteration and ends when rs.next() returns false.
      4. Use the getInt() and getString() methods to retrieve the values of specified columns in the current row and print them to the console. In this case, the values of the c1 and c2 columns are printed. The getInt() method is used to retrieve integer values, and the getString() method is used to retrieve string values.

      Code:

          private static void executeQueryStatements(Statement stmt) throws Exception {
              try (ResultSet rs = stmt.executeQuery("SELECT * FROM test_proxool")) {
                  while (rs.next()) {
                      System.out.println(rs.getInt("c1") + "   " + rs.getString("c2"));
                  }
              }
          }
      

    Complete code

    pom.xml
    db.properties
    Main.java
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
    
    <groupId>com.example</groupId>
        <artifactId>proxool-oceanbase-client</artifactId>
        <version>1.0-SNAPSHOT</version>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                        <source>8</source>
                        <target>8</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    
        <dependencies>
            <dependency>
                <groupId>com.oceanbase</groupId>
                <artifactId>oceanbase-client</artifactId>
                <version>2.4.2</version>
            </dependency>
            <dependency>
                <groupId>proxool</groupId>
                <artifactId>proxool-cglib</artifactId>
                <version>0.9.1</version>
            </dependency>
            <dependency>
                <groupId>proxool</groupId>
                <artifactId>proxool</artifactId>
                <version>0.9.1</version>
            </dependency>
            <dependency>
                <groupId>commons-logging</groupId>
                <artifactId>commons-logging</artifactId>
                <version>1.2</version>
            </dependency>
        </dependencies>
    </project>
    
    #alias: the alias of the data source
    jdbc-1.proxool.alias=TEST
    #driver-class: driver name
    jdbc-1.proxool.driver-class=com.oceanbase.jdbc.Driver
    #driver-url: the JDBC connection URL; the user name and password are specified by the jdbc-1.user and jdbc-1.password properties below
    jdbc-1.proxool.driver-url=jdbc:oceanbase://$host:$port/$schema_name
    jdbc-1.user=$user_name
    jdbc-1.password=$password
    #The maximum number of database connections. The default is 15
    jdbc-1.proxool.maximum-connection-count=8
    #The minimum number of database connections. The default is 5
    jdbc-1.proxool.minimum-connection-count=5
    #The number of spare (idle) connections that the pool tries to keep available. If the pool currently holds fewer idle connections than this value, new connections are built, provided the maximum connection count is not exceeded. For example, with 3 active connections, 2 idle connections, and a prototype-count of 4, the pool builds 2 more connections. Unlike minimum-connection-count, which counts active connections as well, prototype-count counts only spare connections
    jdbc-1.proxool.prototype-count=4
    #verbose: whether to log detailed information. Boolean value
    jdbc-1.proxool.verbose=true
    #statistics: the intervals at which connection pool usage statistics are produced, for example "10s,1m,1d"
    jdbc-1.proxool.statistics=10s,1m,1d
    #statistics-log-level: the log level at which statistics are written, such as "ERROR" or "INFO"
    jdbc-1.proxool.statistics-log-level=error
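    The prototype-count comment above boils down to simple arithmetic: build enough new connections to restore the configured number of idle connections, without exceeding maximum-connection-count. A minimal pure-Java sketch of that logic (the class and method names here are ours for illustration, not a Proxool API):

```java
public class PrototypeCountDemo {

    // How many new connections the pool would build to keep
    // `prototypeCount` idle connections available, without exceeding
    // `maximumConnectionCount`. Illustrative only, not Proxool's code.
    public static int connectionsToBuild(int active, int available,
                                         int prototypeCount, int maximumConnectionCount) {
        int missingIdle = prototypeCount - available;                  // idle connections still missing
        int headroom = maximumConnectionCount - (active + available);  // room left in the pool
        return Math.max(0, Math.min(missingIdle, headroom));
    }

    public static void main(String[] args) {
        // The example from the comment: 3 active, 2 idle, prototype-count 4,
        // maximum 8 -> the pool builds 2 more connections.
        System.out.println(connectionsToBuild(3, 2, 4, 8)); // prints 2
    }
}
```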
    
    Main.java

    package com.example;
    
    import org.logicalcobwebs.proxool.configuration.PropertyConfigurator;
    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;
    
    public class Main {
        private static final String DB_PROPERTIES_FILE = "/db.properties";
    
        public static void main(String[] args) {
            try {
                Properties properties = new Properties();
            try (InputStream is = Main.class.getResourceAsStream(DB_PROPERTIES_FILE)) {
                properties.load(is);
            }
                PropertyConfigurator.configure(properties);
    
                Class.forName("org.logicalcobwebs.proxool.ProxoolDriver");
                try (Connection conn = DriverManager.getConnection("proxool.TEST");
                    Statement stmt = conn.createStatement()) {
                    executeDDLStatements(stmt);
                    executeDMLStatements(stmt);
                    executeQueryStatements(stmt);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    
        private static void executeDDLStatements(Statement stmt) throws Exception {
            stmt.execute("CREATE TABLE test_proxool (c1 NUMBER, c2 VARCHAR2(32))");
        }
    
        private static void executeDMLStatements(Statement stmt) throws Exception {
            for (int i = 1; i <= 10; i++) {
                stmt.execute("INSERT INTO test_proxool VALUES ("+ i +",'test_insert" + i + "')");
            }
            stmt.execute("DELETE FROM test_proxool WHERE c1 <= 5");
            stmt.execute("UPDATE test_proxool SET c2 = 'test_update' WHERE c1 = 6");
        }
    
        private static void executeQueryStatements(Statement stmt) throws Exception {
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM test_proxool")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("c1") + "   " + rs.getString("c2"));
                }
            }
        }
    }
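    To make the expected query output concrete: executeDMLStatements inserts rows 1 through 10, deletes the rows with c1 <= 5, and updates row 6, so the final SELECT returns rows 6 through 10 with row 6 reading 'test_update'. A small in-memory walkthrough of that sequence (plain Java, no database or Proxool required):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Replays the DML sequence from Main.java against an in-memory map
// keyed by c1, to show what the final SELECT is expected to return.
public class DmlWalkthrough {

    public static Map<Integer, String> run() {
        Map<Integer, String> table = new LinkedHashMap<>();
        for (int i = 1; i <= 10; i++) {
            table.put(i, "test_insert" + i);      // INSERT INTO test_proxool VALUES (i, 'test_insert<i>')
        }
        table.keySet().removeIf(c1 -> c1 <= 5);   // DELETE FROM test_proxool WHERE c1 <= 5
        table.put(6, "test_update");              // UPDATE test_proxool SET c2 = 'test_update' WHERE c1 = 6
        return table;
    }

    public static void main(String[] args) {
        // Rows 6..10 remain; row 6 carries the updated value.
        run().forEach((c1, c2) -> System.out.println(c1 + "   " + c2));
    }
}
```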
    

    References

    • For more information about OceanBase Connector/J, see OceanBase JDBC driver.
    • For more information about using the Proxool connection pool, see Introduction for Users.
