
OceanBase

A unified distributed database ready for your transactional, analytical, and AI workloads.

DEPLOY YOUR WAY

OceanBase Cloud

The best way to deploy and scale OceanBase

OceanBase Enterprise

Run and manage OceanBase on your own infrastructure

TRY OPEN SOURCE

OceanBase Community Edition

The free, open-source distributed database

OceanBase seekdb

An open-source, AI-native search database

Customer Stories

Real-world success stories from enterprises across diverse industries.

View All
BY USE CASES

Mission-Critical Transactions

Global & Multicloud Applications

Elastic Scaling for Peak Traffic

Real-time Analytics

Active Geo-redundancy

Database Consolidation

Resources

Comprehensive knowledge hub for OceanBase.

Blog

Live Demos

Training & Certification

Documentation

Official technical guides, tutorials, API references, and manuals for all OceanBase products.

View All
PRODUCTS

OceanBase Cloud

OceanBase Database

Tools

Connectors and Middleware

QUICK START

OceanBase Cloud

OceanBase Database

BEST PRACTICES

Practical guides for using OceanBase more effectively.

Company

Learn more about OceanBase – our company, partnerships, and trust and security initiatives.

About OceanBase

Partner

Trust Center

Contact Us

All Products
    • Databases
    • OceanBase Database
    • OceanBase Cloud
    • OceanBase TuGraph
    • Interactive Tutorials
    • OceanBase Best Practices
    • Tools
    • OceanBase Cloud Platform
    • OceanBase Migration Service
    • OceanBase Developer Center
    • OceanBase Migration Assessment
    • OceanBase Admin Tool
    • OceanBase Loader and Dumper
    • OceanBase Deployer
    • Kubernetes operator for OceanBase
    • OceanBase Diagnostic Tool
    • OceanBase Binlog Service
    • Connectors and Middleware
    • OceanBase Database Proxy
    • Embedded SQL in C for OceanBase
    • OceanBase Call Interface
    • OceanBase Connector/C
    • OceanBase Connector/J
    • OceanBase Connector/ODBC
    • OceanBase Connector/NET

OceanBase Cloud

  • Product Updates & Announcements
    • What's new
      • Release notes for 2026
      • Release notes for 2025
      • Release notes for 2024
      • Release history
    • Product announcements
      • Data development module deprecation notice
      • Optimization of Backup and Restore commercialization strategy
      • Cross-AZ data transfer billing (OceanBase Cloud on AWS)
      • Database Proxy pricing update
      • AWS instance pricing adjustment
  • Product Introduction
    • Overview
    • Management mode and scenarios
    • Core features
      • High availability with cross-cloud active-active architecture
      • High availability with cross-cloud primary-standby databases
      • Multi-level caching in shared storage
      • Multi-layer online scaling and on-demand adjustment
    • Deployment modes
    • Storage architecture
    • Product specifications
    • Product billing
      • Overview
      • Instance billing
        • Tencent Cloud instance billing
        • Alibaba Cloud instance billing
        • Huawei Cloud instance billing
        • AWS instance billing
        • GCP instance billing
      • Backup and restore billing
      • SQL audit billing
      • Migrations billing
      • Database proxy billing
      • Binlog service billing
      • Overview of OceanBase Cloud support plans
      • Read-only replica billing
    • Supported database versions
  • Get Started
    • Get started with a transactional instance
    • Get started with an analytical instance
    • Get started with a Key-Value instance
  • Work with Transactional Instances
    • Overview
    • Create an instance
      • Overview
      • Create via OceanBase Cloud official website
      • Create via AWS Marketplace
      • Create via GCP Marketplace
      • Create via Huawei Cloud Marketplace
      • Create via Alibaba Cloud Marketplace
      • Create via Azure Marketplace
    • Connect to an instance
      • MySQL compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using Alibaba Cloud VPC
          • Connect using a public IP address
          • Connect using a Huawei Cloud peering connection
        • Connect with clients
          • Connect to OceanBase Cloud by using Client ODC
          • Connect to OceanBase Cloud by using a MySQL client
          • Connect to OceanBase Cloud by using OBClient
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud by using Spring Boot
            • Connect to OceanBase Cloud by using Spring Batch
            • Connect to OceanBase Cloud by using Spring JDBC
            • Connect to OceanBase Cloud by using Spring Data JPA
            • Connect to OceanBase Cloud by using Hibernate
            • Sample program for connecting to OceanBase Cloud
            • Connect to OceanBase Cloud by using Connector/J
            • Connect to OceanBase Cloud by using Testcontainers
          • Python
            • Connect to OceanBase Cloud by using mysqlclient
            • Connect to OceanBase Cloud by using PyMySQL
            • Connect to OceanBase Cloud by using MySQL Connector/Python
            • Connect to OceanBase Cloud by using SQLAlchemy
            • Connect to OceanBase Cloud by using Django
            • Connect to OceanBase Cloud by using peewee
          • C
            • Use MySQL Connector/C to connect to OceanBase Cloud
          • Go
            • Connect to OceanBase Cloud using the Go-SQL-Driver/MySQL driver
            • Connect to OceanBase Cloud using GORM
          • PHP
            • Use the EXT driver to connect to OceanBase Cloud
            • Connect to OceanBase Cloud by using the MySQLi driver
            • Use the PDO driver to connect to OceanBase Cloud
          • Rust
            • Rust application example for connecting to OceanBase Cloud
            • SeaORM example for connecting to OceanBase Cloud
          • Ruby
            • ActiveRecord sample application for OceanBase Cloud
            • Connect to OceanBase Cloud by using mysql2
            • Connect to OceanBase Cloud by using Sequel
        • Use database connection pool
          • Database connection pool configuration
          • Connect to OceanBase Cloud by using a Tomcat connection pool
          • Connect to OceanBase Cloud by using a C3P0 connection pool
          • Connect to OceanBase Cloud by using a Proxool connection pool
          • Connect to OceanBase Cloud by using a HikariCP connection pool
          • Connect to OceanBase Cloud by using a DBCP connection pool
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
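The MySQL-compatible connection pages above all come down to a standard MySQL wire connection: take the host, port, user, and database from the instance's "Get connection string" page and hand them to any MySQL driver. A minimal sketch with PyMySQL; the endpoint, account, and database names here are placeholders, not real OceanBase Cloud values:

```python
# Sketch: connecting to an OceanBase Cloud MySQL-mode instance with PyMySQL.
# Every value below is a placeholder; copy the real host, port, user, and
# database from the instance's "Get connection string" page in the console.

def connection_params(host, user, password, database, port=3306):
    """Assemble keyword arguments for pymysql.connect()."""
    return {
        "host": host,
        "port": port,          # the console shows the actual port to use
        "user": user,
        "password": password,
        "database": database,
        "charset": "utf8mb4",
    }

def demo_query(params):
    """Run a trivial query; requires a reachable instance."""
    import pymysql  # third-party driver: pip install pymysql

    # MySQL mode speaks the MySQL wire protocol, so no OceanBase-specific
    # driver is needed; OceanBase Connector/J or mysqlclient work the same way.
    conn = pymysql.connect(**params)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            return cur.fetchone()
    finally:
        conn.close()

params = connection_params(
    host="obcloud-instance.example.com",  # placeholder endpoint
    user="app_user",                      # placeholder account
    password="***",
    database="test",
)
```

With a live instance, `demo_query(params)` returns `(1,)`; the same `params` dictionary also feeds any of the connection pools listed above (HikariCP, Druid, and the rest consume the equivalent JDBC properties).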
      • Oracle compatible mode
        • Overview
        • Get connection string
          • Overview
          • Connect using AWS PrivateLink
          • Connect using Azure Private Link
          • Connect using Google Cloud Private Service Connect
          • Connect using Huawei Cloud VPC Endpoint
          • Connect using a public IP address
        • Connect with clients
          • Connect to OceanBase Cloud by using OBClient
          • Connect to OceanBase Cloud by using Client ODC
        • Connect with drivers
          • Java
            • Connect to OceanBase Cloud using OceanBase Connector/J
            • Connect to OceanBase Cloud by using Spring Boot
            • Connect to OceanBase Cloud by using Spring Batch
            • Connect to OceanBase Cloud using Spring JDBC
            • Connect to OceanBase Cloud by using Spring Data JPA
            • Connect to OceanBase Cloud by using Hibernate
            • Use MyBatis to connect to OceanBase Cloud
            • Use JFinal to connect to OceanBase Cloud
          • Python
            • Python Driver for Oracle Mode
          • C
            • Connect to OceanBase Cloud using OceanBase Connector/C
            • Connect to OceanBase Cloud using OceanBase Connector/ODBC
            • Use SqlSugar to connect to OceanBase Cloud
        • Use database connection pool
          • Database connection pool configuration
          • Connect to OceanBase Cloud by using a Tomcat connection pool
          • Connect to OceanBase Cloud by using a C3P0 connection pool
          • Connect to OceanBase Cloud by using a Proxool connection pool
          • Connect to OceanBase Cloud by using a HikariCP connection pool
          • Connect to OceanBase Cloud by using a DBCP connection pool
          • Connect to OceanBase Cloud by using Commons Pool
          • Connect to OceanBase Cloud by using a Druid connection pool
    • Developer guide
      • MySQL compatible mode
        • Plan database objects
          • Create a database
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use the CASE conditional operator in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
            • Use the SELECT ... LOCK IN SHARE MODE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
            • Release a savepoint
          • Commit a transaction
          • Roll back a transaction
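The transaction pages above follow standard SQL semantics: start a transaction, optionally mark savepoints, then commit, roll back entirely, or roll back to a savepoint. To illustrate just the statement flow (not an OceanBase driver), Python's bundled sqlite3 accepts the same `BEGIN` / `SAVEPOINT` / `ROLLBACK TO` / `RELEASE` / `COMMIT` statements:

```python
# Illustration of the transaction flow only: sqlite3 (stdlib) understands the
# same savepoint statements described above. Against OceanBase's MySQL mode,
# the identical statements would be issued through a MySQL driver.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

cur.execute("BEGIN")                                  # start a transaction
cur.execute("INSERT INTO accounts VALUES (1, 100)")
cur.execute("SAVEPOINT sp1")                          # mark a savepoint
cur.execute("INSERT INTO accounts VALUES (2, 200)")
cur.execute("ROLLBACK TO SAVEPOINT sp1")              # undo back to sp1 only
cur.execute("RELEASE SAVEPOINT sp1")                  # release the savepoint
cur.execute("COMMIT")                                 # commit what remains

rows = cur.execute("SELECT id, balance FROM accounts").fetchall()
print(rows)  # only the first insert survived: [(1, 100)]
```

Rolling back to the savepoint discards the second insert but leaves the enclosing transaction open, so the first insert still commits.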
      • Oracle compatible mode
        • Plan database objects
          • Create a table group
          • Create a table
          • Create an index
          • Create an external table
        • Write data
          • Insert data
          • Update data
          • Delete data
          • Replace data
          • Generate test data in batches
        • Read data
          • Single-table queries
          • Join tables
            • INNER JOIN queries
            • FULL JOIN queries
            • LEFT JOIN queries
            • RIGHT JOIN queries
            • Subqueries
            • Lateral derived tables
          • Use operators and functions in queries
            • Use arithmetic operators in queries
            • Use numerical functions in queries
            • Use string concatenation operators in queries
            • Use string functions in queries
            • Use datetime functions in queries
            • Use type conversion functions in queries
            • Use aggregate functions in queries
            • Use NULL-related functions in queries
            • Use CASE functions in queries
            • Use the SELECT ... FOR UPDATE statement to lock query results
          • Use a DBLink in queries
          • Set operations
        • Manage transactions
          • Overview
          • Start a transaction
          • Savepoints
            • Mark a savepoint
            • Roll back a transaction to a savepoint
          • Commit a transaction
          • Roll back a transaction
    • Manage instances
      • Manage instances
        • View the instance list
        • Instance overview
        • Stop and restart instances
        • Unit migration
      • Manage tenants
        • Tenant overview
        • Create a tenant
        • Modify tenant specifications
        • Modify tenant names
        • Add an endpoint
        • Resource isolation
          • Overview
          • Manage resource groups
            • Create a resource group
            • View a resource group
            • Edit a resource group
            • Delete a resource group
          • Manage isolation rules
            • Create an isolation rule
            • View isolation rules
            • Edit an isolation rule
            • Delete an isolation rule
        • Modify primary zone
        • Modify the maximum number of connections for a tenant proxy
        • Monitor tenant performance
          • Overview
          • View performance and SQL monitoring details
          • View transaction monitoring details
          • View storage and cache monitoring details
          • View Binlog service monitoring
          • Customize a monitoring dashboard for a tenant
        • Diagnostics
          • Real-time diagnostics
            • SQL diagnostics
              • Top SQL
              • Slow SQL
              • Suspicious SQL
              • High-risk SQL
            • SQL audit
        • Manage tenant parameters
          • Manage tenant parameters
          • Parameters for tenants
          • Parameter template overview
        • Delete a tenant
        • Manage databases and accounts
          • Create accounts
          • Manage accounts
          • Create a database (MySQL compatible mode)
          • Manage databases (MySQL compatible mode)
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor multidimensional metrics of an instance
        • Monitor the performance of hosts in an instance
        • Monitor database proxy
        • Monitor database proxy hosts
        • Monitor cross-cloud network performance
        • Customize a monitoring dashboard for an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Manage instance parameters
        • Manage parameters
        • Parameters for cluster instances
      • Change instance configurations
        • Enable storage auto-scaling
        • View history of configuration changes
        • Change configuration
        • Change configuration temporarily
        • Switch the deployment mode
      • Manage standby instances
        • Overview
        • Create a standby instance
        • Create a cross-cloud standby instance
        • Create a standby instance for an Alibaba Cloud primary instance
        • View details of primary and standby instances
        • Configure global endpoint
        • Enable automatic forwarding for write requests of standby databases
        • Primary-standby instance switchover
        • Initiate failover
        • Detach a standby instance
        • Release a standby instance
      • Release an instance
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Backup and restore
        • Overview
        • Backup strategy
        • Initiate a backup immediately
        • Data backup
        • Initiate a restore
        • Data restore
        • Restore data from the instance recycle bin
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • One-click diagnostics
          • Initiate one-click diagnostics
          • View one-click diagnostic report
            • Exceptions
            • Real-time diagnostics
            • Optimization suggestions
            • Capacity management
            • Security management
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Request analysis
            • Request analysis
        • Root cause diagnostics
          • Exception handling
          • Enable system autonomy
        • SQL audit
        • Materialized view analysis
        • Optimization center
          • Optimization suggestions
          • Manage active outlines
          • SQL review
          • View the optimization history
      • Manage tags
      • Manage read-only replicas
        • Overview
        • Instance read-only replicas
          • Add a read-only replica to an instance
          • View read-only replicas of an instance
          • Manage read-only replicas of an instance
          • Delete a read-only replica of an instance
        • Tenant read-only replicas
          • Add a read-only replica to a tenant
          • View read-only replicas of a tenant
          • Manage read-only replicas of a tenant
          • Delete a read-only replica of a tenant
      • Manage JVM-dependent services
    • Data source management
      • Create a data source
      • Manage data sources
      • User privileges
        • User privileges for compatibility assessment
        • User privileges for data migration
        • User privileges for performance assessment
        • User privileges for data archiving
        • User privileges for data cleanup
      • Connect via private network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
        • Private IP address segments
      • Connect via public network
        • AWS
        • Huawei Cloud
        • Alibaba Cloud
        • Google Cloud
        • Azure
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Manage recycle bin
      • Instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance-level recycle bin
        • Tenant-level recycle bin
  • Work with Analytical Instances
    • Overview
    • Core features
    • Create an instance
    • Connect to an instance
      • Overview
      • Get connection string
        • Overview
        • Connect using AWS PrivateLink
        • Connect using a public IP address
      • Connect with clients
        • Connect to OceanBase Cloud by using Client ODC
        • Connect to OceanBase Cloud by using a MySQL client
        • Connect to OceanBase Cloud by using OBClient
      • Connect with drivers
        • Java
          • Connect to OceanBase Cloud by using Spring Boot
          • Connect to OceanBase Cloud by using Spring Batch
          • Connect to OceanBase Cloud by using Spring Data JDBC
          • Connect to OceanBase Cloud by using Spring Data JPA
          • Connect to OceanBase Cloud by using Hibernate
          • Connect to OceanBase Cloud by using MyBatis
          • Connect to OceanBase Cloud using MySQL Connector/J
        • Python
          • Connect to OceanBase Cloud by using mysqlclient
          • Connect to OceanBase Cloud by using PyMySQL
          • Connect to OceanBase Cloud using MySQL Connector/Python
        • C
          • Connect to OceanBase Cloud using MySQL Connector/C
        • Go
          • Connect to OceanBase Cloud using Go-SQL-Driver/MySQL
        • PHP
          • Connect to OceanBase Cloud using PHP
      • Use database connection pool
        • Database connection pool configuration
        • Connect to OceanBase Cloud by using a Tomcat connection pool
        • Connect to OceanBase Cloud by using a C3P0 connection pool
        • Connect to OceanBase Cloud by using a Proxool connection pool
        • Connect to OceanBase Cloud by using a HikariCP connection pool
        • Connect to OceanBase Cloud by using a DBCP connection pool
        • Connect to OceanBase Cloud by using Commons Pool
        • Connect to OceanBase Cloud by using a Druid connection pool
    • Data table design
      • Table overview
      • Best practices
        • Unit 1: Best practices for optimizing storage structures and query performance
        • Unit 2: Best practices for creating special indexes
    • Export data
    • OceanBase data processing
    • Query acceleration
      • Statistics
      • Materialized views for query acceleration
      • Select a query parallelism level
    • Manage instances
      • Instance overview
      • Change configuration
      • Modify primary zone
      • Manage parameters
      • Backup and restore
        • Backup overview
        • Backup strategies
        • Immediate backup
        • Data backup
        • Initiate restore
        • Data restore
      • Monitor instance performance
        • Overview
        • Monitor the performance of databases in an instance
        • Monitor the performance of hosts in an instance
      • Manage major compactions
        • Initiate a major compaction
        • View compaction records
        • Update time for compactions
      • Database proxy
        • Overview
        • Manage database proxy
        • Direct load
      • Manage alerts
        • Overview
        • Manage alert rules
          • Create an alert rule
          • View an alert rule
          • Edit an alert rule
          • Delete an alert rule
        • View alert history
        • Manage alert templates
          • Create an alert template
          • View an alert template
          • Edit an alert template
          • Copy an alert template
          • Delete an alert template
        • Manage muting rules
          • Create an alert muting rule
          • View an alert muting rule
          • Edit an alert muting rule
          • Delete an alert muting rule
        • Manage alert notification templates
          • Create an alert notification template
          • View an alert notification template
          • Edit an alert notification template
          • Copy an alert notification template
          • Delete an alert notification template
        • Manage alert contacts
          • Add an alert contact
          • Add an alert contact group
          • View an alert contact
          • Edit an alert contact
          • Delete an alert contact
          • Obtain a webhook URL
        • Monitoring metrics for alerts
      • Diagnostics
        • View performance monitoring data
        • Capacity diagnostics
        • Real-time diagnostics
          • SQL diagnostics
            • Top SQL
            • Slow SQL
            • Suspicious SQL
            • High-risk SQL
            • SQL details
            • SQL monitoring metrics list
          • Session management
            • Session management
          • Optimization management
            • Manage active outlines
            • View the optimization history
          • Request analysis
            • Request analysis
      • Stop and restart instances
      • Release instances
      • Manage databases and accounts
        • Create and manage accounts
        • Create a database
        • Manage databases
      • Manage tags
    • Data lifecycle management
      • Archive data
      • Clean up data
    • Performance diagnosis and tuning
      • Use the DBMS_XPLAN package for performance diagnostics
      • Use the GV$SQL_PLAN_MONITOR view for performance analysis
      • Views related to AP performance analysis
    • Performance testing
    • Product integration
    • Manage recycle bin
      • View instance recycle bin
      • Manage databases and tables in recycle bin
        • Overview
        • Instance recycle bin
  • Work with Key-Value Instances
    • Try out Key-Value instances
      • Create an instance
      • Create a tenant
      • Create an account for a database user
      • OBKV HBase data operation examples
    • Use Table model
      • Create an instance
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-Table
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
    • Use HBase model
      • OBKV-HBase Overview
      • Create an instance
      • Develop in HBase model
        • Connect to an instance by using the OBKV-HBase client
      • Manage instances
        • Manage instances
          • View the instance list
          • Instance overview
          • Stop and restart instances
          • Release an instance
        • Manage tenants
          • Create a tenant
          • Modify tenant specifications
          • Modify tenant names
          • Delete a tenant
          • Tenant overview
          • Resource isolation
            • Overview
            • Manage resource groups
              • Create a resource group
              • View a resource group
              • Edit a resource group
              • Delete a resource group
            • Manage isolation rules
              • Create an isolation rule
              • View isolation rules
              • Edit an isolation rule
              • Delete an isolation rule
          • Monitor tenant performance
            • Overview
            • View performance and SQL monitoring details
            • View transaction monitoring details
            • View storage and cache monitoring details
            • OBKV-HBase
            • Customize a monitoring dashboard for a tenant
          • Diagnostics
            • Top SQL
          • Manage tenant parameters
            • Manage tenant parameters
            • Parameters for tenants
          • Manage databases and accounts
            • Create and manage accounts
            • Create a database
            • Manage databases
          • Switch primary zone
        • Monitor instance performance
          • Overview
          • Monitor the performance of databases in an instance
          • Monitor multi-dimensional metrics of an instance
          • Monitor the performance of hosts in a cluster
          • Customize monitoring dashboards for an instance
        • Manage major compactions
          • Initiate major compactions
          • View compaction records
          • Update time for compactions
        • Manage instance parameters
          • Parameter management overview
          • Parameters for cluster instances
        • Change instance configurations
          • View history of configuration changes
          • Change configuration
          • Switch the deployment mode
        • Database proxy
          • Overview
          • Manage database proxy
        • Manage alerts
          • Overview
          • Manage alert rules
            • Create an alert rule
            • View an alert rule
            • Edit an alert rule
            • Delete an alert rule
          • View alert history
          • Manage alert templates
            • Create an alert template
            • View an alert template
            • Edit an alert template
            • Copy an alert template
            • Delete an alert template
          • Manage muting rules
            • Create an alert muting rule
            • View an alert muting rule
            • Edit an alert muting rule
            • Delete an alert muting rule
          • Manage alert contacts
            • Add an alert contact
            • Add an alert contact group
            • View an alert contact
            • Edit an alert contact
            • Delete an alert contact
            • Obtain a webhook URL
          • Monitoring metrics for alerts
        • Backup and restore
          • Backup overview
          • Backup strategies
          • Immediate backup
          • Data backup
          • Initiate restore
          • Data restore
        • Diagnostics
          • View performance monitoring data
          • Top SQL
          • Capacity diagnostics
          • Request analysis
        • Manage tags
        • Manage recycle Bin
          • View instance recycle bin
          • Manage databases and tables in recycle bin
            • Overview
            • Instance-level recycle bin
            • Tenant-level recycle bin
      • Performance test
    • Connect Key-Value instances
      • Overview
      • Connect using a public IP address
  • Migrations
    • Data migration and import solutions
    • Data assessment and migration quick start
    • Assess compatibility
      • Overview
      • Perform online assessment
      • Perform offline assessment
      • Manage compatibility assessment tasks
        • View a compatibility assessment task
        • View and download a compatibility assessment report
        • Stop a compatibility assessment task
        • Delete a compatibility assessment task
      • Obtain files for upload
      • Configure PrivateLink
      • Add an IP address to an allowlist
    • Migrate data
      • Overview
      • Migrations specification
      • Purchase a data migration instance
      • Migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database
      • Migrate data from a MySQL-compatible tenant of OceanBase Database to a MySQL database
      • Migrate data between OceanBase database tenants of the same compatibility mode
      • Migrate data between OceanBase database tenants of different compatibility modes
      • Migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database
      • Migrate data from an Oracle-compatible tenant of OceanBase Database to an Oracle database
      • Configure a two-way synchronization task
      • Migrate data from an OceanBase database to a Kafka instance
      • Migrate data from a TiDB database to a MySQL-compatible tenant of OceanBase Database
      • Migrate incremental data from a MySQL-compatible tenant of OceanBase Database to a TiDB Database
      • Migrate data from a PostgreSQL database to an OceanBase database
      • Migrate incremental data from an OceanBase Database to a PostgreSQL database
      • Manage data migration tasks
        • View details of a data migration task
        • Rename a data migration task
        • View and modify migration objects
        • View and modify migration parameters
        • Configure alert monitoring
        • Manage data migration tasks by using tags
        • Start, stop, and resume a data migration task
        • Clone a data migration task
        • Terminate and release a data migration task
      • Features
        • Custom DML/DDL configurations
        • DDL synchronization scope
        • Use SQL conditions to filter data
        • Rename a migration object
        • Set an incremental synchronization timestamp
        • Instructions on schema migration
        • Configure and modify matching rules
        • Wildcard rules
        • Import migration objects
        • Download conflict data
        • Change a topic
        • Column filtering
        • Data formats
      • Authorize an Alibaba Cloud account
      • SQL statements for querying table objects
      • Online DDL tools
      • Create a trigger
      • Modify the log level of a self-managed PostgreSQL instance
      • Supported DDL statements for synchronization and their limitations
        • DDL synchronization from Aurora MySQL DB clusters to MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from MySQL-compatible tenants of OceanBase Database to Aurora MySQL DB clusters
        • DDL synchronization between MySQL-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle databases to Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from Oracle-compatible tenants of OceanBase Database to Oracle databases
        • DDL synchronization between Oracle-compatible tenants of OceanBase Database
        • DDL synchronization from OceanBase databases to Kafka instances
    • Data subscription
      • Create a data subscription task
      • Manage data subscription tasks
        • View details of a data subscription task
        • Configure subscription information
        • Modify the name of a data subscription task
        • View and modify subscription objects
        • View data subscription parameters
        • Set up data subscription alerts
        • Start, stop, and resume data subscription tasks
        • Clone a data subscription task
        • Release a data subscription task
      • Manage private connections for data subscriptions
      • Configure consumer subscription
      • Message formats
    • Data validation
      • Overview
      • Create a data validation task
      • Manage data validation tasks
        • View details of a data validation task
        • View and modify validation objects
        • View and modify validation parameters
        • Manage data validation tasks with tags
        • Start, pause, and resume data validation tasks
        • Clone a data validation task
        • Release a data validation task
      • Features
        • Import validation objects
        • Rename the validation object
        • Filter objects by using SQL conditions
        • Configure the matching rules for the validation object
    • Assess performance
      • Overview
      • Obtain traffic files from a database instance
      • Create a full performance assessment task
      • Create an SQL file parsing task
      • Create an SQL file replay task
      • Manage performance assessment tasks
        • View the details of a performance assessment task
        • View a performance assessment report
        • Retry and stop a performance assessment task
        • Delete a performance assessment task
      • Obtain a database instance
      • Create an access key
    • Import data
      • Import data
      • Direct load
      • Supported file formats and encoding formats for Data Import
      • Sample data introduction
    • Binlog service
      • Overview
      • Purchase the Binlog service
      • Manage Binlog Service
        • View details of the Binlog service
        • Change configuration
        • Modify the auto-scaling strategy for storage space
        • Modify the elasticity strategy for compute units
        • Disable the Binlog service
  • Security
    • OceanBase Cloud account settings
      • Modify login password
      • Multi-factor authentication
      • Manage AccessKeys
      • Time zone settings
      • Manage cloud marketplace accounts
      • Account audit
    • Organizations and projects
      • Overview
      • Manage organization information
      • Project management
        • Manage projects
        • Cross-project bidirectional authorization
        • Subscribe to project messages
      • Manage members
      • Permissions for roles
      • Cost management
        • Overview
        • Cost details
        • Manage cost units
      • Operation audit
    • Database accounts and privileges
      • Account privileges
      • Authorize cloud vendor accounts
      • AWS KMS key management
      • Support access control
    • Security and encryption
      • Set allowlist groups
      • SSL encryption
      • Transparent Data Encryption (TDE)
    • Monitoring dashboard
    • Events
  • SQL Console
    • Overview
    • Access SQL Console
    • SQL editing and execution
    • PL compilation
    • Result set editing
    • Execution analysis
    • Database object management
      • Create a table
      • Create a view
      • Create a function
      • Create a stored procedure
      • Create a program package
      • Create a trigger
      • Create a type
      • Create a sequence
      • Create a synonym
    • Session variable management
    • Functional keys in SQL Console
  • Integrations
    • Overview
    • Schema evolution
      • Liquibase
      • Flyway
    • Data ingestion
      • Canal
      • dbt
      • Debezium
      • Flink
      • Glue
      • Informatica Cloud
      • Kafka
      • Maxwell
      • SeaTunnel
      • DataWorks
      • NiFi
    • SQL development
      • DataGrip
      • DBeaver
      • Navicat
      • TablePlus
    • Orchestration
      • DolphinScheduler
      • Linkis
      • Airflow
    • Visualization
      • Grafana
      • Power BI
      • Quick BI
      • Superset
      • Tableau
    • Observability
      • Datadog
      • Prometheus
    • Database management
      • Bytebase
    • AI
      • LlamaIndex
      • Dify
      • LangChain
      • Tongyi Qianwen
      • OpenAI
      • n8n
      • Trae
      • SpringAI
      • Cline
      • Cursor
      • Continue
      • Toolbox
      • CamelAI
      • Firecrawl
      • Hugging Face
      • Ollama
      • Google Gemini
      • Cloudflare Workers AI
      • Jina AI
      • Augment Code
      • Claude Code
      • Kiro
    • Development tools
      • Cloudflare Workers
      • Vercel
  • Best practices
    • Best practices for achieving high availability through cross-cloud active-active deployment
    • High availability through cross-cloud primary-standby databases (1:1)
    • High availability through cross-cloud primary-standby databases (1:n)
    • High host CPU usage
    • Best practices for read/write splitting in OceanBase Cloud
  • References
    • System architecture
    • System management
    • Database object management
    • Database design and specification constraints
    • SQL reference
    • System views
    • Parameters and system variables
    • Error codes
    • Performance tuning
    • Open API References
      • Overview
      • Service endpoints
      • Using API
      • Open APIs
        • Cluster management
          • DescribeInstances
          • DescribeInstance
          • CreateInstance
          • DeleteInstance
          • ModifyInstanceName
          • describe-node-options
          • StopCluster
          • StartCluster
          • ModifyInstanceSpec
          • DescribeInstanceTopology
          • DescribeReadonlyInstances
          • CreateReadonlyInstance
          • ModifyReadonlyInstanceSpec
          • ModifyReadonlyInstanceDiskSize
          • ModifyReadonlyInstanceNodeNum
          • DeleteReadonlyInstance
          • DescribeInstanceAvailableRoZones
          • DescribeInstanceParameters
          • UpdateInstanceParameters
          • DescribeInstanceParametersHistory
          • ModifyInstanceTagList
          • ModifyInstanceNodeNum
        • Tenant management
          • DescribeTenants
          • DescribeTenant
          • CreateTenants
          • DeleteTenants
          • ModifyTenantName
          • ModifyTenant
          • ModifyTenantUserDescription
          • ModifyTenantUserStatus
          • GetTenantCreateConstraints
          • ModifyTenantPrimaryZone
          • GetTenantCreateCpuConstraints
          • GetTenantCreateMemConstraints
          • GetTenantModifyCpuConstraints
          • GetTenantModifyMemConstraints
          • CreateTenantSecurityIpGroup
          • DescribeTenantSecurityIpGroups
          • ModifyTenantSecurityIpGroup
          • DeleteTenantSecurityIpGroup
          • DescribeTenantPrivateLink
          • DeletePrivatelinkConnection
          • CreatePrivatelinkService
          • ConnectPrivatelinkService
          • AddPrivatelinkServiceUser
          • BatchKillProcessList
          • DescribeProcessStatsComposition
          • DescribeTenantAvailableRoZones
          • DescribeTenantAddressInfo
          • ModifyTenantReadonlyReplica
          • DescribeTenantParameters
          • UpdateTenantParameters
          • DescribeTenantParametersHistory
          • ModifyTenantTagList
        • Tenant user management
          • CreateTenantUser
          • DescribeTenantUsers
          • DeleteTenantUsers
          • ModifyTenantUserPassword
          • ModifyTenantUserRoles
        • Database management
          • CreateDatabase
          • DescribeDatabases
          • DeleteDatabases
          • ModifyDatabaseUserRoles
        • Backup and restore
          • DescribeDataBackupSet
          • DescribeRestorableTenants
          • ModifyBackupStrategy
          • CreateTenantRestoreTask
          • CreateDataBackupTask
          • DescribeOneDataBackupSet
        • Database proxy management
          • CreateTenantAddress
          • CreateTenantSingleTunnelSLBAddress
          • DeleteTenantAddress
          • DescribeTenantAddress
          • ModifyOdpClusterSpec
          • ModifyTenantAddressPort
          • ModifyTenantAddressDomainPrefix
          • ConfirmPrivatelinkConnection
          • DescribeTenantAddressInfo
        • Monitoring management
          • DescribeTenantMetrics
          • DescribeMetricsData
          • DescribeNodeMetrics
        • Diagnostic management
          • DescribeOasTopSQLList
          • DescribeOasAnomalySQLList
          • DescribeOasSlowSQLList
          • DescribeOasSQLText
          • DescribeSqlAudits
          • DescribeOutlineBinding
          • DescribeSampleSqlRawTexts
          • DescribeSQLTuningAdvices
          • DescribeOasSlowSQLSamples
          • DescribeOasSQLTrends
          • DescribeOasSQLPlanGroup
        • Security management
          • CreateSecurityIpGroup
          • DescribeInstanceSSL
          • ModifyInstanceSSL
          • DescribeTenantEncryption
          • ModifyTenantEncryption
          • ModifySecurityIps
          • DeleteSecurityIpGroup
          • DescribeTenantSecurityConfigs
          • DescribeInstanceSecurityConfigs
        • Tag management
          • DescribeTags
          • CreateTags
          • UpdateTag
          • DeleteTag
        • Historical event management
          • DescribeOperationEvents
      • Differences between ApsaraDB for OceanBase APIs and OceanBase Cloud APIs
    • Download OBClient
      • Download OBClient
      • Download OceanBase Connector/J
      • Download client ODC
      • Download OceanBase Connector/ODBC
      • Download OBClient Libs
    • Metrics References
      • Cluster database
      • Cluster hosts
      • Binlog service
      • Cross-cloud network channel connection
      • Performance and SQL
      • Transactions
      • Storage and caching
      • Proxy database
      • Proxy host
    • ODC User Guide
      • What is ODC?
        • What is ODC?
        • Limitations
      • Quick Start
        • Client ODC
          • Overview
          • Install Client ODC
          • Use Client ODC
        • Web ODC
          • Overview
          • Use Web ODC
      • Data Source Management
        • Create a data source
        • Data sources and project collaboration
        • Database O&M
          • Session management
          • Global variable management
          • Recycle bin management
      • SQL Development
        • Edit and execute SQL statements
        • Perform PL compilation and debugging
        • Edit and export the result set of an SQL statement
        • Execution analysis
        • Generate test data
        • System settings
        • Database objects
          • Table objects
            • Overview
            • Create a table
          • View objects
            • Overview
            • Create a view
            • Manage views
          • Materialized view objects
            • Overview
            • Create a materialized view
            • Manage materialized views
          • Function objects
            • Overview
            • Create a function
            • Manage functions
          • Stored procedure objects
            • Overview
            • Create a stored procedure
            • Manage stored procedures
          • Sequence objects
            • Overview
            • Create a sequence
            • Manage sequences
          • Package objects
            • Overview
            • Create a program package
            • Manage program packages
          • Trigger objects
            • Overview
            • Create a trigger
            • Manage triggers
          • Type objects
            • Overview
            • Create a type
            • Manage types
          • Synonym objects
            • Overview
            • Create a synonym
            • Manage synonyms
      • Import and Export
        • Import schemas and data
        • Export schemas and data
      • Database Change Management
        • User Permission Management
          • Users and roles
          • Automatic authorization
          • User permission management
        • Project collaboration management
        • Risk levels, risk identification rules, and approval processes
        • SQL check specifications
        • SQL window specification
        • Database change management
        • Batch database change management
        • Online schema changes
        • Synchronize shadow tables
        • Schema comparison
      • Data Lifecycle Management
        • Partitioning Plan Management
          • Manage partitioning plans
          • Set partitioning strategies
          • Examples
        • SQL plan task
      • Data Desensitization and Auditing
        • Desensitize data
        • Operation records
      • Notification Management
        • Overview
        • View notification records
        • Manage Notification Channel
          • Create a notification channel
          • View, edit, and delete a notification channel
          • Configure a custom channel
        • Manage notification rules
      • Best Practices
        • Tips for SQL development
        • Explore ODC team workspaces
        • Understanding real-time SQL diagnostics for OceanBase AP
        • OceanBase historical database solutions
        • ODC SQL check for automatic identification of high-risk operations
        • Manage and modify sharded databases and tables via ODC
        • Data masking and control practices
        • Enterprise-level control and collaboration: Safeguard every database change
    • Data Development
      • Overview
      • Workspace management
      • Worksheet management
      • Compute node pool management
      • Workflow management
      • Dashboard management
      • Manage Git repositories
      • SQL development
        • SQL editing and execution
        • Result set editing
        • Execution analysis
        • Database object management
          • Create a table
          • Create a view
          • Create a function
          • Create a stored procedure
        • Session variable management
        • Git integration
      • Sample datasets
      • Data development terms
  • Manage Billing
    • Access billing
    • View monthly bills
    • View payment details
    • View orders
    • Use vouchers for payment
    • View invoices
  • Legal Agreements
    • OceanBase Cloud Services Agreement
    • Service Level Agreement
    • OceanBase Data Processing Addendum
    • Service Level Agreement for OceanBase Cloud Migration Service


    Migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database

    Last Updated: 2026-04-07 08:08:33

    You can create a data migration task to migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database. By performing schema migration, full migration, and incremental synchronization, you can seamlessly migrate both existing business data and incremental data from the source Oracle database to the target tenant.

    Notice

    A data migration task that remains in an inactive state (Failed, Stopped, or Completed) for a long time may fail to resume, depending on the retention period of incremental logs. To recycle resources, the data migration service automatically releases data migration tasks that have remained in an inactive state for more than 7 days. We recommend that you configure alerting for tasks and handle task exceptions in a timely manner.

    Prerequisites

    • You have created the corresponding schema in the target Oracle-compatible tenant of OceanBase Database.

    • The source Oracle instance must run in ARCHIVELOG mode, and at least one log file switch must have occurred before the data migration service starts incremental synchronization.
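
      These checks can be sketched with standard Oracle commands (run as a privileged user; these are generic Oracle statements, not specific to the data migration service):

        SELECT LOG_MODE FROM V$DATABASE;
        -- Must return ARCHIVELOG.

        ALTER SYSTEM ARCHIVE LOG CURRENT;
        -- Forces a log switch and archives the current redo log.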

    • The LogMiner tool must be installed and usable on the source Oracle instance.

    • You have created the target OceanBase instance and tenant. For more information, see Create an instance and Create a tenant.

    • You have created dedicated database users for data migration in the source and the target, and granted required privileges to the users. For more information, see User privileges.

    • The Oracle instance must have enabled supplemental logging at the database or table level.

    • If you enable supplemental logging for primary keys (PKs) and unique keys (UKs) at the database level, tables that do not need to be synchronized also generate a large number of unnecessary logs. The LogMiner Reader then has to pull and parse more logs, which increases the pressure on both the LogMiner Reader and the Oracle instance. Therefore, the data migration service supports enabling supplemental logging for only PKs and UKs at the table level. However, if you set ETL filters on non-PK and non-UK columns when you create a migration task, you must enable supplemental logging for those columns, or enable supplemental logging for all columns.
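
      For example, the following generic Oracle statements enable PK/UK supplemental logging for a single table and verify the database-level settings (SCHEMA_NAME.TABLE_NAME is a placeholder):

        -- Enable supplemental logging for PK and UK columns of one table.
        ALTER TABLE SCHEMA_NAME.TABLE_NAME ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

        -- Check the database-level supplemental logging settings.
        SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI
          FROM V$DATABASE;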

    • Clock synchronization must be configured between the Oracle server and the data migration service server (for example, by using the Network Time Protocol (NTP) service). Otherwise, data risks exist. If the Oracle instance is an Oracle Real Application Clusters (Oracle RAC) instance, the clocks of the RAC nodes must also be synchronized with each other.

    Limitations

    • Only users with the Project Owner, Project Admin, or Data Services Admin role can create data migration tasks.

    • Limitations on the source database

      Do not perform DDL operations that modify database or table schemas during schema migration or full data migration. Otherwise, the data migration task may be interrupted.

    • At present, the data migration service supports Oracle Database 10g, 11g, 12c, 18c, and 19c as the source. For Oracle Database 12c and later versions, both container databases (CDBs) and pluggable databases (PDBs) are supported. On the target side, OceanBase Database V2.x, V3.x, and V4.x in Oracle-compatible mode are supported.

    • The data migration service supports the migration of only ordinary tables and views.

    • The data migration service supports the migration of only objects whose database names, table names, and column names are ASCII-encoded and do not contain special characters, namely line breaks, spaces, and . | " ' ` ( ) = ; / &

    • The data migration service does not support triggers in the target database. If triggers exist in the target database, the data migration may fail.

    • The data migration service does not support incremental data migration for a table in which every column is of a large object (LOB) type, namely BLOB, CLOB, or NCLOB.
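
      To locate such tables before you configure the task, a query along these lines against the standard Oracle data dictionary can help:

        -- List tables in which every column is of a LOB type.
        SELECT OWNER, TABLE_NAME
          FROM DBA_TAB_COLUMNS
         GROUP BY OWNER, TABLE_NAME
        HAVING COUNT(*) = SUM(CASE WHEN DATA_TYPE IN ('BLOB', 'CLOB', 'NCLOB') THEN 1 ELSE 0 END);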

    • Data source identifiers and user accounts must be globally unique in the data migration service system.

    • The data migration service can parse at most 5 TB of incremental logs per day for an Oracle database.

    • You cannot create a database object with a name exceeding 30 bytes in an Oracle database version 11g or earlier.

    • The data migration service does not support the execution of certain UPDATE commands on an Oracle database. The following example shows an unsupported UPDATE command.

      UPDATE TABLE_NAME SET KEY=KEY+1;
      

      In the preceding example, TABLE_NAME is the name of the table, and KEY is a NUMERIC column defined as the primary key.

    Considerations

    • If you need to perform incremental synchronization for an Oracle database, we recommend that you keep the size of a single archive file under 2 GB.

    • We recommend that you retain the archive files of the Oracle database for more than 2 days. Otherwise, if the number of archive files increases sharply within a short period, the archive files required for restoring data may no longer be available, which prevents you from restoring data.
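
      To estimate how quickly archive files are generated, you can query the standard V$ARCHIVED_LOG view (a hedged sketch; adjust the time window as needed):

        -- Daily count and total size (GB) of archived logs over the past 7 days.
        SELECT TRUNC(COMPLETION_TIME) AS DAY,
               COUNT(*) AS FILE_COUNT,
               ROUND(SUM(BLOCKS * BLOCK_SIZE) / 1024 / 1024 / 1024, 2) AS SIZE_GB
          FROM V$ARCHIVED_LOG
         WHERE COMPLETION_TIME > SYSDATE - 7
         GROUP BY TRUNC(COMPLETION_TIME)
         ORDER BY DAY;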

    • If the source Oracle database contains DML statements that exchange primary keys, the data migration service may fail to parse the logs, resulting in data loss during migration to the target. The following is an example of a DML statement that exchanges primary keys:

      UPDATE test SET c1=(CASE WHEN c1=1 THEN 2 WHEN c1=2 THEN 1 END) WHERE c1 IN (1,2);
      
    • The character set of an Oracle instance can be AL32UTF8, AL16UTF16, ZHS16GBK, or GB18030. If the UTF-8 character set is used in the source, we recommend that you use a compatible character set, such as UTF-8 or UTF-16, in the target to avoid garbled characters.
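
      You can check the character sets of the source Oracle instance as follows (standard Oracle dictionary views):

        SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER = 'NLS_CHARACTERSET';
        SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER = 'NLS_NCHAR_CHARACTERSET';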

    • When you migrate a table without a primary key from an Oracle database to an Oracle-compatible tenant of OceanBase Database, do not perform operations that change the ROWID of the table, such as import, export, ALTER TABLE, FLASHBACK TABLE, partition splitting, or partition merging.

    • If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization.

      For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.

    • Due to the historical practice of daylight saving time in China, incremental synchronization from an Oracle database to an Oracle-compatible tenant of OceanBase Database may produce a 1-hour difference between the source and target for the TIMESTAMP(6) WITH TIME ZONE data type, around the start and end dates of daylight saving time from 1986 to 1991 and during the period from April 10 to April 17, 1988.

    • If you modify a unique index at the target when DDL synchronization is disabled, you must restart the data migration task to avoid data inconsistency.

    • If the character encoding configurations of the source and target differ, schema migration provides a strategy for expanding field length definitions. For example, the field length may be expanded to 1.5 times the original length, and the length unit may be converted from BYTE to CHAR.
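
      As an illustration (the table and column names are hypothetical, and the actual factor depends on your configuration), a 1.5x expansion with a BYTE-to-CHAR conversion might map a source column as follows:

      ```sql
      -- Source (GBK):  name VARCHAR2(100 BYTE)
      -- Target (UTF-8), after a 1.5x length expansion and BYTE-to-CHAR conversion:
      CREATE TABLE t_example (
        name VARCHAR2(150 CHAR)
      );
      ```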

    • If the source contains a data type that contains time zone information (such as TIMESTAMP WITH TIME ZONE), make sure that the target database supports and contains the corresponding time zone. Otherwise, data inconsistency may occur during data migration.
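
      Before migration, you can check whether the target recognizes the relevant time zone; a sketch assuming the target exposes the standard V$TIMEZONE_NAMES view ('Asia/Shanghai' is only an example):

      ```sql
      -- Returns a count greater than 0 if the time zone region is known.
      SELECT COUNT(*) AS known
      FROM v$timezone_names
      WHERE tzname = 'Asia/Shanghai';
      ```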

    • In the scenario of table aggregation:

      • We recommend that you map the source and target relationships by using matching rules.

      • We recommend that you create the table schema in the target. If you use the data migration service to create the table schema, skip failed objects in the schema migration step.

    • Check the objects in the recycle bin of the Oracle database. If the recycle bin contains more than 100 objects, queries against internal tables may time out. In this case, clear the recycle bin.

      • Query whether the recycle bin is enabled.

        SELECT value FROM v$parameter WHERE name = 'recyclebin';
        
      • Query the number of objects in the recycle bin.

        SELECT COUNT(*) FROM RECYCLEBIN;
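
      To clear the recycle bin, purge it. For example:

      ```sql
      -- Purge the current user's recycle bin.
      PURGE RECYCLEBIN;

      -- Purge the recycle bins of all users (requires the SYSDBA privilege).
      PURGE DBA_RECYCLEBIN;
      ```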
        
    • If you select only Incremental Synchronization when you create the data migration task, the data migration service requires that the local incremental logs in the source database be retained for more than 48 hours.

      If you select Full Migration and Incremental Synchronization when you create the data migration task, the data migration service requires that the local incremental logs in the source database be retained for at least 7 days. Otherwise, the data migration service may be unable to obtain incremental logs, which can cause the task to fail or the data in the source and target databases to become inconsistent.

    • For incremental synchronization tasks of source Oracle databases (excluding those that obtain incremental data from Kafka), if a single transaction spans multiple archive files, LogMiner may fail to return complete data information in the transaction. In this case, data loss may occur. We recommend that you enable full data validation and data correction to ensure data consistency.

    Data type mappings

    Notice

    • CLOB and BLOB data must be less than 48 MB in size.

    • Migration of ROWID, BFILE, XMLType, UROWID, UNDEFINED, and UDT data is not supported.

    • Incremental synchronization of tables of the LONG or LONG RAW type is not supported.
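
    To spot oversized LOB values before migration, you can run a check like the following (the schema, table, and column names are placeholders, and the full scan can be slow on large tables):

    ```sql
    -- Count LOB values larger than 48 MB.
    SELECT COUNT(*)
    FROM <schema_name>.<table_name>
    WHERE DBMS_LOB.GETLENGTH(<lob_column>) > 48 * 1024 * 1024;
    ```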

    Oracle data type OceanBase Database Oracle-compatible mode data type
    CHAR(n CHAR) CHAR(n CHAR)
    CHAR(n BYTE) CHAR(n BYTE)
    NCHAR(n) NCHAR(n)
    VARCHAR2(n) VARCHAR2(n)
    NVARCHAR2(n) NVARCHAR2(n)
    NUMBER(n) NUMBER(n)
    NUMBER (p, s) NUMBER(p,s)
    RAW RAW
    CLOB CLOB
    NCLOB NVARCHAR2
    Note
    In OceanBase Database Oracle compatible mode, fields of the NVARCHAR2 type do not support null values. If the source data contains null values, they are represented as the string "NULL".
    BLOB BLOB
    REAL FLOAT
    FLOAT(n) FLOAT
    BINARY_FLOAT BINARY_FLOAT
    BINARY_DOUBLE BINARY_DOUBLE
    DATE DATE
    TIMESTAMP TIMESTAMP
    TIMESTAMP WITH TIME ZONE TIMESTAMP WITH TIME ZONE
    TIMESTAMP WITH LOCAL TIME ZONE TIMESTAMP WITH LOCAL TIME ZONE
    INTERVAL YEAR(p) TO MONTH INTERVAL YEAR(p) TO MONTH
    INTERVAL DAY(p) TO SECOND INTERVAL DAY(p) TO SECOND
    LONG CLOB
    Note: This type is not supported for incremental synchronization.
    LONG RAW BLOB
    Note: This type is not supported for incremental synchronization.

    Convert Oracle table partitions

    During data migration, the data migration service converts business SQL statements for Oracle databases.

    Note

    The partition conversion rules in this topic apply to all partition types.

    Original table definition Converted output
    CREATE TABLE T_RANGE_0 (
      A INT,
      B INT,
      PRIMARY KEY (B)
    )PARTITION BY RANGE(A)(
    ....
    );
    CREATE TABLE "T_RANGE_0" (
      "A" NUMBER,
      "B" NUMBER NOT NULL,
      CONSTRAINT "T_RANGE_0_UK" UNIQUE ("B")
    )PARTITION BY RANGE ("A")(
    ....
    );
    CREATE TABLE T_RANGE_10 (
      "A" INT,
       "B" INT,
       "C" DATE,
       "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL,
      CONSTRAINT "T_RANGE_10_PK" PRIMARY KEY (A)
    )PARTITION BY RANGE(D)(
    ....
    );
    CREATE TABLE T_RANGE_10 (
      "A" INT NOT NULL,
      "B" INT,
      "C" DATE,
      "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL,
      CONSTRAINT "T_RANGE_10_PK" UNIQUE (A)
    )PARTITION BY RANGE(D)(
    ....
    );
    CREATE TABLE T_RANGE_1 (
      A INT,
      B INT,
      UNIQUE (B)
    )PARTITION BY RANGE(A)(
    partition P_MAX values less than (10)
    );
    The original table definition is supported.
    CREATE TABLE T_RANGE_2 (
      A INT,
      B INT NOT NULL,
      UNIQUE (B)
    )PARTITION BY RANGE(A)(
    partition P_MAX values less than (10)
    );
    The original table definition is supported.
    CREATE TABLE T_RANGE_3 (
      A INT,
      B INT,
      UNIQUE (A)
    )PARTITION BY RANGE(A)(
    ....
    );
    The original table definition is supported.
    CREATE TABLE T_RANGE_4 (
      A INT NOT NULL,
      B INT,
      UNIQUE (A)
    )PARTITION BY RANGE(A)(
    ....
    );
    CREATE TABLE "T_RANGE_4" (
      "A" NUMBER NOT NULL,
      "B" NUMBER,
      PRIMARY KEY ("A")
    )PARTITION BY RANGE ("A")(
    ....
    );
    CREATE TABLE T_RANGE_5 (
      A INT,
      B INT,
      UNIQUE (A, B)
    )PARTITION BY RANGE(A)(
    partition P_MAX values less than (10)
    );
    The original table definition is supported.
    CREATE TABLE T_RANGE_6 (
      A INT NOT NULL,
      B INT,
      UNIQUE (A, B)
    )PARTITION BY RANGE(A)(
    partition P_MAX values less than (10)
    );
    The original table definition is supported.
    CREATE TABLE T_RANGE_7 (
      A INT NOT NULL,
      B INT NOT NULL,
      UNIQUE (A, B)
    )PARTITION BY RANGE(A)(
    partition P_MAX values less than (10)
    );
    CREATE TABLE "T_RANGE_7" (
      "A" NUMBER NOT NULL,
      "B" NUMBER NOT NULL,
      PRIMARY KEY ("A", "B")
    )PARTITION BY RANGE ("A")(
    ....
    );
    CREATE TABLE T_RANGE_8 (
      "A" INT,
      "B" INT,
      "C" INT NOT NULL,
      UNIQUE (A),
      UNIQUE (B),
      UNIQUE (C)
    )PARTITION BY RANGE(B)(
    partition P_MAX values less than (10)
    );
    The original table definition is supported.
    CREATE TABLE T_RANGE_9 (
      "A" INT,
      "B" INT,
      "C" INT NOT NULL,
      UNIQUE(A),
      UNIQUE(B),
      UNIQUE (C)
    )PARTITION BY RANGE(C)(
    partition P_MAX values less than (10)
    );
    CREATE TABLE "T_RANGE_9" (
      "A" NUMBER,
      "B" NUMBER,
      "C" NUMBER NOT NULL,
      PRIMARY KEY ("C"),
      UNIQUE ("A"),
      UNIQUE ("B")
    )PARTITION BY RANGE ("C")(
    ....
    );

    Check and modify the system configuration of the Oracle instance

    To do this, perform the following steps:

    Notice

    When you perform the following operations on an AWS RDS Oracle instance, there are some permission limitations. For more information, see Users and privileges for RDS for Oracle.

    1. Enable the archive mode in the source Oracle database.

    2. Enable the supplemental logging in the source Oracle database.

    3. (Optional) Set the system parameters of the Oracle database.

      Note

      If the instance type is Self-managed Oracle, you can set the system parameter _log_parallelism_max.

    Enable archive mode on the source Oracle database

    Enable archive mode on an AWS RDS Oracle instance

    When you enable archive mode on an AWS RDS Oracle instance, you can configure archive log retention only by time, not by size. For example, to retain archive logs for 24 hours:

    EXEC rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24);
    

    Enable archive mode on a self-managed Oracle database

    SELECT log_mode FROM v$database;
    

    The value of log_mode must be ARCHIVELOG. If it is not, perform the following steps to enable archive mode.

    1. Run the following statements to enable archive mode.

      SHUTDOWN IMMEDIATE;
      STARTUP MOUNT;
      ALTER DATABASE ARCHIVELOG;
      ALTER DATABASE OPEN;
      
    2. Run the following command to view the path and quota of the archive logs.

      We recommend that you set the db_recovery_file_dest_size parameter to a sufficiently large value. After you enable archive mode, clean up the archive logs regularly by using tools such as RMAN.

      SHOW PARAMETER db_recovery_file_dest;
      
    3. Change the quota of the archive logs based on your business needs.

      ALTER SYSTEM SET db_recovery_file_dest_size =50G SCOPE = BOTH;
      

    Enable supplemental logging in the source Oracle database

    Enable supplemental logging in an AWS RDS Oracle instance

    EXEC rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');
    EXEC rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','PRIMARY KEY');
    EXEC rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','UNIQUE');
    

    After you enable supplemental logging, switch the log file.

    EXEC rdsadmin.rdsadmin_util.switch_logfile;
    

    When you enable supplemental logging in an AWS RDS Oracle instance, the lack of table-level control can lead to the following effects:

    • Only PK and UK settings are supported, which may affect the filtering conditions for data. If you need to filter data, you must set alter_supplemental_logging('ADD','ALL'), which will significantly increase the log volume.

    • Inconsistencies in the table structures (such as primary keys and unique keys) between the source and target databases can lead to data quality issues.

    Enable supplemental logging in a self-managed Oracle database

    LogMiner Reader supports Oracle systems in which supplemental logging is configured only at the table level. If new tables are created in the source Oracle database during migration, enable PK and UK supplemental logging for them before executing DML operations. Otherwise, the data migration service reports an incomplete-log exception.

    Notice

    Supplemental logging must be enabled in the Oracle primary database.

    To address issues such as inconsistent indexes between the source and target databases, ETL operations not meeting expectations, and reduced performance for partitioned table migrations, you need to add the following supplemental logs:

    • Add supplemental_log_data_pk and supplemental_log_data_ui at the database or table level.

    • Add specific columns to the supplemental logs.

      • Add all PK and UK-related columns from both the source and target databases. This resolves inconsistencies in indexes between the source and target databases.

      • If ETL is involved, add the ETL columns. This ensures ETL operations meet expectations.

      • If the target database is a partitioned table, add the partitioning columns. This prevents performance degradation due to the inability to perform partition pruning.

      You can execute the following statement to check the addition results.

      SELECT log_group_type FROM all_log_groups WHERE OWNER = '<schema_name>' AND table_name = '<table_name>';
      

      If the query result contains ALL COLUMN LOGGING, the check passes. Otherwise, verify that all the columns mentioned above are present in the ALL_LOG_GROUP_COLUMNS view.
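
      To list the columns that are already covered, you can query the ALL_LOG_GROUP_COLUMNS dictionary view directly:

      ```sql
      -- Columns included in the table's supplemental log groups.
      SELECT column_name
      FROM all_log_group_columns
      WHERE owner = '<schema_name>' AND table_name = '<table_name>';
      ```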

      Here is an example of how to add specific columns to the supplemental logs:

      ALTER TABLE <table_name> ADD SUPPLEMENTAL LOG GROUP <table_name_group> (c1, c2) ALWAYS;
      

    The following table lists the risks and solutions when a DDL operation is performed during the running of a data migration task.

    Operation Risk Solution
    CREATE TABLE (and the table needs to be synchronized) If the target database has a partitioned table and the indexes are inconsistent between the source and target databases, or ETL is required, it may affect data migration performance and lead to ETL not meeting expectations. Enable PK and UK supplemental logging at the database level. Manually add the relevant columns to the supplemental logs.
    Add, delete, or modify PK/UK/partition columns or ETL columns This may not meet the rule of adding supplemental logs at startup, leading to data inconsistencies or reduced data migration performance. Follow the rules for adding supplemental logs as mentioned above.

    LogMiner Reader performs the following two checks. If it detects that supplemental logging is not enabled, it exits.

    • Check if supplemental_log_data_pk and supplemental_log_data_ui are enabled at the database level.

      Execute the following command to check whether supplemental logging is enabled. If the query result shows YES for both columns, supplemental logging is enabled.

      SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;
      

      If not enabled, perform the following steps:

      1. Execute the following statement to enable supplemental logging.

        ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
        
      2. After enabling, switch the archive logs twice and wait more than 5 minutes before starting the task. In an Oracle RAC deployment, the instances must alternate when switching.

        ALTER SYSTEM SWITCH LOGFILE;
        

        In an Oracle RAC deployment, if one instance switches its log file multiple times and another instance then switches without alternating, the later-switched instance may locate logs generated before supplemental logging was enabled when it determines the starting log file.

    • Check if supplemental_log_data_pk and supplemental_log_data_ui are enabled at the table level.

      1. Execute the following statement to check if supplemental_log_data_min is enabled at the database level.

        SELECT supplemental_log_data_min FROM v$database;
        

        If the query result is YES or IMPLICIT, it indicates that it is enabled.

      2. Execute the following statement to check if table-level supplemental logging is enabled for the table to be synchronized.

        SELECT log_group_type FROM all_log_groups WHERE OWNER = '<schema_name>' AND table_name = '<table_name>';
        

        Each type of supplemental log returns one row. The result must include ALL COLUMN LOGGING, or both PRIMARY KEY LOGGING and UNIQUE KEY LOGGING.

        If table-level supplemental logging is not enabled, execute the following statement.

        ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
        
      3. After enabling, switch the archive logs twice and wait more than 5 minutes before starting the task. In an Oracle RAC deployment, the instances must alternate when switching.

        ALTER SYSTEM SWITCH LOGFILE;
        

    Set the system parameter of the Oracle database (optional)

    When you use a self-managed Oracle database, we recommend that you set the _log_parallelism_max parameter of the Oracle database to 1. By default, the value of this parameter is 2.

    1. Query the value of _log_parallelism_max. You can use the following two methods:

      • Method 1

        SELECT NAM.KSPPINM,VAL.KSPPSTVL,NAM.KSPPDESC FROM SYS.X$KSPPI NAM,SYS.X$KSPPSV VAL WHERE NAM.INDX= VAL.INDX AND NAM.KSPPINM LIKE '_%' AND UPPER(NAM.KSPPINM) LIKE '%LOG_PARALLEL%';
        
      • Method 2

        SELECT VALUE FROM v$parameter WHERE name = '_log_parallelism_max';
        
    2. Modify the value of _log_parallelism_max. You can use the following two methods:

      • Method 1: Modify the parameter for an Oracle RAC database

        ALTER SYSTEM SET "_log_parallelism_max" = 1 SID = '*' SCOPE = spfile;
        
      • Method 2: Modify the parameter for a non-Oracle RAC database

        ALTER SYSTEM SET "_log_parallelism_max" = 1 SCOPE = spfile;
        
    3. After you modify the _log_parallelism_max parameter, restart the instance, switch the archive logs twice, and wait for more than 5 minutes before you start a task.
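
      For example, run the following statement twice after the restart to switch the archive logs:

      ```sql
      -- Run twice, then wait more than 5 minutes before starting the task.
      ALTER SYSTEM SWITCH LOGFILE;
      ALTER SYSTEM SWITCH LOGFILE;
      ```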

    Supported source and target instance types

    Cloud vendor Source Target
    AWS RDS Oracle OceanBase Oracle Compatible (Transactional)
    AWS Self-managed Oracle OceanBase Oracle Compatible (Transactional)
    Huawei Cloud Self-managed Oracle OceanBase Oracle Compatible (Transactional)
    Google Cloud Self-managed Oracle OceanBase Oracle Compatible (Transactional)
    Alibaba Cloud Self-managed Oracle OceanBase Oracle Compatible (Transactional)

    Procedure

    1. Create a data migration task.


      1. Log in to the OceanBase Cloud console.

      2. In the left-side navigation pane, select Data Services > Migrations.

      3. On the Migrations page, click the Migrate Data tab.

      4. On the Migrate Data tab, click Create Task in the upper-right corner.

    2. In the task name field, enter a custom migration task name.

      We recommend that you use a combination of Chinese characters, numbers, and English letters. The name cannot contain spaces and must be less than 64 characters in length.

    3. On the Configure Source & Target page, configure the parameters.

      1. In the Source Profile section, configure the parameters.

        If you want to reuse the configuration of an existing data source, click Quick Fill next to Source Profile and select the data source from the drop-down list. The parameters in the Source Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon to the right of Quick Fill.

        You can also click Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage data sources of different types. For more information, see Data Source.

        Parameter Description
        Cloud Vendor At present, supported cloud vendors are AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud.
        Database Type The type of the source. Select Oracle.
        Instance Type
        • When you select AWS as the cloud vendor, the supported instance types are RDS Oracle and Self-managed Oracle.
        • When you select Huawei Cloud or Google Cloud as the cloud vendor, the supported instance type is Self-managed Oracle.
        Region The region of the source database.
        Connection Type Available connection types are Endpoint and Public IP.
        • If you select Endpoint connection type, you need to first add the account ID displayed on the page to the whitelist of your endpoint service. This allows the endpoint from that account to connect to the endpoint service. For more information, see the corresponding topic under Connect via private network.
        • When you select AWS as the cloud vendor, if you selected Acceptance required for the Require acceptance for endpoint parameter when you created the endpoint service, the data migration service prompts you to accept the endpoint connection request in the AWS console when it first connects to the PrivateLink.
        • When your Cloud Vendor is Google Cloud, add authorized projects to Published Services. After authorization, no manual authorization is needed when you test the data source connection.
        If you select the Public IP connection type, you must add the data migration IP address to the Oracle database whitelist. For more information, see the corresponding topic under Connect via public network.

        Note

        You need to select the source and target regions before the page displays the data source IP addresses that need to be added to the whitelist.

        Connection Details
        • When you select Connection Type as Endpoint, enter the endpoint service name.
        • When you select Connection Type as Public IP, enter the IP address and port number of the database host machine.
        Service Name The service name of the Oracle database.
        Database Account The name of the Oracle database user for data migration.
        Password The password of the database user.
      2. In the Target Profile section, configure the parameters.

        If you want to reuse the configuration of an existing data source, click Quick Fill next to Target Profile and select the data source from the drop-down list. The parameters in the Target Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon to the right of Quick Fill.

        You can also click Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage data sources of different types. For more information, see Data Source.

        Parameter Description
        Cloud Vendor We support AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud. You can choose the same cloud vendor as the source, or perform cross-cloud data migration.

        Notice

        Cross-cloud vendor data migration is disabled by default. If you need to use this feature, please contact our technical support.

        Database Type Select OceanBase Oracle Compatible as the database type for the target.
        Instance Type Select Dedicated (Transactional).
        Region The region of the target database.
        Instance The ID or name of the instance to which the Oracle-compatible tenant of OceanBase Database belongs. You can view the ID or name of the instance on the Instances page.

        Note

        When your cloud vendor is Alibaba Cloud, you can also select a cross-account authorized instance of an Alibaba Cloud primary account. For more information, see Alibaba Cloud account authorization.

        Tenant The ID or name of the Oracle-compatible tenant of OceanBase Database. You can expand the information about the target instance on the Instances page and view the ID or name of the tenant.
        Database Account The name of the database user in the Oracle-compatible tenant of OceanBase Database for data migration.
        Password The password of the database user.
    4. Click Test and Continue.

    5. On the Select Type & Objects page, configure the parameters.

      1. Select One-way Sync for Sync Topology.

        Data migration supports One-way Sync and Two-way Sync. This topic introduces the operation of one-way synchronization. For more information on two-way synchronization, see Configure a two-way synchronization task.

      2. Select the migration type for your data migration task.

        Options of Migration Type are Schema Migration, Full Migration, and Incremental Synchronization.

        Migration type Description
        Schema migration If you select this migration type, you must define the mapping between the character sets. The data migration service only copies schemas from the source database to the target database without affecting the schemas in the source.
        Full Migration After the full migration task begins, the data migration service will transfer the existing data from the source database tables to the corresponding tables in the target database.
        Incremental Synchronization After the incremental synchronization task begins, the data migration service will synchronize the changes (inserts, updates, or deletes) from the source database to the corresponding tables in the target database. Incremental Synchronization includes DML Synchronization and DDL Synchronization. You can select based on your needs. For more information on synchronizing DDL, see Custom DML/DDL configurations.
      3. In the Select Migration Objects section, specify how to select the migration objects.

        You can select migration objects in two ways: Specify Objects and Match by Rule.

      4. In the Select Migration Scope section, select migration objects.

        • If you select Specify Objects, data migration supports Table-level and Database-level. Table-level migration allows you to select one or more tables or views from one or more databases as migration objects. Database-level migration allows you to select an entire database as a migration object. If you select table-level migration for a database, database-level migration is no longer supported for that database. Conversely, if you select database-level migration for a database, table-level migration is no longer supported for that database.

          After selecting Table-level or Database-level, select the objects to be migrated in the left pane and click > to add them to the right pane.

          The data migration service allows you to rename objects, set row filters, and remove a single migration object or all migration objects.


          Note

          Take note of the following items when you select Database-level:

          • The right-side pane displays only the database name and does not list all objects in the database.

          • If you have selected DDL Synchronization-Synchronize DDL, newly added tables in the source database can also be synchronized to the target database.

          Operation Description
          Import Objects In the list on the right side of the selection area, click Import in the upper right corner. For more information, see Import migration objects.
          Rename an object The data migration service allows you to rename a migration object. For more information, see Rename a migration object.
          Set row filters The data migration service allows you to filter rows by using WHERE conditions. For more information, see Use SQL conditions to filter data. You can also view column information about the migration objects in the View Column section.
          Remove one or all objects The data migration service allows you to remove one or all migration objects during data mapping.
          • Remove a single migration object
            In the right-side pane, hover the pointer over the object that you want to remove, and then click Remove.
          • Remove all migration objects
            In the right-side pane, click Clear All. In the dialog box that appears, click OK to remove all migration objects.
        • If you select Match by Rule, for more information, see Configure database-to-database matching rules.

    6. Click Next. On the Migration Options page, configure the parameters.

      • Schema migration

        On the Select Type & Objects page, select Schema Migration, and the following parameters will be displayed only if the character sets of the source and target are different.


        When the character sets of the source and target are different (e.g., the source is GBK and the target is UTF-8), there may be cases of field truncation and data inconsistency. You can configure the Character Length Expansion Factor to increase the length of character type fields.

        Note

        The extended length cannot exceed the maximum limit of the target.

      • Full migration

        The following parameters will be displayed only if Full Migration is selected on the Select Type & Objects page.


        Parameter Description
        Read Concurrency This parameter specifies the number of concurrent threads for reading data from the source during full migration. The maximum number of concurrent threads is 512. A high number of concurrent threads may cause high pressure on the source and affect business operations.
        Write Concurrency This parameter specifies the number of concurrent threads for writing data to the target during full migration. The maximum number of concurrent threads is 512. A high number of concurrent threads may cause high pressure on the target and affect business operations.
        Rate Limiting for Full Migration You can decide whether to limit the full migration rate based on your needs. If you enable this option, you must also set the RPS (maximum number of data rows that can be migrated to the target per second during full migration) and BPS (maximum amount of data that can be migrated to the target per second during full migration).

        Note

        The RPS and BPS values specified here are only for throttling and limiting capabilities. The actual performance of full migration is limited by factors such as the source, target, and instance specifications.

        Handle Non-empty Tables in Target Database This parameter specifies the strategy for handling records in target table objects. Valid values: Stop Migration and Ignore.
        • If you select Stop Migration, data migration will report an error when target table objects contain data, indicating that migration is not allowed. Please handle the data in the target database before resuming migration.

          Notice

          If you click Restore after an error occurs, data migration will ignore this setting and continue to migrate table data. Proceed with caution.

        • If you select Ignore, when target table objects contain data, data migration will adopt the strategy of recording conflicting data in logs and retaining the original data.
        Post-Indexing This parameter specifies whether to postpone index creation until full migration is completed. If you select this option, note the following items.

        Notice

        • Before you select this option, make sure that you have selected both Schema Migration and Full Migration on the Select Migration Type page.

        • Only non-unique indexes can be created after the migration.

        If post-indexing is allowed, we recommend that you adjust the following business tenant parameters by using a command-line client, based on the hardware configuration of OceanBase Database and the current business traffic.

        -- File memory buffer limit
        ALTER SYSTEM SET _temporary_file_io_area_size = '10' tenant = 'xxx';
        -- For OceanBase Database V4.x, disable throttling
        ALTER SYSTEM SET sys_bkgd_net_percentage = 100;
        
      • Incremental synchronization

        On the Select Type & Objects page, select One-way Sync > Incremental Synchronization to display the following parameters.


        Parameter Description
        Write Concurrency This parameter specifies the number of concurrent threads for writing data to the target during incremental synchronization. The maximum number of concurrent threads is 512. A high number of concurrent threads may cause high pressure on the target and affect business operations.
        Rate Limiting for Incremental Migration You can decide whether to limit the incremental synchronization rate based on your needs. If you enable this option, you must also set the RPS (maximum number of data rows that can be migrated to the target per second during incremental synchronization) and BPS (maximum amount of data that can be migrated to the target per second during incremental synchronization).

        Notice

        The RPS and BPS values specified here are only for throttling. The actual performance of incremental synchronization is limited by factors such as the source, target, and instance specifications.

        Incremental Synchronization Start Timestamp
        • If Full Migration has been selected when choosing the migration type, this parameter will not be displayed.
        • If Full Migration has not been selected when choosing the migration type, but Incremental Synchronization has been selected, please specify here the data to be migrated after a certain timestamp. The default is the current system time. For more information, see Set incremental synchronization timestamp.
      • Advanced Options

        The parameters in this section will only be displayed if the target OceanBase Database Oracle-compatible tenant is V4.3.0 or later, and Schema Migration or Incremental Synchronization > DDL Synchronization was selected on the Select Type & Objects page.


        The storage types for target table objects include Default, Row Storage, Column Storage, and Hybrid Row-Column Storage. This configuration determines the storage type of target table objects during schema migration or incremental synchronization.

        Note

        The Default option resolves to one of the other options based on the target tenant's parameter settings. Table objects created during schema migration, as well as new tables created by incremental DDL, follow the configured storage type.
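        For reference, in OceanBase Database V4.3 and later a table's storage type is expressed through the `WITH COLUMN GROUP` clause, and the tenant-level default that the Default option falls back to is controlled by the `default_table_store_format` parameter. A hedged sketch; the table and column names are illustrative:

        ```sql
        -- Column storage: one column group per column
        CREATE TABLE t_col (id NUMBER, val VARCHAR2(64))
          WITH COLUMN GROUP (each column);

        -- Hybrid row-column storage: a full row group plus per-column groups
        CREATE TABLE t_hybrid (id NUMBER, val VARCHAR2(64))
          WITH COLUMN GROUP (all columns, each column);

        -- Tenant-level default applied when no column group clause is given
        -- (possible values: row, column, compound)
        ALTER SYSTEM SET default_table_store_format = 'row';
        ```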

    7. Click Next to proceed to the precheck stage of the data migration task.

      During the precheck, the data migration service checks the read and write privileges of the database user and the network connection of the database. A data migration task can be started only after it passes all check items. If an error is returned during the precheck, you can perform the following operations:

      • You can identify and troubleshoot the problem and then perform the precheck again.

      • You can also click Skip in the Actions column of a failed precheck item. In the dialog box that appears, read the warning about the consequences of skipping the check, and then click OK.

    8. After the precheck succeeds, click Purchase to go to the Purchase Data Migration Instance page.

      After the purchase succeeds, you can start the data migration task. For more information about how to purchase a data migration instance, see Purchase a data migration instance. If you do not need to purchase a data migration instance at this time, click Save to go to the details page of the data migration task. You can manually purchase a data migration instance later as needed.

      You can click Configure Validation Task in the upper-right corner of the details page to compare the data differences between the source database and the target database. For more information, see Create a data validation task.

      The data migration service allows you to modify the migration objects when the task is running. For more information, see View and modify migration objects. After the data migration task is started, it is executed based on the selected migration types. For more information, see the "View migration details" section in View details of a data migration task.
