OceanBase logo

OceanBase

A unified distributed database ready for your transactional, analytical, and AI workloads.

DEPLOY YOUR WAY

OceanBase Cloud

The best way to deploy and scale OceanBase

OceanBase Enterprise

Run and manage OceanBase on your infra

TRY OPEN SOURCE

OceanBase Community Edition

The free, open-source distributed database

OceanBase seekdb

Open-source, AI-native search database

Customer Stories

Real-world success stories from enterprises across diverse industries.

View All
BY USE CASES

Mission-Critical Transactions

Global & Multicloud Application

Elastic Scaling for Peak Traffic

Real-time Analytics

Active Geo-redundancy

Database Consolidation

Resources

Comprehensive knowledge hub for OceanBase.

Blog

Live Demos

Training & Certification

Documentation

Official technical guides, tutorials, API references, and manuals for all OceanBase products.

View All
PRODUCTS

OceanBase Cloud

OceanBase Database

Tools

Connectors and Middleware

QUICK START

OceanBase Cloud

OceanBase Database

BEST PRACTICES

Practical guides for using OceanBase more effectively

Company

Learn more about OceanBase – our company, partnerships, and trust and security initiatives.

About OceanBase

Partner

Trust Center

Contact Us



OceanBase Best Practices

All Versions

  • Deploy
    • Configuration guide for read-write splitting in AP scenarios
    • Best practices for read-write splitting
  • Migrate
    • Data transfer solutions in OceanBase Database
    • Overview on data migration
    • Best practices for importing data files to OceanBase Database
    • Best practice for migrating data from other databases to OceanBase Database
    • Massive data migration strategy
    • Best practices for migrating data from MyCat to OceanBase Database
    • Best practices for migrating PostgreSQL to OceanBase MySQL-compatible mode
  • Route
    • ODP routing best practices
  • Table Design
    • Best practices for table design and index optimization
    • Best practices for creating indexes on large tables
    • Best practices for database development
  • Develop
    • Best practices for connecting Java applications to OceanBase Database
    • Best practices for integrating Spark Catalog with OceanBase Database
    • Best practices for achieving optimal performance in batch DML using JDBC and OBServer
    • Best practices for bulk data cleanup in OceanBase Database
    • Best practices for PDML processing in OceanBase Database
    • Best practices for hot tables in OceanBase Database
    • Best practices for auto-increment columns and sequences in OceanBase Database
  • Manage
    • Best practices for resource throttling
    • Best practices for data load balancing
    • Best practices for security certification
    • Best practices for access control
    • Best practices for data encryption
  • Diagnose
    • Best practices for log interpretation in common scenarios
    • Best practices for end-to-end tracing
    • Best practices for using obdiag to collect performance information
    • Best practices for using obdiag to collect diagnostic information of parallel and slow SQL statements
    • Best practices for troubleshooting OceanBase Database performance issues
  • Performance Tuning
    • Best practices for handling slow queries
    • Best practices for collecting statistics to generate an efficient execution plan
    • Best practices for updating hotspot rows
    • Best practices for large object storage performance
    • Best practices for semi-structured storage performance
    • Best practices for OceanBase materialized views
  • Cloud Database
    • Best practices for achieving high availability through cross-cloud active-active deployment
    • High availability through primary and standby databases across clouds
    • High host CPU usage
    • Best practices for read/write splitting in OceanBase Cloud


The Unified Distributed Database for the AI Era.

Follow Us
Products
OceanBase Cloud · OceanBase Enterprise · OceanBase Community Edition · OceanBase seekdb
Resources
Docs · Blog · Live Demos · Training & Certification
Company
About OceanBase · Trust Center · Legal · Partner · Contact Us

© OceanBase 2026. All rights reserved

Cloud Service Agreement · Privacy Policy · Security

Best practices for achieving high availability through cross-cloud active-active deployment

Last Updated: 2025-08-12 09:47:55
What is on this page
Overview
Scenario
Prerequisites
Procedure
Create tenants and database accounts
Create a data migration task
Configure two-way synchronization
Data synchronization and validation


The migration feature of OceanBase Cloud, combined with its cross-region synchronization capability, lets you build cross-cloud active-active, high-availability architectures efficiently.

Note

Cross-cloud-vendor data synchronization and migration is an allowlist feature. To use it, contact OceanBase Cloud technical support.

Overview

OceanBase Cloud supports deploying the same data and services across different cloud service providers. Each cloud environment is active and processes business requests simultaneously, maintaining data consistency through real-time data synchronization. If one cloud environment fails, another can seamlessly take over all traffic, ensuring service continuity. This architecture enhances system fault tolerance, reduces reliance on a single cloud provider, and boosts overall reliability and stability. For more information, see Achieving high availability with cross-cloud active-active architecture.

Scenario

In this scenario, an online mall's business is deployed across different cloud providers. Customers place orders on AWS and on Google Cloud, and the two databases are kept consistent through a real-time cross-cloud active-active replication link. This best practice describes how to implement cross-cloud active-active operations in the OceanBase Cloud console.

Prerequisites

  • Two transactional instances have been created: Instance A on Google Cloud and Instance B on AWS. Instance A serves as the source, and Instance B as the target.

  • Only users with the Project Owner, Project Admin, or Data Services Admin role can create data migration tasks.

  • Avoid performing DDL operations for database or table schema changes in the source database during schema migration or full migration. Otherwise, data migration tasks may be interrupted.

  • Data migration is only supported between tenants of the same type in OceanBase Database. Specifically, data can be migrated from a MySQL-compatible tenant to another MySQL-compatible tenant, or from an Oracle-compatible tenant to another Oracle-compatible tenant.

  • Data migration only supports migrating databases, tables, and column objects with ASCII-compliant names that do not contain special characters (.,'`()=;/& or line breaks).

  • Data migration is not case-sensitive. If the target contains a database or table whose name differs from the source only in capitalization, the pre-check will fail.

  • If the target is a database, data migration does not support triggers in the target database. If triggers exist, data migration may fail.
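The last two prerequisites can be checked up front with queries against `information_schema` in the target MySQL-compatible tenant. This is a hedged sketch: the database name `mall` is an assumed placeholder, and the character class mirrors the special-character list above.

```sql
-- Assumed placeholder database name: mall
-- The pre-check fails if triggers exist in the target database:
SELECT TRIGGER_NAME
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA = 'mall';

-- Flag table names containing non-ASCII or unsupported special
-- characters (.,'`()=;/& or line breaks):
SELECT TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mall'
  AND TABLE_NAME REGEXP '[^ -~]|[.,''`()=;/&]';
```

Both queries should return empty result sets before you proceed.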

Procedure

The following describes how to build a cross-cloud disaster recovery and high-availability database architecture by using the migration feature and synchronization mechanism of OceanBase Cloud.

Create tenants and database accounts

Create tenants, databases, and database accounts for instances A and B, and save the database account password.

  1. On the instance overview page, click Create Tenant to create a MySQL tenant. For more information, see Create a tenant.

  2. Click Create Database and then Create.

  3. Go to the account management page, and then click Create Account. In the dialog box that appears, specify the account name, account type, and other parameters. For more information, see Create and manage an account.

  4. Execute the following statement to create a student table in the database of Instance A.

    CREATE TABLE `student` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `name` varchar(50) NOT NULL,
      `age` int(11) NOT NULL,
      `gender` enum('male','female') NOT NULL,
      `enrollment_date` date NOT NULL,
      PRIMARY KEY (`id`)
    );
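With the table created, you can seed a few rows in Instance A so there is data to observe during the validation steps later. The values below are illustrative only:

```sql
-- Sample rows for the student table on Instance A (illustrative data)
INSERT INTO `student` (`name`, `age`, `gender`, `enrollment_date`) VALUES
  ('Alice', 20, 'female', '2025-09-01'),
  ('Bob',   21, 'male',   '2025-09-01');
```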
    

Create a data migration task

  1. Log in to the OceanBase Cloud console.

  2. In the left-side navigation pane, choose Data Services > Migrations.

  3. On the Migrations page, click the Migrate Data tab.

  4. On the Migrate Data tab, click Create Task.

  5. In the Source Profile section, configure the parameters.

    • Cloud Vendor: Select Google Cloud.
    • Database Type: Select OceanBase MySQL Mode for the source.
    • Instance Type: Select Dedicated (Transactional).
    • Region: Select the region where the source database is located.
    • Instance: Select Instance A.
    • Database Account: The username of the OceanBase Cloud MySQL-compatible database account used for data migration.
    • Password: The password of the database user.
  6. In the Target Profile section, configure the parameters.

    • Cloud Vendor: Select AWS.
    • Database Type: Select OceanBase MySQL Mode for the target.
    • Instance Type: Select Dedicated (Transactional).
    • Region: Select the region where the target database is located.
    • Instance: Select Instance B.
    • Database Account: The username of the OceanBase Cloud MySQL-compatible database account used for data migration.
    • Password: The password of the database user.
  7. Click Test Connection and Continue to configure two-way synchronization.

Configure two-way synchronization

  1. When the synchronization topology is set to two-way sync, the supported migration types include Schema Migration, Full Migration, Incremental Synchronization, and Full Validation.

    • Schema Migration: During schema migration, you need to define the character set mapping relationships. Data migration copies only the schema from the source database to the target database, without affecting the source schema.
    • Full Migration: After a full migration task is initiated, the data migration service migrates the existing data from the source database tables to the corresponding tables in the target database.
    • Incremental Synchronization: After an incremental synchronization task is initiated, data migration synchronizes data changes in the source database (inserts, updates, and deletes) to the corresponding tables in the target database. Incremental Synchronization includes DML Synchronization and DDL Synchronization, which you can customize as needed. For more information, see Customize DML/DDL settings.
    • Full Validation: After full migration is complete and the incremental data in the target database has basically caught up with the source database, data migration automatically initiates a full validation task for the source and target tables. Note: full validation is supported only for tables with unique keys (tables with primary keys or non-null unique keys).
  2. In the Select Migration Objects section, configure the method for selecting migration objects. You can select migration objects by using the Specify Objects and Match by Rule options. In the Select Migration Scope section, select the student table in the target database. For more information, see Configure a two-way synchronization task.

  3. Confirm the migration options.

  4. Click Next. The system performs a pre-check on the forward task.

    In the Pre-check section, data migration checks whether the database user's read and write permissions and the database's network connectivity meet the requirements. You can start the data migration task only after all check items have passed. If the pre-check fails:

    • You can troubleshoot and resolve the issue, and then reinitiate the pre-check until it succeeds.

    • You can also click Skip in the Actions column of the failed pre-check item, and a dialog box will appear, prompting you about the specific impact of skipping this operation. After you confirm the impact, click OK in the dialog box.

  5. After the pre-check succeeds, click Purchase & Start.

  6. After the purchase is successful, return to the data migration page to view the task progress.

  7. When the forward task has entered the incremental synchronization phase, is in the Monitoring state, and its status is Running, click Configure next to the reverse task, confirm the source and target information, and then click Next to configure the reverse task.

  8. After the pre-check succeeds, go to the Purchase Data Migration Instance page to purchase the instance.

  9. Return to the data migration page to view the progress of the two-way synchronization task.
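Because full validation covers only tables with primary keys or non-null unique keys, it can help to list tables that lack any such constraint before starting the task. This is a hedged sketch against `information_schema`; the database name `mall` is an assumed placeholder, and the query does not check whether unique-key columns are non-null:

```sql
-- List base tables in the migrated database that have neither a
-- primary key nor a unique constraint; these are skipped by
-- full validation.
SELECT t.TABLE_NAME
FROM information_schema.TABLES t
LEFT JOIN information_schema.TABLE_CONSTRAINTS c
  ON  c.TABLE_SCHEMA = t.TABLE_SCHEMA
  AND c.TABLE_NAME = t.TABLE_NAME
  AND c.CONSTRAINT_TYPE IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.TABLE_SCHEMA = 'mall'
  AND t.TABLE_TYPE = 'BASE TABLE'
  AND c.CONSTRAINT_NAME IS NULL;
```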

Data synchronization and validation

  1. After the two-way synchronization task is configured, view the initial data of the student table in Instance B to verify the data synchronization from Instance A to Instance B.

  2. Add data to the student table in Instance B.

  3. View the data in the table in Instance A to verify the data synchronization from Instance B to Instance A.
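The three validation steps above can be expressed as plain SQL; run each statement on the instance named in its comment (sample values are illustrative):

```sql
-- Step 1 (on Instance B): confirm the rows written on Instance A arrived.
SELECT * FROM `student` ORDER BY `id`;

-- Step 2 (on Instance B): insert a new row on this side of the link.
INSERT INTO `student` (`name`, `age`, `gender`, `enrollment_date`)
VALUES ('Carol', 22, 'female', '2025-09-02');

-- Step 3 (on Instance A): the new row should appear after reverse
-- synchronization catches up.
SELECT * FROM `student` WHERE `name` = 'Carol';
```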
