Overview on data migration

Last Updated: 2025-06-11 05:52:43
What is on this page

  • Application scenarios
  • Migration paths
    • Data migration from other databases to OceanBase Database
    • Data migration from files to OceanBase Database
  • Migration tools
    • OMS
    • obloader
    • LOAD DATA statements
    • INSERT SQL
    • CREATE EXTERNAL TABLE
    • Flink
    • Canal
    • Kafka
  • Select a migration solution
    • Object storage services
    • File systems
    • Streaming systems
    • Databases

This topic describes the concepts, methods, and tools for migrating data from your local server or other cloud environments to OceanBase Database. In this topic, the target database is an OceanBase database.

Application scenarios

Data migration is a common operation in database O&M and is typical in the following scenarios:

  • Cluster load adjustment and data center relocation
  • Database replacement
  • Logical database replication, including read-write splitting, database disaster recovery, and multi-site high availability
  • Data replication

Migration paths

This topic defines the following migration paths:

  • Data migration from other databases to OceanBase Database
  • Data migration from files to OceanBase Database

Data migration from other databases to OceanBase Database

We recommend that you use OceanBase Migration Service (OMS) to migrate data from a database to OceanBase Database. OMS supports the following data sources:

  • MySQL
  • Oracle
  • DB2 LUW
  • TiDB
  • PostgreSQL

If OMS does not support your database, take the other migration path: dump the data from your database to files, and then import the files into OceanBase Database.

Data migration from files to OceanBase Database

You can import data from CSV files to OceanBase Database in the following ways:

  • LOAD DATA statements in MySQL tenants
  • DataX or DataX Web
  • obloader

You can import data from SQL files to OceanBase Database in the following ways:

  • obloader
  • MySQL or OceanBase Command-Line Client (OBClient) commands
  • Database clients

You can import data from Parquet files to OceanBase Database in the following ways:

  • obloader
  • LOAD DATA statements

You can import data from ORC files to OceanBase Database by using obloader.

Migration tools

OceanBase Database provides a wide range of data migration methods, including:

  • OceanBase solutions

    • OMS, which is designed for large-scale data migration
    • obloader, which is designed for data import from files
  • CLI tools

    • Standard SQL statements LOAD DATA and INSERT, which are easy to use and suitable for small-scale data import
    • CREATE EXTERNAL TABLE statement, which allows you to directly query external data files for more flexible data analysis
    • CREATE TABLE AS SELECT statement
  • Third-party integrated tools

    Flink OceanBase Connector, DataX OceanBase Writer Plugin, Kafka, and Canal

OMS

Characteristics:

OMS has the following characteristics:

  • Real-time data migration without interrupting or affecting business applications
  • High-performance, secure, and reliable data migration
  • All-in-one interaction

Scenarios:

OMS is suitable for large-scale data migration between databases.

References:

For more information about OMS, see OMS documentation.

obloader

Characteristics:

obloader has the following characteristics:

  • Allows you to import database object definitions and table data from local disks, Network Attached Storage (NAS), Hadoop Distributed File System (HDFS), Alibaba Cloud Object Storage Service (OSS), Tencent Cloud Object Storage (COS), Huawei Cloud Object Storage Service (OBS), Apache Hadoop, and Amazon Simple Storage Service (S3).
  • Allows you to import files in the INSERT SQL format that are exported by using mysqldump.
  • Allows you to import data files in standard formats, such as CSV, INSERT SQL, Apache ORC, and Apache Parquet.
  • Allows you to set data preprocessing rules and configure field mappings between files and tables.
  • Supports features such as import throttling, memory exhaustion prevention, resumption after an interruption, and automatic retries.
  • Allows you to specify a custom log directory to store bad data and conflicting data during import.
  • Automatically splits large files without consuming additional storage space.
  • Supports encryption of sensitive parameters specified in commands, such as the database account and password and the cloud storage account and password.

Scenarios:

obloader is suitable for large-scale data import.

References:

For more information about obloader, see obloader documentation.

LOAD DATA statements

LOAD DATA LOCAL

Characteristics:

The LOAD DATA LOCAL statement has the following characteristics:

  • Allows you to import and insert local files to OceanBase Database as network streams.
  • Supports importing only a small amount of data at a time.
  • Supports importing only one file at a time.
  • Allows you to import CSV, SQL, and Parquet files.

Scenarios:

The LOAD DATA LOCAL statement is suitable for small-scale data import.
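As a sketch of the statement's shape (the table name, file path, and delimiters here are hypothetical), a CSV import in a MySQL tenant might look like this:

```sql
-- Hypothetical example: load a local CSV file into table t1 in a MySQL tenant.
-- Requires a client connection with the local-infile capability enabled.
LOAD DATA LOCAL INFILE '/tmp/t1.csv'
INTO TABLE t1
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the header row
```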

References:

For more information about the LOAD DATA LOCAL statement, see Import data by using the LOAD DATA statement.

LOAD DATA FROM OSS

Characteristics:

The LOAD DATA FROM OSS statement has the following characteristics:

  • Allows you to import files from Alibaba Cloud OSS to OceanBase Database.
  • Allows you to import multiple files from Alibaba Cloud OSS at a time.
  • Supports importing only CSV files.

Scenarios:

The LOAD DATA FROM OSS statement is suitable for large-scale data import.
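As an illustrative sketch (the bucket path, endpoint host, and credentials below are placeholders; verify the exact option names against the referenced documentation):

```sql
-- Hypothetical example: load CSV files from an Alibaba Cloud OSS bucket.
-- The bucket path, host, and credentials are placeholders.
LOAD DATA /*+ PARALLEL(4) */
REMOTE_OSS INFILE 'oss://my-bucket/data/*.csv?host=oss-cn-hangzhou.aliyuncs.com&access_id=xxx&access_key=xxx'
INTO TABLE t1
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```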

References:

For more information about the LOAD DATA FROM OSS statement, see LOAD DATA (Oracle mode) and LOAD DATA (MySQL mode).

INSERT SQL

Scenarios:

  • The INSERT INTO VALUES statement is suitable for writing a small amount of data to an internal table.
  • The INSERT INTO SELECT FROM <table_name> statement is suitable for writing the query result of another internal or external table to the target table. In other words, it is suitable for data migration between tables.
  • The INSERT /*+ [APPEND | direct(need_sort, max_error, 'full')] enable_parallel_dml parallel(N) */ INTO <table_name> <select_statement> statement is suitable for full and incremental direct load.
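For instance, a table-to-table copy with a direct-load hint might be sketched as follows (table names and hint parameters are illustrative; verify them against the INSERT documentation):

```sql
-- Hypothetical example: copy rows from src_table into dst_table.
-- Plain table-to-table migration:
INSERT INTO dst_table SELECT * FROM src_table;

-- Full direct load with parallel DML (hint parameters are illustrative):
INSERT /*+ direct(true, 0, 'full') enable_parallel_dml parallel(8) */
INTO dst_table SELECT * FROM src_table;
```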

References:

For more information about the INSERT statement, see INSERT (Oracle mode) and INSERT (MySQL mode).

CREATE EXTERNAL TABLE

Scenarios:

External tables are a key feature of database management systems. Generally, the data of a regular table is stored in the storage space of the database, whereas the data of an external table resides in an external storage service.
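A minimal sketch of the idea, with hypothetical column names, location URI, and format options (check the referenced overviews for the exact syntax):

```sql
-- Hypothetical example: map CSV files in external storage to a queryable table.
-- The location URI, credentials, and format options are placeholders.
CREATE EXTERNAL TABLE ext_orders (
  order_id INT,
  amount   DECIMAL(10, 2)
)
LOCATION = 'oss://my-bucket/orders/?host=oss-cn-hangzhou.aliyuncs.com&access_id=xxx&access_key=xxx'
FORMAT = (
  TYPE = 'CSV',
  FIELD_DELIMITER = ',',
  SKIP_HEADER = 1
);

-- The external data can then be queried directly or copied into an internal table:
INSERT INTO orders SELECT * FROM ext_orders;
```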

References:

For more information about external tables, see Overview (Oracle mode) and Overview (MySQL mode).

Flink

Scenarios:

Flink OceanBase Connector is suitable for importing data from Flink in real time.

References:

For more information, see Use Flink CDC to synchronize data from a MySQL database to OceanBase Database.

Canal

Scenarios:

Canal is suitable for importing data to OceanBase Database in real time.

References:

For more information, see Use Canal to synchronize data from a MySQL database to OceanBase Database.

Kafka

Scenarios:

Kafka is suitable for importing streaming data to OceanBase Database in real time. For example, CloudCanal can migrate or synchronize data from MySQL, Oracle, and PostgreSQL databases to OceanBase Database.

References:

For more information, see Best practices for integrating OceanBase Database with Kafka.

Select a migration solution

This section describes solutions for migrating data from common data sources, organized by data source type, to help you quickly select an appropriate migration strategy.

Object storage services

The following table describes the solutions for importing data from object storage services of cloud service providers to OceanBase Database.

Data source and supported import solutions:

  • Alibaba Cloud OSS
    • DataX
    • LOAD DATA INFILE
    • Download the data to a local or accessible server. Then use a MySQL CLI tool or SQL management tool to import the data to OceanBase Database. You can also write scripts and execute an SQL statement by using a MySQL connector library to batch insert the data.
  • Tencent Cloud COS
    • LOAD DATA INFILE
    • Download the data to a local or accessible server. Then use a MySQL CLI tool or SQL management tool to import the data to OceanBase Database. You can also write scripts and execute an SQL statement by using a MySQL connector library to batch insert the data.
  • Huawei Cloud OBS
    • LOAD DATA INFILE
    • Download the data to a local or accessible server. Then use a MySQL CLI tool or SQL management tool to import the data to OceanBase Database. You can also write scripts and execute an SQL statement by using a MySQL connector library to batch insert the data.
    • Flink CDC
  • Amazon S3
    • Download the data to a local or accessible server. Then use a MySQL CLI tool or SQL management tool to import the data to OceanBase Database. You can also write scripts and execute an SQL statement by using a MySQL connector library to batch insert the data.
  • Azure Blob Storage
    • Download the data to a local or accessible server. Then use a MySQL CLI tool or SQL management tool to import the data to OceanBase Database. You can also write scripts and execute an SQL statement by using a MySQL connector library to batch insert the data.
  • Google Cloud GCS
    • Download the data to a local or accessible server. Then use a MySQL CLI tool or SQL management tool to import the data to OceanBase Database. You can also write scripts and execute an SQL statement by using a MySQL connector library to batch insert the data.

File systems

The following table describes the solutions for importing data from local or distributed file systems to OceanBase Database.

Data source and supported import solutions:

  • Local file system (NFS and NAS)
    • If the data file is in the CSV format, use obloader.
    • If the data file is in the SQL format, use the LOAD DATA INFILE statement.
    • Write scripts and execute an SQL statement by using a MySQL connector library to batch insert the data.
  • HDFS
    • Import the data to a TXT file, and then use obloader to import the file.
    • Write custom extract-transform-load (ETL) scripts, and read data from HDFS through an HDFS API. Then use a MySQL client such as JDBC or mysql-connector-python to convert the data and import it to OceanBase Database.

Streaming systems

The following table describes the solutions for importing data from streaming systems to OceanBase Database.

Data source and supported import solutions:

  • Flink
    • MySQL CDC Connector
    • Flink JDBC SQL Connector
    • Flink OceanBase CDC Connector
  • Canal
    • Canal Server
    • Canal Adapter
  • Spark
    • JDBC
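As one hedged illustration of the Flink JDBC SQL Connector route (the connection URL, table name, and credentials are placeholders; OceanBase MySQL tenants speak the MySQL protocol over JDBC):

```sql
-- Hypothetical Flink SQL sink: write a stream into OceanBase over JDBC.
-- URL, table name, and credentials are placeholders.
CREATE TABLE ob_sink (
  id   INT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector'  = 'jdbc',
  'url'        = 'jdbc:mysql://127.0.0.1:2883/test',
  'table-name' = 't1',
  'username'   = 'root@sys',
  'password'   = '******'
);

-- Any upstream stream (for example, a MySQL CDC source) could then be sunk with:
-- INSERT INTO ob_sink SELECT id, name FROM mysql_cdc_source;
```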

Databases

The following table describes the solutions for importing data from other databases to OceanBase Database.

Data source and supported import solutions:

  • MySQL database
    • Use OMS to migrate data to OceanBase Database.
    • DataX.
    • Dump data from the MySQL database, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database. The solution is similar to those for migrating data from file systems.
  • Oracle database
    • Online data migration: use OMS to migrate data to OceanBase Database.
    • DataX.
    • Dump data from the Oracle database, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
  • PostgreSQL database
    • Use OMS to migrate data to OceanBase Database.
    • Dump data from the PostgreSQL database, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
  • TiDB database
    • Use OMS to migrate data to OceanBase Database.
    • Dump data from the TiDB database, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
  • SQL Server database
    • DataX.
    • Dump data from the SQL Server database, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
  • StarRocks database
    • Dump data from the StarRocks database, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
  • Doris database
    • Dump data from the Doris database, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
  • HBase database
    • DataX.
    • Dump data from the HBase database, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
  • MaxCompute
    • DataWorks
    • Dump data from MaxCompute, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
  • Hologres
    • DataWorks
    • Dump data from Hologres, and then use obloader or the LOAD DATA statement to import the data to OceanBase Database.
