OceanBase Loader and Dumper

V4.3.5

Multi-database import

Last Updated: 2026-04-02 06:15:21

OBLOADER V4.3.5 and later versions support importing database objects to multiple databases or schemas in a single import task, while maintaining compatibility with the original single-database import behavior.

Overview

The multi-database import feature allows you to specify the range of objects to import by using object expressions in the <schema.object> format, such as test.table1, test1.*, *.*, test[0-9].tbl, test*.table, or test?.tbl.
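As an illustration of how these wildcard forms behave, shell `case` globbing uses the same `*`, `?`, and `[0-9]` syntax. This is a sketch only: OBLOADER's matcher is its own implementation, and the schema names here are made up.

```shell
# Illustration only (shell globbing, not OBLOADER's own matcher):
# 'case' patterns use the same wildcard forms *, ?, and [0-9].
match_schema() {
  case "$1" in
    test[0-9]) echo "$1 matches test[0-9]" ;;
    test*)     echo "$1 matches test*" ;;
    *)         echo "$1 matches nothing" ;;
  esac
}

for s in test test1 test25 prod; do
  match_schema "$s"
done
```

Note that `test[0-9]` matches exactly one trailing digit (test1 but not test25), while `test*` matches any suffix, including the empty one.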

When you use the multi-database import feature, the system will print an import list and ask for confirmation. You can use the -y option to skip the confirmation.

Implementation

Single-database mode

If the object parameters such as --table and --view do not contain a specific database or schema (i.e., they do not contain a period (.)), the system operates in single-database mode, maintaining its original behavior.

Multi-database mode

If any object parameter value contains a specific database or schema (i.e., it contains a period (.)), the system switches to multi-database mode. Examples include test.table1, test1.*, or *.*.

In multi-database mode, a single multi-database import command is split into multiple single-database import subtasks. Here's how it works:

  • The system parses the object parameters such as --table and --view, and distributes the <schema.object> expressions to corresponding subtasks for each database or schema, specifying the import range for each.

  • If an object does not contain a specific database or schema (e.g., tbl2 in --table 'db1.tbl1, tbl2' -D 'db2'), the system uses the default database specified by -D for that object, while keeping other options unchanged.

In multi-database mode, importing objects to different databases or schemas is equivalent to executing multiple single-database import tasks. Here's an example:

obloader -D test --table 'test1.tbl1,test2.tbl2' ...
-> obloader -D test1 --table tbl1 ...
-> obloader -D test2 --table tbl2 ...

obloader -D test --table 'tbl1,test2.tbl2' ...
-> obloader -D test --table tbl1 ...
-> obloader -D test2 --table tbl2 ...

By default, the import directory structure matches the dataset generated by a multi-database export, which is organized by database/schema level. Depending on how the directory structure of the data to be imported was generated, multi-database import falls into the following three scenarios.

Notice

If the database or schema name contains characters such as [, {, or other wildcard control characters, you need to escape these characters to successfully import the data.
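The reason escaping is needed can be seen from generic glob semantics (a sketch only; OBLOADER's exact escape syntax may differ, so verify it against the command-line options reference — the name db[1] below is hypothetical):

```shell
# Sketch (generic glob behavior): an unescaped "[" opens a character
# class, so the pattern db[1]* never matches the literal name "db[1]".
name='db[1]'
case "$name" in
  db[1]*) unescaped=matched ;;
  *)      unescaped=no_match ;;
esac
# Escaping the brackets makes them literal characters again.
case "$name" in
  db\[1\]*) escaped=matched ;;
  *)        escaped=no_match ;;
esac
echo "unescaped: $unescaped / escaped: $escaped"
```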

  • The directory structure to be imported is generated by OBDUMPER.

    Assume that you specify -f ./outputs when you use OBLOADER for multi-database import, and the outputs directory has the following structure:

    ./outputs/
    └── data
        ├── test
        │   ├── TABLE
        │   │   ├── sample_tbl1.csv
        │   │   ├── sample_tbl1-schema.sql
        │   │   ├── sample_tbl2.csv
        │   │   └── sample_tbl2-schema.sql
        │   └── VIEW
        │       ├── sample_tbl1_view-schema.sql
        │       └── sample_tbl2_view-schema.sql
        └── test1
            ├── TABLE
            │   ├── sample_tbl1.csv
            │   ├── sample_tbl1-schema.sql
            │   ├── sample_tbl2.csv
            │   └── sample_tbl2-schema.sql
            └── VIEW
                ├── sample_tbl1_view-schema.sql
                └── sample_tbl2_view-schema.sql
    
    7 directories, 12 files
    

    In this case, if you specify the following parameters for multi-database import: --table 'test*.*' --view 'test*.*' -f ./outputs, the system will locate the data in the corresponding database/schema directories based on the <schema.object> expressions and import it.

    Assume that the target OceanBase Database has three databases named test, test1, and test2. In this case, multi-database import will import the objects in the ./outputs/data/test directory to the test database, and the objects in the ./outputs/data/test1 directory to the test1 database. The test2 database does not have a corresponding directory structure, so no data will be imported to it.

  • The directory structure to be imported is not generated by OBDUMPER.

    This directory structure is created by the user or another program. If you perform multi-database import in this directory structure, the behavior is equivalent to executing multiple single-database import tasks in the same directory.

    Assume that you specify --table 'test*.*' --view 'test*.*' -f ./outputs, and the outputs directory has the following structure:

    ./outputs/
    ├── sample_tbl1.csv
    ├── sample_tbl1-schema.sql
    ├── sample_tbl2.csv
    └── sample_tbl2-schema.sql
    
    0 directories, 4 files
    

    Assume that the target OceanBase Database has three databases named test, test1, and test2. Since the import program cannot identify the database names from the files, multi-database import will import all objects in the ./outputs directory to the test, test1, and test2 databases in the target OceanBase Database.

  • The directory structure to be imported is not generated by OBDUMPER, and the --file-regular-expression parameter is used.

    The --file-regular-expression parameter allows you to use regular expressions to define rules for matching specific files for import. It also allows you to use capture groups to extract schema and table names from the file names.

    Assume that you specify --table 'test*.*' --file-regular-expression '(?<schema>[a-z0-9]+)\.(?<table>[a-z_0-9]+)\.csv' -f ./outputs for import, and the outputs directory has the following structure:

    ./outputs/
    ├── test1.sample_tbl2.csv
    └── test.sample_tbl1.csv
    
    0 directories, 2 files
    

    Assume that the target OceanBase Database has three databases named test, test1, and test2. In this case, multi-database import will import the ./outputs/test1.sample_tbl2.csv file to the test1 database, and the ./outputs/test.sample_tbl1.csv file to the test database. The test2 database does not have a corresponding directory structure, so no data will be imported to it.
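Before running the import, you can dry-run such a file-matching pattern locally with standard tools (a sketch using the example file names above; `grep -E` has no named capture groups, which are a Java-regex feature, so the `(?<schema>...)` markers are dropped while the match itself stays the same):

```shell
# Sketch: preview which files the regular expression would select
# before handing the full pattern to --file-regular-expression.
pattern='^[a-z0-9]+\.[a-z_0-9]+\.csv$'
selected=""
for f in test.sample_tbl1.csv test1.sample_tbl2.csv README.md; do
  if printf '%s\n' "$f" | grep -Eq "$pattern"; then
    selected="$selected $f"
  fi
done
echo "selected:$selected"
```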

Examples

Example 1: Single-database export + single-database import (original behavior)

  • Import the test database exported from /outputs back to the test database.

    ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** -D test --csv --table '*' -f /outputs
    
  • Import the test database exported from /outputs to another database named test1.

    ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** -D test1 --csv --table '*' -f /outputs
    

Example 2: Multi-database export + multi-database import (import each database as is)

Before you proceed, run the following command to export the test1, test2, and test3 databases:

./obdumper -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --csv --table 'test[1-3].*' -f /outputs

Notice

When you export multiple databases, you can only import the corresponding database files to the databases with the same names. For example, the test2 database file can be imported only to the test2 database. This is the default behavior of multi-database import.

  • Import all databases

    • Method 1

      ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --csv --table 'test[1-3].*' -f /outputs
      
    • Method 2

      ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --csv --table '*.*' -f /outputs
      
    • Method 3

      ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --csv --table 'test1.*, test2.*, test3.*' -f /outputs
      
  • Import only the test1 and test2 databases

    • Method 1

      ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --csv --table 'test[1-2].*' -f /outputs
      
    • Method 2

      ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --csv --table 'test1.*,test2.*' -f /outputs
      
    • Method 3

      ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --csv --table 'test1.*' --table 'test2.*' -f /outputs
      

Example 3: Multi-database export + single-database import (select one database for import)

After you export multiple databases, you can import them to the same database by using the -f parameter to specify the input directory.

Before you proceed, run the following command to export the test1, test2, and test3 databases:

./obdumper -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --csv --table 'test[1-3].*' -f /outputs
  • Import the test2 database exported from /outputs to the test2 database. This scenario executes the multi-database import logic. The test2.* parameter specifies the database, and the -D test parameter is ignored.

    ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** -D test --csv --table 'test2.*' -f /outputs
    
  • Import the test1 subdirectory as a single database to the target database test. This scenario executes the single-database import logic.

    ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** -D test --csv --table '*' -f /outputs/data/test1
    
  • Point to the root directory and use --table '*' to import the test1, test2, and test3 databases as a single database to the test database. This scenario executes the original single-database import logic.

    ./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** -D test --csv --table '*' -f /outputs
    

    Note

    When you import multiple databases to a single database, the original single-database import logic is executed. The input directory is specified by using the -f parameter.

Example 4: Tenant-level import (import each database as is)

Import all objects of all databases, including definitions and data.

./obloader -h xxx.xxx.xxx.xxx -P 2883 -u test@mysql#cluster_a -p ****** --table '*.*' --ddl --csv --all -f /outputs
