
OceanBase Loader and Dumper

V4.2.8

Quick start

Last Updated: 2026-04-13 05:55:21
What is on this page
Step 1: Download the software
Step 2: Configure the running environment
Step 3: Prepare data
Step 4: Create a database
Step 5: View configuration files
Connection configuration file
Log configuration file
Step 6: Export data
Export DDL definition files
Export CSV data files
Export SQL data files
Export CUT data files
Export data files to Amazon S3
Export data to Alibaba Cloud OSS
Customize the name of an exported file
Use a control file to export data
Export data from specified table columns
Export the result set of a custom query
Export database object definitions and table data from ApsaraDB for OceanBase
Export database object definitions and table data from OceanBase Database
Step 7: Congratulations


This topic describes how to get started quickly with OBLOADER & OBDUMPER.

Step 1: Download the software

Note

Since V4.2.1, OBLOADER & OBDUMPER are no longer split into a community edition and an enterprise edition. You can download the software package from OceanBase Download Center.

Click here to download the package of the latest version, and then decompress it. The version number in the file name may differ from the example below:

$ unzip ob-loader-dumper-4.2.1-RELEASE.zip
$ cd ob-loader-dumper-4.2.1-RELEASE

Step 2: Configure the running environment

Note

You must install Java 8 and configure the JAVA_HOME environment variable in the local environment. We recommend that you install JDK 1.8.0_3xx or later. For more information about environment configuration, see Prepare the environment.

This step modifies the JVM parameters.

A JVM heap that is too small can degrade the performance and stability of import and export jobs; for example, frequent full garbage collections (GCs) or out-of-memory crashes may occur. We recommend that you increase the JVM heap size, which is -Xms4G -Xmx4G by default, to about 60% of the available memory on the server. If you are experienced in Java performance tuning, you can adjust the JVM parameters in the JAVA_OPTS option as needed.

  1. Edit the configuration file that contains the JAVA_OPTS option.

    • In Linux, edit the obloader and obdumper scripts in the {ob-loader-dumper}/bin/ directory.

    • In Windows, edit the obloader.bat and obdumper.bat scripts in the {ob-loader-dumper}/bin/windows/ directory.

  2. Modify JVM parameters.

    JAVA_OPTS="$JAVA_OPTS -server -Xms4G -Xmx4G -XX:MetaspaceSize=128M -XX:MaxMetaspaceSize=128M -Xss352K"
    JAVA_OPTS="$JAVA_OPTS -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -Xnoclassgc -XX:+DisableExplicitGC"
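
As a sketch of the sizing rule above (the 60% figure and the 4 GB default follow this topic; the variable names and the 16 GB example value are illustrative), you can derive the heap flags from the available memory:

```shell
# Illustrative sizing helper: set the JVM heap to ~60% of available memory.
# The 16 GB value below is an example; on Linux you could obtain the real
# figure with something like `free -g | awk '/^Mem:/ {print $7}'`.
avail_gb=16
heap_gb=$(( avail_gb * 60 / 100 ))      # integer division: 16 -> 9
if [ "$heap_gb" -lt 4 ]; then           # never go below the 4 GB default
  heap_gb=4
fi
echo "-Xms${heap_gb}G -Xmx${heap_gb}G"  # splice these values into JAVA_OPTS
```

The computed values replace the `-Xms4G -Xmx4G` pair in the `JAVA_OPTS` line of the obloader and obdumper scripts.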
    

Step 3: Prepare data

Notice

When you try the export feature of OBDUMPER, you do not need to prepare any data files. You can go directly to Step 4.

When you try the import feature of OBLOADER, you can use existing data files or generate test data files with the TPC-H tool. The content of the files to be imported must meet the format specifications. Identify the data format in the file based on Is your data ready.

Step 4: Create a database

Notice

When you try the export feature of OBDUMPER, you must also create tables and insert data into them after you create the database.

  1. Deploy an OceanBase cluster by using OceanBase Cloud Platform (OCP) or OceanBase Deployer (OBD).

  2. Create a test database.

  3. Create a test table and insert data into the table. This operation is optional when you import data.
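
The steps above can be sketched in SQL. The database, table, and rows below match the sample output later in this topic; run the statements in the business tenant:

```sql
-- Minimal setup sketch: create the test database and a sample table,
-- then insert a few rows for OBDUMPER to export.
CREATE DATABASE IF NOT EXISTS USERA;
USE USERA;
CREATE TABLE IF NOT EXISTS `table` (
  `id`   int(11)      COMMENT 'table id',
  `name` varchar(120) COMMENT 'table name',
  `type` varchar(128) NOT NULL DEFAULT 'COLLABORATION' COMMENT 'organization type'
);
INSERT INTO `table` (`id`, `name`, `type`) VALUES
  (1001, 'xiaoning', 'COLLABORATION'),
  (1002, 'xiaohong', 'COLLABORATION');
```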

Step 5: View configuration files

The configuration files of OBLOADER & OBDUMPER are the connection configuration file session.config.json and the log configuration file log4j2.xml.

Connection configuration file

The connection configuration file {ob-loader-dumper}/conf/session.config.json configures database connection parameters. OBLOADER & OBDUMPER build a JDBC URL from the JDBC parameters in this file to connect to the target database, and then sequentially execute the initialization SQL statements in the established connection. You can modify the JDBC parameters and initialization SQL statements in this file. The default connection configuration applies to most scenarios; in special cases, however, you need to modify parameters manually to adapt to different OBServer versions and extract, transform, and load (ETL) scenarios. For more information about the connection configuration file, see Connection settings.

Log configuration file

In the log configuration file {ob-loader-dumper}/conf/log4j2.xml, you can view the log output path and log format, and adjust the log level during self-service troubleshooting. For more information, see How do I customize log file names for an export job?

Step 6: Export data

./obdumper -h 'IP address' -P 'port' -u 'user' -t 'tenant' -c 'cluster' -p 'password' -D 'database name' --table 'table name' --csv -f 'file path' --sys-password 'password of the sys tenant' --skip-check-dir

Note

This example only exports data. For more information, see Command-line options of OBDUMPER.

The following table describes the database information that is used in the examples.

Database information item                                             Example
Cluster name                                                          cluster_a
IP address of the OceanBase Database Proxy (ODP) host                 xx.x.x.x
ODP port number                                                       2883
Tenant name of the cluster                                            mysql
Name of the root/proxyro user under the sys tenant                    **u***
Password of the root/proxyro user under the sys tenant                ******
User account (with read/write privileges) under the business tenant   test
Password of the user under the business tenant                        ******
Schema name                                                           USERA

Export DDL definition files

Scenario: Export all supported object definition statements in the USERA schema to the /output directory. For OceanBase Database of a version earlier than V4.0.0, the password of the sys tenant must be provided.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --ddl --all -f /output

Note

The --sys-user option specifies a user with the required privileges in the sys tenant. If the --sys-user option is not specified during the export, the default value --sys-user root takes effect.

Sample task result:

...
All Dump Tasks Finished:
---------------------------------------------------
|  No.#  | Type   | Name     | Count   | Status   |  
---------------------------------------------------     
|  1     | TABLE  | table    | 1->1    | SUCCESS  |                
---------------------------------------------------

Total Count: 4          End Time: 2023-04-28 15:32:49
...

Sample exported content:

View the table-schema.sql file in the {ob-loader-dumper}/output/data/chz/TABLE directory.

[xxx@xxx /ob-loader-dumper-4.2.0-RELEASE/bin]
$cat /home/admin/obloaderobdumper/output/data/chz/TABLE/table-schema.sql
create table if not exists `table` (
        `id` int(11) comment 'table id',
        `name` varchar(120) comment 'table name',
        `type` varchar(128) not null default 'COLLABORATION' comment 'organization type, enum values: COLLABORATION, PRIVACY'
)
default charset=gbk
default collate=gbk_chinese_ci;

Export CSV data files

Scenario: Export all table data in the USERA schema to the /output directory in the CSV format. For OceanBase Database of a version earlier than V4.0.0, the password of the sys tenant must be provided. For more information about CSV data file (.csv file) specifications, see RFC 4180. CSV data files store data in the form of plain text. You can open CSV data files by using a text editor or Excel.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --csv --table '*' -f /output

Sample task result:

...
All Dump Tasks Finished:
----------------------------------------------------
|  No.#  | Type   | Name       | Count  | Status   |
----------------------------------------------------      
|  1     | TABLE  | table      | 4      | SUCCESS  |                
----------------------------------------------------

Total Count: 4          End Time: 2023-04-28 15:32:49
...

Sample exported content:

View the table.1.0.csv file in the {ob-loader-dumper}/output/data/chz/TABLE directory.

[xxx@xxx /ob-loader-dumper-4.2.0-RELEASE/bin]
$cat table.1.0.csv
'id','name','type'
1001,'xiaoning','COLLABORATION'
1002,'xiaohong','COLLABORATION'
1001,'xiaoning','COLLABORATION'
1002,'xiaohong','COLLABORATION'

Export SQL data files

Scenario: Export all table data in the USERA schema to the /output directory in the SQL format. For OceanBase Database of a version earlier than V4.0.0, the password of the sys tenant must be provided. SQL data files (.sql files) store INSERT SQL statements. You can open SQL data files by using a text editor or SQL editor.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --sql --table '*' -f /output

Sample task result:

...
All Dump Tasks Finished:
----------------------------------------------------
|  No.#  | Type   | Name       | Count  | Status   |
----------------------------------------------------      
|  1     | TABLE  | table      | 4      | SUCCESS  |                
----------------------------------------------------

Total Count: 4          End Time: 2023-04-28 15:32:49
...

Sample exported content:

View the table.1.0.sql file in the {ob-loader-dumper}/output/data/chz/TABLE directory.

[xxx@xxx /ob-loader-dumper-4.2.0-RELEASE/bin]
$cat table.1.0.sql
INSERT INTO `chz`.`table` (`id`,`name`,`type`) VALUES (1001,'xiaoning','COLLABORATION');
INSERT INTO `chz`.`table` (`id`,`name`,`type`) VALUES (1002,'xiaohong','COLLABORATION');
INSERT INTO `chz`.`table` (`id`,`name`,`type`) VALUES (1001,'xiaoning','COLLABORATION');
INSERT INTO `chz`.`table` (`id`,`name`,`type`) VALUES (1002,'xiaohong','COLLABORATION');

Export CUT data files

Scenario: Export all table data in the USERA schema to the /output directory in the CUT format. For OceanBase Database of a version earlier than V4.0.0, the password of the sys tenant must be provided. Specify |@| as the column separator string for the exported data. CUT data files (.dat files) use a character or character string to separate values. You can open CUT data files by using a text editor.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --table '*' -f /output --cut --column-splitter '|@|' --trail-delimiter

Sample task result:

...
All Dump Tasks Finished:
----------------------------------------------------
|  No.#  | Type   | Name       | Count  | Status   |
----------------------------------------------------       
|  1     | TABLE  | table      | 4      | SUCCESS  |                
----------------------------------------------------

Total Count: 4          End Time: 2023-04-28 15:32:49
...

Sample exported content:

View the table.1.0.dat file in the {ob-loader-dumper}/output/data/chz/TABLE directory.

[xxx@xxx /ob-loader-dumper-4.2.0-RELEASE/bin]
$cat table.1.0.dat
1001|xiaoning|COLLABORATION|
1002|xiaohong|COLLABORATION|
1001|xiaoning|COLLABORATION|
1002|xiaohong|COLLABORATION|

Export data files to Amazon S3

Scenario: Export all table data in the USERA schema to an Amazon S3 bucket in the CSV format.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --csv --table '*' --skip-check-dir --upload-behavior 'FAST' -f /output --storage-uri 's3://obloaderdumper/obdumper?region=cn-north-1&access-key=******&secret-key=******'

The --storage-uri 's3://obloaderdumper/obdumper?region=cn-north-1&access-key=******&secret-key=******' option specifies the storage URI, which comprises the following components.

Component                                               Description
s3                                                      The S3 storage scheme.
obloaderdumper                                          The name of the S3 bucket.
/obdumper                                               The data storage path in the S3 bucket.
region=cn-north-1&access-key=******&secret-key=******   The parameters required for the request:
  • region=cn-north-1: the physical location of the S3 bucket.
  • access-key=******: the AccessKey ID for accessing the S3 bucket.
  • secret-key=******: the AccessKey secret for accessing the S3 bucket.
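
The breakdown above can be checked with plain shell parameter expansion. The URI below uses placeholder credentials (AK/SK) rather than real keys:

```shell
# Split a --storage-uri value into scheme, bucket, path, and query parameters.
uri='s3://obloaderdumper/obdumper?region=cn-north-1&access-key=AK&secret-key=SK'
scheme=${uri%%://*}              # s3
rest=${uri#*://}                 # obloaderdumper/obdumper?region=...
bucket=${rest%%/*}               # obloaderdumper
path_and_query=${rest#*/}        # obdumper?region=...
path=/${path_and_query%%\?*}     # /obdumper
query=${path_and_query#*\?}      # region=cn-north-1&access-key=AK&secret-key=SK
echo "$scheme | $bucket | $path | $query"
```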

Sample task result:

...
All Dump Tasks Finished:

----------------------------------------------------
|  No.#  | Type   | Name       | Count  | Status   |
----------------------------------------------------      
|  1     | TABLE  | table      | 3      | SUCCESS  |                
----------------------------------------------------

Total Count: 3          End Time: 2023-05-10 18:46:26
...

Note

The --storage-uri option must be used in combination with the -f option.
OBDUMPER first exports object definitions and table data in OceanBase Database to the path specified by -f, and then uploads the exported content to the S3 bucket specified by --storage-uri. After the upload is complete, the content in the path specified by -f is automatically deleted.

Export data to Alibaba Cloud OSS

Scenario: Export all table data in the USERA schema to an Alibaba Cloud OSS bucket in the CSV format.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --csv --table '*' --skip-check-dir --upload-behavior 'FAST' -f /output --storage-uri 'oss://antsys-oceanbasebackup/backup_obloader_obdumper/obdumper?endpoint=https://cn-hangzhou-alipay-b.oss-cdn.aliyun-inc.com&access-key=******&secret-key=******'

The --storage-uri 'oss://antsys-oceanbasebackup/backup_obloader_obdumper/obdumper?endpoint=https://cn-hangzhou-alipay-b.oss-cdn.aliyun-inc.com&access-key=******&secret-key=******' option specifies the storage URI, which comprises the following components.

Component                                                                                       Description
oss                                                                                             The OSS storage scheme.
antsys-oceanbasebackup                                                                          The name of the OSS bucket.
/backup_obloader_obdumper/obdumper                                                              The data storage path in the OSS bucket.
endpoint=https://cn-hangzhou-alipay-b.oss-cdn.aliyun-inc.com&access-key=******&secret-key=******  The parameters required for the request:
  • endpoint=https://cn-hangzhou-alipay-b.oss-cdn.aliyun-inc.com: the endpoint of the region where the OSS host resides.
  • access-key=******: the AccessKey ID for accessing the OSS bucket.
  • secret-key=******: the AccessKey secret for accessing the OSS bucket.

Sample task result:

...
All Dump Tasks Finished:

----------------------------------------------------
|  No.#  | Type   | Name       | Count  | Status   |
----------------------------------------------------      
|  1     | TABLE  | table      | 3      | SUCCESS  |                
----------------------------------------------------

Total Count: 3          End Time: 2023-05-10 18:40:48
...

Note

The --storage-uri option must be used in combination with the -f option.
OBDUMPER first exports object definitions and table data in OceanBase Database to the path specified by -f, and then uploads the exported content to the OSS bucket specified by --storage-uri. After the upload is complete, the content in the path specified by -f is automatically deleted.

Customize the name of an exported file

Scenario: Export all table data in the USERA schema to the /output directory in the CSV format. For OceanBase Database of a version earlier than V4.0.0, the password of the sys tenant must be provided. Specify the name of the exported file as filetest.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --csv --table 'table' --file-name 'filetest.txt' -f /output

Sample task result:

...
All Dump Tasks Finished:
----------------------------------------------------
|  No.#  | Type   | Name       | Count  | Status   |
----------------------------------------------------      
|  1     | TABLE  | table      | 4      | SUCCESS  |                
----------------------------------------------------

Total Count: 4          End Time: 2023-04-28 15:32:49
...

Sample exported content:

View the filetest.txt file in the {ob-loader-dumper}/output/data/chz/TABLE directory.

[xxx@xxx /ob-loader-dumper-4.2.0-RELEASE/bin]
$cat filetest.txt
1001,'xiaoning','COLLABORATION'
1002,'xiaohong','COLLABORATION'
1001,'xiaoning','COLLABORATION'
1002,'xiaohong','COLLABORATION'

Use a control file to export data

Scenario: Export all table data in the USERA schema to the /output directory in the CSV format. For OceanBase Database of a version earlier than V4.0.0, the password of the sys tenant must be provided. Specify /output as the path of the control file for preprocessing the data to be exported.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --table 'table' -f /output --csv --ctl-path /output

Note

The control file name must match the letter case of the table name defined in the database. Otherwise, OBLOADER cannot recognize the control file. For more information about the rules for defining control files, see Preprocessing functions.

Export data from specified table columns

Scenario: Export all table data in the USERA schema to the /output directory in the CSV format. For OceanBase Database of a version earlier than V4.0.0, the password of the sys tenant must be provided. Specify the --exclude-column-names option to exclude the columns that do not need to be exported.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA -f /output --table 'table' --csv --exclude-column-names 'deptno'

Export the result set of a custom query

Scenario: Export the result set of the query statement specified in the --query-sql option to the /output directory in the CSV format. For OceanBase Database of a version earlier than V4.0.0, the password of the sys tenant must be provided.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA -f /output --csv --query-sql 'select deptno,dname from dept where deptno<3000'

Note

Make sure that the SQL query statement is syntactically correct and performs acceptably.

Export database object definitions and table data from ApsaraDB for OceanBase

Scenario: When you cannot provide the password of the sys tenant, export table data and the definitions of all defined database objects in the USERA schema of ApsaraDB for OceanBase to the /output directory.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** -D USERA --ddl --csv --public-cloud --all -f /output

Export database object definitions and table data from OceanBase Database

Scenario: When you cannot provide the password of the sys tenant, export table data and the definitions of all defined database objects in the USERA schema of OceanBase Database to the /output directory.

Sample statement:

$./obdumper -h xx.x.x.x -P 2883 -u test -p ****** -c cluster_a -t mysql -D USERA --ddl --csv --no-sys --all -f /output

Notice

The export of database object definitions may have defects if you cannot provide the password of the sys tenant in ApsaraDB for OceanBase or OceanBase Database. For example:
  • Sequence definitions cannot be exported from MySQL tenants.
  • Table group definitions cannot be exported from OceanBase Database of versions earlier than V2.2.70.
  • Index definitions cannot be exported from Oracle tenants in OceanBase Database V2.2.30 and earlier.
  • Partition information of unique indexes cannot be exported from OceanBase Database of versions earlier than V2.2.70.
  • Unique index definitions of partitioned tables cannot be exported from Oracle tenants in OceanBase Database V2.2.70 to V4.0.0.0.

Step 7: Congratulations

You have completed the quick start with OBLOADER & OBDUMPER.

For more information, see the following resources:

  • Learn about the operating principles, major features, and differences between OBLOADER & OBDUMPER and other tools in the product introduction. For more information about the tools, see OBLOADER & OBDUMPER.

  • Join the OceanBase community to discuss import and export issues, requirements, and future plans with OceanBase R&D engineers.
