OceanBase Loader and Dumper

V4.1.0 Enterprise Edition


Last Updated: 2023-07-03 05:50:18
What is on this page
Import DDL definition files
Import CSV data files
Import SQL data files
Import POS data files
Import CUT data files
Import database objects and data to a public cloud
Import database objects and data to a private cloud
Use security key files
Set session-level variables


Scenarios and examples

This topic describes the common business scenarios in which OBLOADER is used and provides the corresponding examples.

The following table describes the database information that is used in the examples.

Database information item                                              Example value
Cluster name                                                           cluster_a
IP address of the OceanBase Database Proxy (ODP) host                  10.0.0.0
ODP port number                                                        2883
Tenant name of the cluster                                             mysql
Username of the root/proxyro user under the sys tenant                 **u***
Password of the root/proxyro user under the sys tenant                 ******
User account (with read/write privileges) under the business tenant    test
Password of the user under the business tenant                         ******
Schema name                                                            USERA

Import DDL definition files

Scenario description: Import all supported database object definitions in the /output directory to the USERA schema. For OceanBase Database versions earlier than V4.0.0.0, the password of the sys tenant is required.

Sample code:

[admin@localhost]> ./obloader -h 10.0.0.0 -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --ddl --all -f /output

Note
The --sys-user option specifies a user with the required privileges under the sys tenant. If the --sys-user option is not specified during import, --sys-user root is used by default.

Import CSV data files

Scenario description: Import all supported CSV data files in the /output directory to the USERA schema. For OceanBase Database versions earlier than V4.0.0.0, the password of the sys tenant is required. CSV data files (.csv files) store data as plain text; you can open them with a text editor or Excel. For more information about the CSV data file specifications, see RFC 4180.

Sample code:

[admin@localhost]> ./obloader -h 10.0.0.0 -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --csv --table '*' -f /output
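As an illustration (the table name and columns here are assumptions, not part of the original example), a minimal RFC 4180-style CSV file that OBLOADER could import might look like this:

```shell
# Hypothetical CSV data file for an assumed table T1; fields that contain the
# column separator are enclosed in double quotes, per RFC 4180.
cat > /tmp/t1_example.csv <<'EOF'
id,name,created_at
1,Alice,2023-01-01 00:00:00
2,"Bob, Jr.",2023-01-02 00:00:00
EOF
# The header row lists the column names.
head -n 1 /tmp/t1_example.csv
```

Note the quoted second field in the last record: without the quotes, the embedded comma would be parsed as a column separator.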

Import SQL data files

Scenario description: Import all supported SQL data files in the /output directory to the USERA schema. For OceanBase Database versions earlier than V4.0.0.0, the password of the sys tenant is required. SQL data files (.sql files) store INSERT statements; you can open them with a text editor or SQL editor.

Sample code:

[admin@localhost]> ./obloader -h 10.0.0.0 -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --sql --table '*' -f /output
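For illustration (the table name and columns are assumptions), a .sql data file of the kind described above contains one INSERT statement per row:

```shell
# Hypothetical .sql data file for an assumed table T1: plain INSERT statements.
cat > /tmp/t1_example.sql <<'EOF'
INSERT INTO T1 (id, name) VALUES (1, 'Alice');
INSERT INTO T1 (id, name) VALUES (2, 'Bob');
EOF
# Count the INSERT statements in the file.
grep -c '^INSERT' /tmp/t1_example.sql
# → 2
```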

Import POS data files

Scenario description: Import all supported POS data files in the /output directory to the USERA schema. For OceanBase Database versions earlier than V4.0.0.0, the password of the sys tenant is required. POS data files (.dat files by default) organize data by byte offset, with each field occupying a fixed length. To import a POS data file, you must specify the path of the control file, define the fixed length of each field in the control file, and use | as the column separator. You can open POS data files with a text editor.

Sample code:

[admin@localhost]> ./obloader -h 10.0.0.0 -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql \
-D USERA --table '*' -f /output --pos --column-splitter '|' --ctl-path /output
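To illustrate the fixed-length layout (the field widths and table are assumptions for illustration, not OBLOADER's required format), a POS data file might allocate a fixed byte range to each field:

```shell
# Hypothetical POS data file: bytes 1-4 hold the id, bytes 5-14 hold the name,
# with each field padded to its fixed width.
cat > /tmp/t1_pos.dat <<'EOF'
0001Alice     
0002Bob       
EOF
# Extract the fixed-width id field from every record by byte position.
cut -c 1-4 /tmp/t1_pos.dat
```

Because fields are located by byte position rather than by separator, the control file's length definitions must match the file layout exactly.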

Note
The table name defined in the database must be in the same letter case as its corresponding control file name. Otherwise, OBLOADER fails to recognize the control file. For more information about control file definition rules, see Preprocessing functions.

Import CUT data files

Scenario description: Import all supported CUT data files in the /output directory to the USERA schema. For OceanBase Database versions earlier than V4.0.0.0, the password of the sys tenant is required. CUT data files (.dat files) separate values with a character or character string. To import a CUT data file, you must specify the path of the control file and use |@| as the column separator string. You can open CUT data files with a text editor.

Sample code:

[admin@localhost]> ./obloader -h 10.0.0.0 -P 2883 -u test -p ****** --sys-user **u*** --sys-password ****** -c cluster_a -t mysql -D USERA --table '*' -f /output --cut --column-splitter '|@|' --ctl-path /output
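For illustration (the table and values are assumptions), a CUT data file with the separator string |@| looks like this, and each record splits into the same number of columns:

```shell
# Hypothetical CUT data file using the string '|@|' as the column separator.
cat > /tmp/t1_cut.dat <<'EOF'
1|@|Alice|@|2023-01-01
2|@|Bob|@|2023-01-02
EOF
# Split each record on the literal separator and count the resulting columns
# ('|' is bracketed so awk treats it literally, not as regex alternation).
awk -F '[|]@[|]' '{print NF}' /tmp/t1_cut.dat
```

A multi-character separator such as |@| is less likely than a single comma or pipe to collide with characters inside the data itself.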

Note
The table name defined in the database must be in the same letter case as its corresponding control file name. Otherwise, OBLOADER fails to recognize the control file. For more information about control file definition rules, see Preprocessing functions.

Import database objects and data to a public cloud

Scenario description: Import all supported database objects and data in the /output directory to the USERA schema on a public cloud when the password of the sys tenant cannot be provided.

Sample code:

[admin@localhost]> ./obloader -h 10.0.0.0 -P 2883 -u test -p ****** -D USERA --ddl --csv --public-cloud --all -f /output

Import database objects and data to a private cloud

Scenario description: Import all supported database objects and data in the /output directory to the USERA schema on a private cloud when the password of the sys tenant cannot be provided.

Sample code:

[admin@localhost]> ./obloader -h 10.0.0.0 -P 2883 -u test -p ****** -c cluster_a -t mysql -D USERA --ddl --csv --no-sys --all -f /output

Use security key files

OBLOADER & OBDUMPER V3.1.0 and later support using a key file instead of specifying the password of the sys tenant on the command line. Perform the following steps:

  1. Use OpenSSL to generate public and private keys and encrypt sys-password.

    # 1. Use genrsa to generate the private key.
    openssl genrsa -out private.pem 1024
    
    # 2. Use the private key to generate the public key.
    openssl rsa -in private.pem -pubout -out public.pem
    
    # 3. Convert the private key to the PKCS#8 format and specify it in the properties file.
    openssl pkcs8 -topk8 -in private.pem -out private_pkcs8.pem -nocrypt
    
    # 4. Use the public key to encrypt the sys-password and write the ciphertext to a file.
    openssl rsautl -encrypt -pubin -inkey public.pem -in pwd.txt -out cipher.txt
    
  2. Modify the decrypt.properties file in the conf directory.
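The key material can be checked end to end before wiring it into the properties file. This sketch (file names and the sample password are placeholders chosen here, not OBLOADER conventions) encrypts with the public key and decrypts with the private key to confirm the ciphertext is recoverable:

```shell
# Sketch only: verify the encrypt/decrypt round trip with placeholder files.
printf '%s' 'my-sys-password' > /tmp/ob_pwd.txt
openssl genrsa -out /tmp/ob_private.pem 1024 2>/dev/null
openssl rsa -in /tmp/ob_private.pem -pubout -out /tmp/ob_public.pem 2>/dev/null
openssl rsautl -encrypt -pubin -inkey /tmp/ob_public.pem -in /tmp/ob_pwd.txt -out /tmp/ob_cipher.txt
# Decrypting with the private key recovers the original password.
openssl rsautl -decrypt -inkey /tmp/ob_private.pem -in /tmp/ob_cipher.txt
```

On OpenSSL 3.x, rsautl prints a deprecation warning to stderr but still works; pkeyutl is the newer equivalent.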

Set session-level variables

The following variables control session timeouts and JDBC connection options. Each timeout value uses the unit stated in its comment (minutes or hours).

# This variable is used to initialize the session.
# The default value is 5 minutes.
ob.query.timeout.for.init.session=5

# This variable is used to initialize the session.
# The default value is 5 minutes.
ob.trx.timeout.for.init.session=5

# This variable is used to query metadata, such as
#   to query the database;
#   to query the row key, primary key, and macro range;
#   to query the primary key;
#   to query the unique key;
#   to query the load status;
# The default value is 5 minutes.
ob.query.timeout.for.query.metadata=5

# This variable is used to dump records for CSV, CUT, and SQL.
# The default value is 24 hours.
ob.query.timeout.for.dump.record=24

# This variable is used to dump records for query-sql.
# The default value is 5 hours.
ob.query.timeout.for.dump.custom=5

# This variable is used to execute DDL statements, such as statements for loading the schema and truncating the table.
# The default value is 1 minute.
ob.query.timeout.for.exec.ddl=1

# This variable is used to execute DML statements, such as statements for deleting data from tables.
# The default value is 1 hour.
ob.query.timeout.for.exec.dml=1

# This variable is used to dump records for CSV, CUT, and SQL.
# The default value is 24 hours.
ob.trx.timeout.for.dump.record=24

# This variable is used to dump records for query-sql.
# The default value is 5 hours.
ob.trx.timeout.for.dump.custom=5

# This variable is used to dump records for CSV, CUT, and SQL.
# The default value is 24 hours.
ob.net.read.timeout.for.dump.record=24

# This variable is used to dump records for query-sql.
# The default value is 5 hours.
ob.net.read.timeout.for.dump.custom=5

# This variable is used to dump records for CSV, CUT, and SQL.
# The default value is 24 hours.
ob.net.write.timeout.for.dump.record=24

# This variable is used to dump records for query-sql.
# The default value is 5 hours.
ob.net.write.timeout.for.dump.custom=5

# This variable is used to set the session variable ob_proxy_route_policy.
# The default value is follower_first.
ob.proxy.route.policy=follower_first

# This variable is used to set the JDBC URL option useSSL.
# The default value is false.
jdbc.url.use.ssl=false

# This variable is used to set the JDBC URL option useUnicode.
# The default value is true.
jdbc.url.use.unicode=true

# This variable is used to set the JDBC URL option socketTimeout.
# The default value is 30 minutes.
jdbc.url.socket.timeout=30

# This variable is used to set the JDBC URL option connectTimeout.
# The default value is 3 minutes.
jdbc.url.connect.timeout=3

# This variable is used to set the JDBC URL option characterEncoding.
# The default value is utf8.
jdbc.url.character.encoding=utf8

# This variable is used to set the JDBC URL option useCompression.
# The default value is true.
jdbc.url.use.compression=true

# This variable is used to set the JDBC URL option cachePrepStmts.
# The default value is true.
jdbc.url.cache.prep.stmts=true

# This variable is used to set the JDBC URL option noDatetimeStringSync.
# The default value is true.
jdbc.url.no.datetime.string.sync=true

# This variable is used to set the JDBC URL option useServerPrepStmts.
# The default value is true.
jdbc.url.use.server.prep.stmts=true

# This variable is used to set the JDBC URL option allowMultiQueries.
# The default value is true.
jdbc.url.allow.multi.queries=true

# This variable is used to set the JDBC URL option rewriteBatchedStatements.
# The default value is true.
jdbc.url.rewrite.batched.statements=true

# This variable is used to set the JDBC URL option useLocalSessionState.
# The default value is true.
jdbc.url.use.local.session.state=true

# This variable is used to set the JDBC URL option zeroDateTimeBehavior.
# The default value is convertToNull.
jdbc.url.zero.datetime.behavior=convertToNull

# This variable is used to set the JDBC URL option verifyServerCertificate.
# The default value is false.
jdbc.url.verify.server.certificate=false

# This variable is used to set the JDBC URL option usePipelineAuth.
# The default value is false.
jdbc.url.use.pipeline.auth=false

# This variable is used to set the JDBC URL option socksProxyHost.
# The default value is null.
jdbc.url.socks.proxy.host=null

# This variable is used to set the JDBC URL option socksProxyPort.
# The default value is null.
jdbc.url.socks.proxy.port=null
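For illustration (this assembled URL is an assumption; consult your JDBC driver's documentation for the exact format), the jdbc.url.* variables above correspond to options appended to a MySQL-protocol JDBC URL, using the host, port, and schema from the table at the top of this topic:

```shell
# Hypothetical JDBC URL built from the defaults above. Note that socketTimeout
# and connectTimeout are expressed in milliseconds inside the URL, while the
# variables above use minutes (30 min = 1800000 ms, 3 min = 180000 ms).
url='jdbc:mysql://10.0.0.0:2883/USERA?useSSL=false&useUnicode=true&characterEncoding=utf8&socketTimeout=1800000&connectTimeout=180000'
printf '%s\n' "$url"
```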
