OceanBase Database

SQL - V4.3.0

    Import data from SQL files to OceanBase Database

    Last Updated: 2024-04-19 08:42:49
    What is on this page
    • Configure SQL file information
    • Use the command-line interface to import data
    • Use ODC to import data
    • Import data by using OBLOADER
    • Example of importing SQL data files


    You can import data files in the SQL format into OceanBase Database by using the command-line interface or tools. OceanBase Database supports a range of import tools for SQL files, such as MySQLDumper and OBLOADER, as well as graphical user interface tools such as OceanBase Developer Center (ODC).

    This topic describes how to use the command-line interface, OBLOADER, and ODC to import the data in an SQL file into OceanBase Database.

    Configure SQL file information

    The SQL file to be imported to OceanBase Database must comply with the syntax requirements of OceanBase Database.

    For more information about the SQL syntax requirements of OceanBase Database, see SQL syntax (MySQL mode) and SQL syntax (Oracle mode).

    The SQL file used in this example is as follows:

    [xxx@xxx /home/admin/test_sql]# cat test_tbl1.sql
    DROP TABLE IF EXISTS test_data.test_tbl1;
    
    CREATE TABLE IF NOT EXISTS test_data.test_tbl1(col1 INT,col2 VARCHAR(50),PRIMARY KEY (col1));
    
    INSERT INTO test_data.test_tbl1 VALUES(1,'test1');
    INSERT INTO test_data.test_tbl1 VALUES(2,'test2');
    INSERT INTO test_data.test_tbl1 VALUES(3,'test3');
    INSERT INTO test_data.test_tbl1 VALUES(4,'test4');
    INSERT INTO test_data.test_tbl1 VALUES(5,'test5');
    
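When you need many such files, it can help to generate them from in-memory data rather than write them by hand. Below is a minimal Python sketch that produces a file like the one above. The table and column definitions mirror the example; the quoting is a deliberate simplification that only handles plain strings, not a general SQL escaper:

```python
def render_sql_file(schema: str, table: str, rows: list) -> str:
    """Render a DROP/CREATE/INSERT script like test_tbl1.sql above."""
    name = f"{schema}.{table}"
    lines = [
        f"DROP TABLE IF EXISTS {name};",
        "",
        f"CREATE TABLE IF NOT EXISTS {name}(col1 INT,col2 VARCHAR(50),PRIMARY KEY (col1));",
        "",
    ]
    for col1, col2 in rows:
        # Simplified quoting: production code must escape quotes and backslashes.
        lines.append(f"INSERT INTO {name} VALUES({col1},'{col2}');")
    return "\n".join(lines) + "\n"

# Example: render the five sample rows used throughout this topic.
script = render_sql_file("test_data", "test_tbl1",
                         [(i, f"test{i}") for i in range(1, 6)])
```

The resulting string can be written to disk and imported with any of the methods described below.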

    Use the command-line interface to import data

    1. Log on to OceanBase Database from the server where the SQL file is stored.

      [xxx@xxx /home/admin]# obclient -hxxx.xxx.xxx.1 -P2881 -uroot@mysql001 -p -A
      Enter password:
      Welcome to the OceanBase.  Commands end with ; or \g.
      Your OceanBase connection id is 3221709595
      Server version: OceanBase 4.0.0.0 (r100000302022111120-7cef93737c5cd03331b5f29130c6e80ac950d33b) (Built Nov 11 2022 20:38:33)
      
      Copyright (c) 2000, 2018, OceanBase and/or its affiliates. All rights reserved.
      
      Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
      
      obclient [(none)]>
      
    2. Create a schema named test_data.

      obclient [(none)]> CREATE DATABASE test_data;
      Query OK, 1 row affected
      
    3. Use the SOURCE command to import the data from the SQL file.

      obclient [(none)]> SOURCE /home/admin/test_sql/test_tbl1.sql
      Query OK, 0 rows affected, 1 warning
      
      Query OK, 0 rows affected
      
      Query OK, 1 row affected
      
      Query OK, 1 row affected
      
      Query OK, 1 row affected
      
      Query OK, 1 row affected
      
      Query OK, 1 row affected
      
    4. View the imported data.

      obclient [(none)]> USE test_data;
      Database changed
      obclient [test_data]> SHOW TABLES;
      +---------------------+
      | Tables_in_test_data |
      +---------------------+
      | test_tbl1           |
      +---------------------+
      1 row in set
      
      obclient [test_data]> SELECT * FROM test_tbl1;
      +------+-------+
      | col1 | col2  |
      +------+-------+
      |    1 | test1 |
      |    2 | test2 |
      |    3 | test3 |
      |    4 | test4 |
      |    5 | test5 |
      +------+-------+
      5 rows in set
      
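The SOURCE command runs interactively. For scripted imports, the same statements can be executed through any MySQL-protocol driver, because OceanBase Database in MySQL mode is wire-compatible with MySQL. The sketch below is a hedged illustration: the `pymysql` dependency and the connection parameters are assumptions, and the semicolon splitter is deliberately naive (it does not handle semicolons inside string literals):

```python
def split_statements(sql_text: str) -> list:
    """Split a SQL script on ';' (naive: ignores quoted semicolons)."""
    return [s.strip() for s in sql_text.split(";") if s.strip()]

def run_sql_file(path: str, **connect_kwargs) -> None:
    """Execute every statement in a SQL file over a MySQL-protocol connection."""
    # Lazy import so split_statements is usable without the driver installed.
    import pymysql  # assumed dependency; any MySQL-compatible driver works
    with open(path) as f:
        statements = split_statements(f.read())
    # e.g. host="xxx.xxx.xxx.1", port=2881, user="root@mysql001", password="..."
    conn = pymysql.connect(**connect_kwargs)
    try:
        with conn.cursor() as cur:
            for stmt in statements:
                cur.execute(stmt)
        conn.commit()
    finally:
        conn.close()
```

For large files, batching INSERT statements or using OBLOADER (described below) is preferable to executing them one by one.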

    Use ODC to import data

    The total size of files imported in one batch by using ODC cannot exceed 2 GB after compression. To import larger data sets, use the data import tool OBLOADER.

    The following section describes how to import data by using web ODC.

    1. Prepare the ODC environment.

      Both web ODC and client ODC are available. For more information, see Deployment Guide (Web ODC) and Install ODC (Client ODC).

    2. Connect to OceanBase Database.

      For more information about how to connect to OceanBase Database, see Create a private connection.

    3. Go to the Import Settings panel.

      After you open a database connection, click Tool in the top navigation bar, and then select Import from the drop-down list to go to the Import Settings panel.

    4. Upload the SQL file.

      The following figure shows the details.

      • Import File Format: Select SQL File.
      • Upload the file to be imported: Click the file pool to open the file explorer and select the file to import, or drag the file directly into the file pool. The format of the uploaded file must match the value you specified for Import File Format. To upload a file in the SQL format, you must first package it as an .sql file.
      • Encoding Standard: ODC supports the following character sets: ASCII, ISO-8859-1, GB2312, GBK, GB18030, Unicode (UTF-8), Unicode (UTF-16), Unicode (UTF-32), and BIG5. You can select the encoding standard from the File Encoding drop-down list.
      • Click Next: Import Settings.
    5. Configure the import settings.

      The following figure shows the details.

      • Database: Select the database to which the data object is to be imported. The name of the database connection to which the task belongs is also displayed.

      • Execution Mode: You can select Execute immediately or Timed execution.

      • Sys Tenant Account Settings: After you specify the account and password, click Test Connection next to the password field to verify that the account information is valid. By default, the account from the connection settings is filled in automatically. If the connection fails, we recommend that you re-enter the password for this import.

        Notice

        The account and password of the sys tenant are the account and password of a user in a cluster tenant. Do not enter "@sys#cluster" in the Account field.

      • After you complete the configurations, click Submit.

    6. View the import task.

      After the task is generated, the Task Center page automatically appears, where you can view the task information. For more information, see Import tasks.

    Import data by using OBLOADER

    Procedure

    1. Prepare the SQL file.

    2. Prepare the running environment for OBLOADER. For more information, see Prepare the environment and Download tools.

    3. Import data.

      [xxx@xxx /ob-loader-dumper-3.3.2-SNAPSHOT/bin]
      $./obloader -h <host IP address> -P <port number> -u <username> -p <password> --sys-user <root or proxyro user in the sys tenant> --sys-password <password of the user in the sys tenant> -c <cluster> -t <tenant> -D <schema name> [--ddl] [--csv|--sql] [--all|--table 'table name'] -f <data file or directory>
      

      Note

      When you use the host address and port number of a physical node to import data, you do not need to specify the -c option.

      For more information about the command-line options of OBLOADER, see Command-line options.
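Because the obloader invocation carries many options, scripts often assemble the command line from configuration rather than hard-code it. The following sketch builds the argument list from the template above; the option names come from that template, and the function name itself is illustrative:

```python
def build_obloader_args(host, port, user, password, tenant, schema,
                        cluster=None, sys_user=None, sys_password=None,
                        ddl=False, sql=False, file_path="."):
    """Assemble an obloader command line from the template in this topic."""
    args = ["./obloader", "-h", host, "-P", str(port), "-u", user, "-p", password]
    if sys_user:
        args += ["--sys-user", sys_user, "--sys-password", sys_password]
    if cluster:  # omit -c when connecting to a physical node directly
        args += ["-c", cluster]
    args += ["-t", tenant, "-D", schema]
    if ddl:
        args.append("--ddl")
    if sql:
        args.append("--sql")
    args += ["--all", "-f", file_path]
    return args
```

The returned list can be passed to `subprocess.run` from the OBLOADER `bin` directory; keeping the arguments as a list avoids shell-quoting problems with passwords.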

    Example of importing SQL data files

    The following table describes the database information that is used in the examples.

    Database information                                                   Example value
    Cluster name                                                           test4000
    IP address of the OceanBase Database Proxy (ODP) host                  xxx.xxx.xxx.1
    ODP port number                                                        2883
    Business tenant name                                                   mysql001
    Password of the root user in the sys tenant                            ******
    User account (with read/write privileges) under the business tenant    obloader_user01
    Password of the user under the business tenant                         ******
    Schema name                                                            test_data

    Import table schemas

    Notice

    • When you import a table schema, the SQL file contains DDL statements and the --ddl option is used. The SQL statements cannot include comments or SET switches.
    • The SQL file must be located in the data/TABLE subdirectory of the path specified in the -f option, for example, /home/admin/test_sql/data/TABLE/test_tbl1-schema.sql.
    • The name of the SQL file must be in the <table name>-schema.sql format.
    • When you import data, specify the --sys-user and --sys-password options, so that OBLOADER can obtain the metadata of the table schema.
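The layout rules above are easy to get wrong, so a pre-flight check can save a failed run. The sketch below computes the path obloader expects for a schema file and parses the table name back out of a filename; the helper names are illustrative, and the check mirrors only the conventions stated in this topic:

```python
import re

SCHEMA_NAME_RE = re.compile(r"^(?P<table>.+)-schema\.sql$")

def expected_schema_path(base_dir: str, table: str) -> str:
    """Path obloader expects for a schema file under the -f directory."""
    # Paths are built with '/' because obloader runs on Linux hosts here.
    return f"{base_dir}/data/TABLE/{table}-schema.sql"

def table_from_schema_file(filename: str):
    """Return the table name encoded in a <table>-schema.sql filename, or None."""
    m = SCHEMA_NAME_RE.match(filename.rsplit("/", 1)[-1])
    return m.group("table") if m else None
```

Running such a check over the data/TABLE directory before invoking obloader confirms every schema file will be picked up under the expected table name.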

    Scenario: Import the SQL data files in the /home/admin/test_sql/data/TABLE directory to the test_data schema of the mysql001 tenant in the test4000 cluster.

    The procedure is as follows:

    1. Create an SQL file.

      [root@xxx /home/admin]# mkdir -p /home/admin/test_sql/data/TABLE
      [root@xxx /home/admin]# cd /home/admin/test_sql/data/TABLE
      [root@xxx /home/admin/test_sql/data/TABLE]# vi test_tbl1-schema.sql
      [root@xxx /home/admin/test_sql/data/TABLE]# cat test_tbl1-schema.sql
      CREATE TABLE IF NOT EXISTS test_tbl1(col1 INT,col2 VARCHAR(50),PRIMARY KEY (col1));
      
    2. Prepare the running environment for OBLOADER. For more information, see Prepare the environment and Download tools.

    3. Import the table schema.

      [root@xxx /home/admin/ob-loader-dumper-3.3.2-SNAPSHOT/bin]
      $./obloader -h xxx.xxx.xxx.1 -P 2883 -u obloader_user01 -p ****** --sys-user root --sys-password ****** -c test4000 -t mysql001 -D test_data --ddl --all -f /home/admin/test_sql
      2022-12-01 07:11:32 [INFO] Parsed args:
      [--host] xxx.xxx.xxx.1
      [--port] 2883
      [--user] obloader_user01
      [--tenant] mysql001
      [--cluster] test4000
      [--password] ******
      [--database] test_data
      [--sys-user] root
      [--sys-password] ******
      [--ddl] true
      [--file-path] /home/admin/test_sql
      [--all] true
      
      2022-12-01 07:11:32 [INFO] Load jdbc driver class: "com.oceanbase.jdbc.Driver" finished
      2022-12-01 07:11:33 [INFO] The manifest file: "/home/admin/test_sql/data/MANIFEST.bin" has been saved
      2022-12-01 07:11:33 [INFO] Init writer thread pool finished
      2022-12-01 07:11:33 [WARN] The object type : "SEQUENCE" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "TABLE_GROUP" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [INFO] Start 128 schema file loader threads successed
      2022-12-01 07:11:33 [INFO] No.1 sql of the file: "/home/admin/test_sql/data/TABLE/test_tbl1-schema.sql" exec  success . Elapsed: 36.94 ms
      2022-12-01 07:11:33 [INFO] Load file: "test_tbl1-schema.sql" succeeded
      2022-12-01 07:11:33 [WARN] The object type : "VIEW" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "FUNCTION" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "PROCEDURE" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "TRIGGER" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "PACKAGE" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "TYPE" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "PACKAGE_BODY" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "TYPE_BODY" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "SYNONYM" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "PUBLIC_SYNONYM" doesn't exist in the -schema.sql files
      2022-12-01 07:11:33 [WARN] The object type : "FILE" doesn't exist in the -schema.sql files
      2022-12-01 07:11:34 [INFO] Close connection count: 1 of the DataSource. Key: xxx.xxx.xxx.1_11532_332378361_test_data
      2022-12-01 07:11:34 [INFO] Shutdown task context finished
      2022-12-01 07:11:34 [INFO]
      Finished Tasks: 1       Running Tasks: 0        Progress: 100.00%
      2022-12-01 07:11:34 [INFO]
      
      All Load Tasks Finished:
      
      ----------------------------------------------------------------------------------------------------------------------------
              No.#        |        Type        |             Name             |            Count             |       Status
      ----------------------------------------------------------------------------------------------------------------------------
              1          |       TABLE        |          test_tbl1           |            1 -> 1            |      SUCCESS
      ----------------------------------------------------------------------------------------------------------------------------
      
      Total Count: 1          End Time: 2022-12-01 07:11:34
      
      
      2022-12-01 07:11:34 [INFO] Load schema finished. Total Elapsed: 1.061 s
      2022-12-01 07:11:34 [INFO] System exit 0
      

    Import table data

    Notice

    • When you import table data, the content of the SQL file is DML statements, and the --sql option is used. The SQL file must contain only INSERT statements, each on a single line without line breaks.
    • The SQL file must be located in the data subdirectory of the path specified in the -f option, for example, /home/admin/test_sql/data/test_tbl1.sql.
    • The name of the SQL file must be in the <table name>.sql format.
    • The table that corresponds to the SQL file must exist in the database or schema.
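The constraints above can likewise be verified before running obloader. The sketch below computes the expected data-file path and checks that every non-blank line is a single complete INSERT statement, per the notice; the function names are illustrative:

```python
def expected_data_path(base_dir: str, table: str) -> str:
    """Path obloader expects for a data file under the -f directory."""
    # Paths are built with '/' because obloader runs on Linux hosts here.
    return f"{base_dir}/data/{table}.sql"

def check_insert_lines(sql_text: str) -> bool:
    """Every non-blank line must be one complete INSERT statement."""
    for line in sql_text.splitlines():
        line = line.strip()
        if line and not (line.upper().startswith("INSERT") and line.endswith(";")):
            return False
    return True
```

A file that fails this check (for example, one containing DDL or a multi-line INSERT) should be fixed before the --sql import is attempted.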

    Scenario: Import the SQL data files in the /home/admin/test_sql/data directory to the test_data schema of the mysql001 tenant in the test4000 cluster.

    The procedure is as follows:

    1. Create an SQL file.

      [root@xxx /home/admin]# mkdir -p /home/admin/test_sql/data
      [root@xxx /home/admin]# cd /home/admin/test_sql/data
      [root@xxx /home/admin/test_sql/data]# vi test_tbl1.sql
      [root@xxx /home/admin/test_sql/data]# cat test_tbl1.sql
      INSERT INTO test_tbl1 VALUES(1,'test1');
      INSERT INTO test_tbl1 VALUES(2,'test2');
      INSERT INTO test_tbl1 VALUES(3,'test3');
      INSERT INTO test_tbl1 VALUES(4,'test4');
      INSERT INTO test_tbl1 VALUES(5,'test5');
      
    2. Prepare the running environment for OBLOADER. For more information, see Prepare the environment and Download tools.

    3. Import table data.

      [root@xxx /home/admin/ob-loader-dumper-3.3.2-SNAPSHOT/bin]
      $./obloader -h xxx.xxx.xxx.1 -P 2883 -u obloader_user01 -p ****** --sys-user root --sys-password ****** -c test4000 -t mysql001 -D test_data --sql --all -f /home/admin/test_sql
      2022-12-01 07:17:39 [INFO] Parsed args:
      [--host] xxx.xxx.xxx.1
      [--port] 2883
      [--user] obloader_user01
      [--tenant] mysql001
      [--cluster] test4000
      [--password] ******
      [--database] test_data
      [--sys-user] root
      [--sys-password] ******
      [--sql] true
      [--file-path] /home/admin/test_sql
      [--all] true
      
      2022-12-01 07:17:39 [INFO] Load jdbc driver class: "com.oceanbase.jdbc.Driver" finished
      2022-12-01 07:17:40 [INFO] The manifest file: "/home/admin/test_sql/data/MANIFEST.bin" has been saved
      2022-12-01 07:17:40 [INFO] Query the column metadata for the table: "test_tbl1" finished
      2022-12-01 07:17:40 [WARN] File: "/home/admin/test_sql/data/TABLE/test_tbl1-schema.sql" is unmatched on the suffix[.sql], ignore it
      2022-12-01 07:17:40 [WARN] File: "/home/admin/test_sql/data/MANIFEST.bin" is unmatched on the suffix[.sql], ignore it
      2022-12-01 07:17:40 [INFO] Binding table: "test_tbl1" to the file: "/home/admin/test_sql/data/test_tbl1.sql" finished
      2022-12-01 07:17:40 [INFO] File: "/home/admin/test_sql/data/test_tbl1.sql" has not been splitted. 205 < 67108864
      2022-12-01 07:17:40 [INFO] Splitted 1 sql subfiles by 64.0 MB. Elapsed: 15.64 ms
      2022-12-01 07:17:40 [INFO] Generate 1 subfiles finished
      2022-12-01 07:17:40 [INFO] Ignore to clean any tables as --truncate-table or --delete-from-table is not specified
      2022-12-01 07:17:40 [ERROR] Invalid table entry: TableEntryKey [ cluster: test4000, tenant: mysql001, database: test_data, table: test_tbl1 ], TableId: -9223372036854775808, PartitionNum: 1, ReplicaNum: -9223372036854775808, SchemaVersion: null, PartitionInfo: null
      2022-12-01 07:17:40 [INFO] Query table entry and primary key for table: "test_tbl1" finished. Remain: 0
      2022-12-01 07:17:40 [INFO] Calculate leader: null of table: "test_tbl1", part: 0. Remain: 0
      2022-12-01 07:17:40 [INFO] Waiting to refresh observer load status ......
      2022-12-01 07:17:40 [INFO] Refresh the observer load status success. Table: "test_tbl1". Remain: 0
      2022-12-01 07:17:40 [INFO] Refresh observer load status finished. Elapsed: 1.203 ms
      2022-12-01 07:17:40 [INFO] Use c.l.d.PhasedBackoffWaitStrategy as available cpu(s) is 64
      2022-12-01 07:17:40 [INFO] Create 4096 slots for ring buffer finished. [0.0.0.0]
      2022-12-01 07:17:40 [INFO] Start 128 database writer threads finished. [0.0.0.0]
      2022-12-01 07:17:40 [INFO] Start 128 sql file reader threads successed
      2022-12-01 07:17:40 [INFO] File: "/home/admin/test_sql/data/test_tbl1.sql" has been parsed finished
      2022-12-01 07:17:42 [INFO] Wait for the all the workers to drain of published events then halt the workers
      2022-12-01 07:17:42 [INFO] Close connection count: 4 of the DataSource. Key: xxx.xxx.xxx.1_11532_332378361_test_data
      2022-12-01 07:17:42 [INFO] Shutdown task context finished
      2022-12-01 07:17:42 [INFO]
      Finished Tasks: 1       Running Tasks: 0        Progress: 100.00%
      2022-12-01 07:17:42 [INFO]
      
      All Load Tasks Finished:
      
      ----------------------------------------------------------------------------------------------------------------------------
              No.#        |        Type        |             Name             |            Count             |       Status
      ----------------------------------------------------------------------------------------------------------------------------
              1          |       TABLE        |          test_tbl1           |            5 -> 5            |      SUCCESS
      ----------------------------------------------------------------------------------------------------------------------------
      
      Total Count: 5          End Time: 2022-12-01 07:17:42
      
      
      2022-12-01 07:17:42 [INFO] Load record finished. Total Elapsed: 2.296 s
      2022-12-01 07:17:42 [INFO] System exit 0
      
