You can import SQL data files into OceanBase Database by using the command-line interface (CLI) or dedicated tools. OceanBase Database supports various import tools, such as MySQLDumper and obloader, as well as GUI tools such as OceanBase Developer Center (ODC).
This topic describes how to use the CLI, obloader, and ODC to import data in an SQL file to OceanBase Database.
Configure SQL file information
The SQL file to be imported to OceanBase Database must comply with the syntax requirements of OceanBase Database.
For more information about the SQL syntax requirements of OceanBase Database, see SQL syntax (MySQL mode) and SQL syntax (Oracle mode).
Here is information about a sample SQL file:
[xxx@xxx /home/admin/test_sql]# cat test_tbl1.sql
DROP TABLE IF EXISTS test_data.test_tbl1;
CREATE TABLE IF NOT EXISTS test_data.test_tbl1(col1 INT,col2 VARCHAR(50),PRIMARY KEY (col1));
INSERT INTO test_data.test_tbl1 VALUES(1,'test1');
INSERT INTO test_data.test_tbl1 VALUES(2,'test2');
INSERT INTO test_data.test_tbl1 VALUES(3,'test3');
INSERT INTO test_data.test_tbl1 VALUES(4,'test4');
INSERT INTO test_data.test_tbl1 VALUES(5,'test5');
Use the CLI to import data
Log in to OceanBase Database from the server where the SQL file is stored.
[xxx@xxx /home/admin]# obclient -hxxx.xxx.xxx.1 -P2881 -uroot@mysql001 -p -A
Enter password:
Welcome to the OceanBase.  Commands end with ; or \g.
Your OceanBase connection id is 3221709595
Server version: OceanBase 4.0.0.0 (r100000302022111120-7cef93737c5cd03331b5f29130c6e80ac950d33b) (Built Nov 11 2022 20:38:33)
Copyright (c) 2000, 2018, OceanBase and/or its affiliates. All rights reserved.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
obclient [(none)]>
Create the test_data schema.
obclient [(none)]> CREATE DATABASE test_data;
Query OK, 1 row affected
Use the SOURCE command to import the data in the SQL file.
obclient [(none)]> SOURCE /home/admin/test_tbl1.sql
Query OK, 0 rows affected, 1 warning
Query OK, 0 rows affected
Query OK, 1 row affected
Query OK, 1 row affected
Query OK, 1 row affected
Query OK, 1 row affected
Query OK, 1 row affected
Query the imported data.
obclient [(none)]> USE test_data;
Database changed
obclient [test_data]> SHOW TABLES;
+---------------------+
| Tables_in_test_data |
+---------------------+
| test_tbl1           |
+---------------------+
1 row in set
obclient [test_data]> SELECT * FROM test_tbl1;
+------+-------+
| col1 | col2  |
+------+-------+
|    1 | test1 |
|    2 | test2 |
|    3 | test3 |
|    4 | test4 |
|    5 | test5 |
+------+-------+
5 rows in set
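Alternatively, you can run the import non-interactively by redirecting the SQL file into obclient from the shell. This is a minimal sketch, assuming obclient accepts standard input redirection in the same way as the MySQL client; it reuses the sample connection values above, and the test_data database must already exist because the sample file references it by name.
[xxx@xxx /home/admin]# obclient -hxxx.xxx.xxx.1 -P2881 -uroot@mysql001 -p < /home/admin/test_tbl1.sql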
Use ODC to import data
Files imported in one batch by using ODC cannot exceed 2 GB in total after compression. To import data of a larger size, use the data import tool obloader.
The following section describes how to use Web ODC to import data.
Prepare the ODC environment.
ODC is available in two forms: Web ODC and Client ODC. For more information, see Deployment overview and Install ODC.
Connect to OceanBase Database.
For more information about how to connect to OceanBase Database, see Create a private connection.
Go to the Import Settings panel.
After you open a database connection, click Tool in the top navigation bar, and then select Import from the drop-down list to go to the Import Settings panel.
Upload the SQL file.
Perform the following steps:
- Select SQL Files from the Import Format drop-down list.
- Click in the file pool under Import File to go to the file explorer and select the file to be imported. You can also directly drag the file to the file pool to upload it. The format of the uploaded file must be the same as the value you specified for Import Format. To upload a file in the SQL format, you must first package it into an .sql file.
- Select an encoding standard from the Encoding Standard drop-down list. ODC supports the following encoding standards: ASCII, ISO-8859-1, GB2312, GBK, GB18030, Unicode (UTF-8), Unicode (UTF-16), Unicode (UTF-32), and BIG5.
- Click Next: Import Settings.
Configure the import settings.
Perform the following steps:
Specify Database: Select the database to which the data object is to be imported. The name of the database connection to which the task belongs is also displayed.
Select Execute Immediately or Execute On Schedule for Execution Mode.
Specify sys Tenant Settings: After you specify the account and password, click Test Connection next to the password field to verify whether the account information is valid. By default, the account for the connection settings is automatically filled in. If the connection fails, we recommend that you change the password for this import.
Notice
The account and password for the sys tenant are those of a user in the cluster's sys tenant. Do not specify the account in the "@sys#cluster" format; for example, enter root rather than root@sys#cluster_name.
Click Submit.
View the import task.
After the task is generated, the Task Center page automatically appears, where you can view the task information. For more information, see Import tasks.
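After the import task succeeds, you can confirm the result with an ordinary query in the SQL window or from obclient. This is a minimal sketch, assuming the sample test_data.test_tbl1 file from the beginning of this topic was imported; the expected count of 5 matches its five INSERT statements.
obclient [(none)]> SELECT COUNT(*) FROM test_data.test_tbl1;
+----------+
| COUNT(*) |
+----------+
|        5 |
+----------+
1 row in set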
Use obloader to import data
Procedure
Configure an SQL file.
Prepare the running environment for obloader. For more information, see Prepare the environment and Download obloader & obdumper.
Import data.
[xxx@xxx /ob-loader-dumper-3.3.2-SNAPSHOT/bin] $./obloader -h <host IP address> -P <port number> -u <username> -p <password> --sys-user <root or proxyro user in the sys tenant> --sys-password <password of the user in the sys tenant> -c <cluster> -t <tenant> -D <schema name> [--ddl] [--csv|--sql] [--all|--table 'table name'] -f <data file or directory>
Note
If you specify the host IP address and port number of a physical node to import data, you do not need to specify the -c option.
For more information about the command-line options of obloader, see Command-line options.
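For example, here is a sketch of a data import issued directly against an observer node instead of ODP, so the -c option is omitted. The IP address, credentials, and paths are placeholders based on the sample values used later in this topic, and the SQL port 2881 is only an assumption that matches the CLI login example above.
[xxx@xxx /ob-loader-dumper-3.3.2-SNAPSHOT/bin] $./obloader -h xxx.xxx.xxx.1 -P 2881 -u obloader_user01 -p '******' --sys-user root --sys-password '******' -t mysql001 -D test_data --sql --all -f /home/admin/test_sql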
Examples
The following table describes the information about the database that is used in the following examples.
| Database information | Example value |
|---|---|
| Cluster name | test4000 |
| Host IP address of OceanBase Database Proxy (ODP) | xxx.xxx.xxx.1 |
| Port number of ODP | 2883 |
| Business tenant name | mysql001 |
| Password of the root user in the sys tenant | ****** |
| User account (with read/write privileges) in the business tenant | obloader_user01 |
| Password of the user in the business tenant | ****** |
| Schema name | test_data |
Import table schemas
Notice
- When you import a table schema, the content of the SQL file is DDL statements, and the --ddl option is used. The SQL statements cannot include comments or SET statements.
- The SQL file must be placed in the data/TABLE folder under the path specified in the -f option, for example, /home/admin/test_sql/data/TABLE/test_tbl1-schema.sql. See the directory layout sketch after this list.
- The name of the SQL file is in the table name-schema.sql format.
- When you import data, specify the --sys-user and --sys-password options, so that obloader can obtain the metadata of the table schema.
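Putting these rules together, the directory passed to -f in the following example is expected to be laid out as follows (a sketch based on the sample paths above):
/home/admin/test_sql
└── data
    └── TABLE
        └── test_tbl1-schema.sql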
Scenario: Import an SQL schema file in the /home/admin/test_sql/data/TABLE directory to the test_data schema of the mysql001 tenant in the test4000 cluster.
Here is an example:
Create an SQL file.
[root@xxx /home/admin]# mkdir -p /home/admin/test_sql/data/TABLE
[root@xxx /home/admin]# cd /home/admin/test_sql/data/TABLE
[root@xxx /home/admin/test_sql/data/TABLE]# vi test_tbl1-schema.sql
[root@xxx /home/admin/test_sql/data/TABLE]# cat test_tbl1-schema.sql
CREATE TABLE IF NOT EXISTS test_tbl1(col1 INT,col2 VARCHAR(50),PRIMARY KEY (col1));
Prepare the running environment for obloader. For more information, see Prepare the environment and Download obloader & obdumper.
Import the table schema.
[root@xxx /home/admin/ob-loader-dumper-3.3.2-SNAPSHOT/bin] $./obloader -h xxx.xxx.xxx.1 -P 2883 -u obloader_user01 -p ****** --sys-user root --sys-password ****** -c test4000 -t mysql001 -D test_data --ddl --all -f /home/admin/test_sql
2022-12-01 07:11:32 [INFO] Parsed args:
[--host] xxx.xxx.xxx.1
[--port] 2883
[--user] obloader_user01
[--tenant] mysql001
[--cluster] test4000
[--password] ******
[--database] test_data
[--sys-user] root
[--sys-password] ******
[--ddl] true
[--file-path] /home/admin/test_sql
[--all] true
2022-12-01 07:11:32 [INFO] Load jdbc driver class: "com.oceanbase.jdbc.Driver" finished
2022-12-01 07:11:33 [INFO] The manifest file: "/home/admin/test_sql/data/MANIFEST.bin" has been saved
2022-12-01 07:11:33 [INFO] Init writer thread pool finished
2022-12-01 07:11:33 [WARN] The object type : "SEQUENCE" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "TABLE_GROUP" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [INFO] Start 128 schema file loader threads succeeded
2022-12-01 07:11:33 [INFO] No.1 sql of the file: "/home/admin/test_sql/data/TABLE/test_tbl1-schema.sql" exec success . Elapsed: 36.94 ms
2022-12-01 07:11:33 [INFO] Load file: "test_tbl1-schema.sql" succeeded
2022-12-01 07:11:33 [WARN] The object type : "VIEW" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "FUNCTION" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "PROCEDURE" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "TRIGGER" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "PACKAGE" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "TYPE" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "PACKAGE_BODY" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "TYPE_BODY" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "SYNONYM" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "PUBLIC_SYNONYM" doesn't exist in the -schema.sql files
2022-12-01 07:11:33 [WARN] The object type : "FILE" doesn't exist in the -schema.sql files
2022-12-01 07:11:34 [INFO] Close connection count: 1 of the DataSource. Key: xxx.xxx.xxx.1_11532_332378361_test_data
2022-12-01 07:11:34 [INFO] Shutdown task context finished
2022-12-01 07:11:34 [INFO] Finished Tasks: 1       Running Tasks: 0       Progress: 100.00%
2022-12-01 07:11:34 [INFO] All Load Tasks Finished:
----------------------------------------------------------------------------
 No.# | Type  |   Name    | Count  | Status
----------------------------------------------------------------------------
  1   | TABLE | test_tbl1 | 1 -> 1 | SUCCESS
----------------------------------------------------------------------------
Total Count: 1          End Time: 2022-12-01 07:11:34
2022-12-01 07:11:34 [INFO] Load schema finished. Total Elapsed: 1.061 s
2022-12-01 07:11:34 [INFO] System exit 0
Import table data
Notice
- When you import table data, the content of the SQL file is DML statements, and the --sql option is used. The SQL file contains only INSERT statements, and each statement occupies one line without line breaks.
- The SQL file must be placed in the data folder under the path specified in the -f option, for example, /home/admin/test_sql/data/test_tbl1.sql. See the directory layout sketch after this list.
- The name of the SQL file is in the table name.sql format.
- The table that corresponds to the SQL file must exist in the database or schema.
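Correspondingly, the directory passed to -f in the following example is expected to be laid out as follows (a sketch based on the sample paths above; the -schema.sql file from the previous example may also still be present under data/TABLE):
/home/admin/test_sql
└── data
    └── test_tbl1.sql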
Scenario: Import an SQL data file in the /home/admin/test_sql/data/ directory to the test_data schema of the mysql001 tenant in the test4000 cluster.
Here is an example:
Create an SQL file.
[root@xxx /home/admin]# mkdir -p /home/admin/test_sql/data
[root@xxx /home/admin]# cd /home/admin/test_sql/data
[root@xxx /home/admin/test_sql/data]# vi test_tbl1.sql
[root@xxx /home/admin/test_sql/data]# cat test_tbl1.sql
INSERT INTO test_tbl1 VALUES(1,'test1');
INSERT INTO test_tbl1 VALUES(2,'test2');
INSERT INTO test_tbl1 VALUES(3,'test3');
INSERT INTO test_tbl1 VALUES(4,'test4');
INSERT INTO test_tbl1 VALUES(5,'test5');
Prepare the running environment for obloader. For more information, see Prepare the environment and Download obloader & obdumper.
Import table data.
[root@xxx /home/admin/ob-loader-dumper-3.3.2-SNAPSHOT/bin] $./obloader -h xxx.xxx.xxx.1 -P 2883 -u obloader_user01 -p ****** --sys-user root --sys-password ****** -c test4000 -t mysql001 -D test_data --sql --all -f /home/admin/test_sql
2022-12-01 07:17:39 [INFO] Parsed args:
[--host] xxx.xxx.xxx.1
[--port] 2883
[--user] obloader_user01
[--tenant] mysql001
[--cluster] test4000
[--password] ******
[--database] test_data
[--sys-user] root
[--sys-password] ******
[--sql] true
[--file-path] /home/admin/test_sql
[--all] true
2022-12-01 07:17:39 [INFO] Load jdbc driver class: "com.oceanbase.jdbc.Driver" finished
2022-12-01 07:17:40 [INFO] The manifest file: "/home/admin/test_sql/data/MANIFEST.bin" has been saved
2022-12-01 07:17:40 [INFO] Query the column metadata for the table: "test_tbl1" finished
2022-12-01 07:17:40 [WARN] File: "/home/admin/test_sql/data/TABLE/test_tbl1-schema.sql" is unmatched on the suffix[.sql], ignore it
2022-12-01 07:17:40 [WARN] File: "/home/admin/test_sql/data/MANIFEST.bin" is unmatched on the suffix[.sql], ignore it
2022-12-01 07:17:40 [INFO] Binding table: "test_tbl1" to the file: "/home/admin/test_sql/data/test_tbl1.sql" finished
2022-12-01 07:17:40 [INFO] File: "/home/admin/test_sql/data/test_tbl1.sql" has not been splitted. 205 < 67108864
2022-12-01 07:17:40 [INFO] Splitted 1 sql subfiles by 64.0 MB. Elapsed: 15.64 ms
2022-12-01 07:17:40 [INFO] Generate 1 subfiles finished
2022-12-01 07:17:40 [INFO] Ignore to clean any tables as --truncate-table or --delete-from-table is not specified
2022-12-01 07:17:40 [ERROR] Invalid table entry: TableEntryKey [ cluster: test4000, tenant: mysql001, database: test_data, table: test_tbl1 ], TableId: -9223372036854775808, PartitionNum: 1, ReplicaNum: -9223372036854775808, SchemaVersion: null, PartitionInfo: null
2022-12-01 07:17:40 [INFO] Query table entry and primary key for table: "test_tbl1" finished. Remain: 0
2022-12-01 07:17:40 [INFO] Calculate leader: null of table: "test_tbl1", part: 0. Remain: 0
2022-12-01 07:17:40 [INFO] Waiting to refresh observer load status ......
2022-12-01 07:17:40 [INFO] Refresh the observer load status success. Table: "test_tbl1". Remain: 0
2022-12-01 07:17:40 [INFO] Refresh observer load status finished. Elapsed: 1.203 ms
2022-12-01 07:17:40 [INFO] Use c.l.d.PhasedBackoffWaitStrategy as available cpu(s) is 64
2022-12-01 07:17:40 [INFO] Create 4096 slots for ring buffer finished. [0.0.0.0]
2022-12-01 07:17:40 [INFO] Start 128 database writer threads finished. [0.0.0.0]
2022-12-01 07:17:40 [INFO] Start 128 sql file reader threads succeeded
2022-12-01 07:17:40 [INFO] File: "/home/admin/test_sql/data/test_tbl1.sql" has been parsed finished
2022-12-01 07:17:42 [INFO] Wait for the all the workers to drain of published events then halt the workers
2022-12-01 07:17:42 [INFO] Close connection count: 4 of the DataSource. Key: xxx.xxx.xxx.1_11532_332378361_test_data
2022-12-01 07:17:42 [INFO] Shutdown task context finished
2022-12-01 07:17:42 [INFO] Finished Tasks: 1       Running Tasks: 0       Progress: 100.00%
2022-12-01 07:17:42 [INFO] All Load Tasks Finished:
----------------------------------------------------------------------------
 No.# | Type  |   Name    | Count  | Status
----------------------------------------------------------------------------
  1   | TABLE | test_tbl1 | 5 -> 5 | SUCCESS
----------------------------------------------------------------------------
Total Count: 5          End Time: 2022-12-01 07:17:42
2022-12-01 07:17:42 [INFO] Load record finished. Total Elapsed: 2.296 s
2022-12-01 07:17:42 [INFO] System exit 0
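To double-check that the rows reached the target table, you can query it through ODP with the business tenant user. This is a minimal sketch, assuming obclient supports the MySQL-style -e option and the user@tenant#cluster connection format used by ODP; the host, port, and account are the sample values from the table above.
[xxx@xxx /home/admin]# obclient -hxxx.xxx.xxx.1 -P2883 -uobloader_user01@mysql001#test4000 -p -Dtest_data -e "SELECT COUNT(*) FROM test_tbl1;"
Enter password:
+----------+
| COUNT(*) |
+----------+
|        5 |
+----------+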