Use obloader & obdumper to migrate data from a MySQL tenant to an Oracle tenant in OceanBase Database

2025-01-26 08:21:58  Updated

This topic describes how to use obloader & obdumper to migrate data from a MySQL tenant to an Oracle tenant in OceanBase Database.

About obloader & obdumper

OceanBase Database provides the data export tool obdumper and the data import tool obloader.

  • obdumper is a client data export tool developed in Java. You can use it to export data and the DDL statements of objects in OceanBase Database to files. For more information about obdumper, see obdumper overview.

  • obloader is a client data import tool developed in Java. obloader provides extensive command-line options that allow you to import definitions and data to OceanBase Database in many complex scenarios. For more information about obloader, see obloader overview.

Procedure

  1. Prepare the running environment for obdumper and obloader. For more information, see Prepare the environment and Download tools.

  2. Create a directory to store the exported data.

  3. Export the table schema and data.

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $./obdumper -h <host IP address> -P <port number> -u <username> -p <password> [--sys-user <username in the sys tenant> --sys-password <password of the specified user in the sys tenant>] -c <cluster name> -t <tenant> -D <schema name> [--ddl] [--csv|--sql] [--all|--table 'table name'] -f <data file or directory>
    

    Note

    • When you use the host IP address and port number of a physical node to export data, you do not need to specify the -c option.
    • For more information about obdumper command-line options, see Command-line options.
  4. Import the table schema and data.

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $./obloader -h <host IP address> -P <port number> -u <username> -p <password> --sys-user <root or proxyro user in the sys tenant> --sys-password <password of the user in the sys tenant> -c <cluster> -t <tenant> -D <schema name> [--ddl] [--csv|--sql] [--all|--table 'table name'] -f <data file or directory>
    

    Note

    • When you use the host IP address and port number of a physical node to import data, you do not need to specify the -c option.
    • For more information about obloader command-line options, see Command-line options.

Examples

The following table describes the database information that is used in the examples.

Database information | Example value
Cluster name | test4000
IP address of the OceanBase Database Proxy (ODP) host | xxx.xxx.xxx.xxx
ODP port number | 2883
Password of the root user in the sys tenant | ******
Source tenant name (MySQL mode) | mysql001
User account of the mysql001 tenant (with read and write privileges) | obdumper_user01
Password of the obdumper_user01 user in the mysql001 tenant | ******
Schema name in the mysql001 tenant | test_data
Destination tenant name (Oracle mode) | oracle001
User account of the oracle001 tenant (with read and write privileges) | obloader_user01
Password of the obloader_user01 user in the oracle001 tenant | ******
Schema name in the oracle001 tenant | OBLOADER_USER01

Migrate a table schema

When you use obdumper and obloader to migrate a table schema, specify the --sys-user and --sys-password options so that the tools can obtain table schema metadata from the sys tenant. If you do not specify these options, export and import functionality and performance may be significantly affected.

Note

The --sys-user option specifies a user with the required privileges in the sys tenant. If you do not specify this option, --sys-user root is used by default.

Export DDL definition files

Scenario: Export all supported object definition statements from the test_data schema of the mysql001 tenant in the test4000 cluster to the /home/admin/test_migrate_data/ddl_data directory.

  • The sample statement is as follows:

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $./obdumper -h xxx.xxx.xxx.xxx -P 2883 -u obdumper_user01 -p ****** -c test4000 -t mysql001 -D test_data --sys-user root --sys-password ****** --ddl --all -f /home/admin/test_migrate_data/ddl_data
    2022-12-26 14:11:57 [INFO] Parsed args:
    [--ddl] true
    [--file-path] /home/admin/test_migrate_data/ddl_data
    [--host] xxx.xxx.xxx.xxx
    [--port] 2883
    [--user] obdumper_user01
    [--tenant] mysql001
    [--cluster] test4000
    [--password] ******
    [--database] test_data
    [--sys-user] root
    [--sys-password] ******
    [--all] true
    
    2022-12-26 14:11:57 [INFO] Load jdbc driver class: "com.oceanbase.jdbc.Driver" finished
    2022-12-26 14:11:57 [INFO] The manifest file: "/home/admin/test_migrate_data/ddl_data/data/MANIFEST.bin" has been saved
    2022-12-26 14:11:58 [INFO] Query column metadata for the table: "test_tbl1" finished
    2022-12-26 14:11:58 [INFO] Query column metadata for the table: "test_tbl2" finished
    2022-12-26 14:11:58 [WARN] No views are exist in the schema: "test_data"
    2022-12-26 14:11:58 [WARN] No functions are exist in the schema: "test_data"
    2022-12-26 14:11:58 [WARN] No procedures are exist in the schema: "test_data"
    2022-12-26 14:11:58 [WARN] No table groups are exist in the tenant: "mysql001"
    2022-12-26 14:11:58 [INFO] Generate 1 dump tasks finished. Total Elapsed: 3.926 ms
    2022-12-26 14:11:58 [INFO] Start 3 schema dump threads for 1 dump tasks finished.
    2022-12-26 14:11:58 [INFO] Build direct com.alibaba.druid.pool.DruidDataSource finished
    2022-12-26 14:11:58 [INFO] No need to acquire DataSource for xxx@sys, as observer is 4.0.0.0
    2022-12-26 14:11:58 [INFO] Return the latest compatible version: 4.0.0.0 -> 4.0.0.0
    2022-12-26 14:11:58 [INFO] Dump create objects success. DbType: OBMYSQL Version: 4.0.0.0
    2022-12-26 14:11:58 [INFO] ObMySql(4.0.0.0) is older than 4.0 ? false
    2022-12-26 14:11:58 [INFO] Load meta/obmysql/obmysql14x.xml, meta/obmysql/obmysql22x.xml, meta/obmysql/obmysql2271.xml, meta/obmysql/obmysql3230.xml, meta/obmysql/obmysql40x.xml successed
    2022-12-26 14:11:58 [INFO] Query 0 dependencies elapsed 31.35 ms
    2022-12-26 14:12:02 [INFO] ----------   Finished Tasks: 0       Running Tasks: 1        Progress: 0.00%                                                              ----------
    2022-12-26 14:12:03 [INFO] Query table: "test_tbl1" attr finished. Remain: 0
    2022-12-26 14:12:03 [INFO] Query table: "test_tbl2" attr finished. Remain: 0
    2022-12-26 14:12:03 [INFO] Query 2 tables elapsed 5.721 s
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquireTablespaceMapping()
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquireSequenceMapping()
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquireSynonymMapping()
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquireTypeMapping()
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquireTypeBodyMapping()
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquirePackageMapping()
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquirePackageBodyMapping()
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquireTriggerMapping()
    2022-12-26 14:12:03 [WARN] c.o.t.l.s.o.ObMySqlDatabase does't implement acquireDatabaseLinkMapping()
    2022-12-26 14:12:03 [INFO] Dump [TABLE] test_tbl1 to "/home/admin/test_migrate_data/ddl_data/data/test_data/TABLE/test_tbl1-schema.sql " finished
    2022-12-26 14:12:03 [INFO] Dump [TABLE] test_tbl2 to "/home/admin/test_migrate_data/ddl_data/data/test_data/TABLE/test_tbl2-schema.sql " finished
    2022-12-26 14:12:03 [INFO] No.1 It has dumped 2 TABLEs finished. Remain: 0
    2022-12-26 14:12:03 [INFO] Total dumped 2 TABLEs finished. Elapsed: 5.818 s
    2022-12-26 14:12:03 [INFO] Dump the ddl of schema: "test_data" finished
    2022-12-26 14:12:04 [INFO] Close connection count: 24 of the DataSource. Key: xxx.xxx.xxx.xxx_11532_2105466567_test_data
    2022-12-26 14:12:04 [INFO] Shutdown task context finished
    2022-12-26 14:12:04 [INFO] ----------   Finished Tasks: 1       Running Tasks: 0        Progress: 100.00%                                                            ----------
    2022-12-26 14:12:04 [INFO]
    
    All Dump Tasks Finished:
    
    ----------------------------------------------------------------------------------------------------------------------------
          No.#        |        Type        |             Name             |            Count             |       Status
    ----------------------------------------------------------------------------------------------------------------------------
             1          |       TABLE        |          test_tbl1           |              1               |      SUCCESS
             2          |       TABLE        |          test_tbl2           |              1               |      SUCCESS
    ----------------------------------------------------------------------------------------------------------------------------
    
    Total Count: 2          End Time: 2022-12-26 14:12:04
    
    
    2022-12-26 14:12:04 [INFO] Dump schema finished. Total Elapsed: 6.076 s
    2022-12-26 14:12:04 [INFO] Unnecessary to upload the data files to the remote cloud storage service
    2022-12-26 14:12:04 [INFO] System exit 0
    
  • The exported directory structure is as follows:

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $tree /home/admin/test_migrate_data/ddl_data
    /home/admin/test_migrate_data/ddl_data
    ├── data
    │   ├── CHECKPOINT.bin
    │   ├── MANIFEST.bin
    │   └── test_data
    │       └── TABLE
    │           ├── test_tbl1-schema.sql
    │           └── test_tbl2-schema.sql
    └── logs
       ├── ob-loader-dumper.error
       ├── ob-loader-dumper.info
       └── ob-loader-dumper.warn
    
    4 directories, 7 files
    
  • View the exported files.

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $cat /home/admin/test_migrate_data/ddl_data/data/test_data/TABLE/test_tbl1-schema.sql
    create table if not exists `test_tbl1` (
          `col1` int(11) not null,
          `col2` varchar(20),
          `col3` int(11),
          primary key (`col1`)
    )
    default charset=utf8mb4
    default collate=utf8mb4_general_ci;
    

Import DDL definition files

Scenario: Import the definition files previously exported from the test_data schema of the mysql001 tenant in the test4000 cluster to the /home/admin/test_migrate_data/ddl_data directory into the obloader_user01 schema of the oracle001 tenant in the same cluster.

Notice

DDL files exported from a MySQL tenant do not meet the syntax requirements of an Oracle tenant. Therefore, before you import the DDL files to the Oracle tenant, you must first modify the files so that the files meet the syntax requirements of the Oracle tenant.

For example, for a DDL file exported from the mysql001 tenant, make the following modifications:

  • Remove the if not exists keywords and the backticks (`).
  • Convert the data types to types supported in Oracle mode:
    • Convert int(11) to number(11).
    • Convert varchar(20) to varchar2(20).
  • Remove the default character set and collation clauses default charset=utf8mb4 and default collate=utf8mb4_general_ci.
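The manual modifications listed above can also be scripted. The following Python sketch is illustrative only (the function name and the regex coverage are assumptions, not part of obdumper or obloader) and applies just the rewrites listed; real-world DDL may need further conversions:

```python
import re

def mysql_ddl_to_oracle(ddl: str) -> str:
    """Rewrite MySQL-mode DDL so that it parses in an Oracle-mode tenant.

    Covers only the rewrites listed above: drop IF NOT EXISTS and
    backticks, map int(n) -> number(n) and varchar(n) -> varchar2(n),
    and strip the default charset/collate clauses.
    """
    ddl = re.sub(r"\bif not exists\s+", "", ddl, flags=re.IGNORECASE)
    ddl = ddl.replace("`", "")  # remove backtick identifier quoting
    ddl = re.sub(r"\bint\((\d+)\)", r"number(\1)", ddl, flags=re.IGNORECASE)
    ddl = re.sub(r"\bvarchar\((\d+)\)", r"varchar2(\1)", ddl, flags=re.IGNORECASE)
    ddl = re.sub(r"\s*default charset=\w+", "", ddl, flags=re.IGNORECASE)
    ddl = re.sub(r"\s*default collate=\w+", "", ddl, flags=re.IGNORECASE)
    return ddl
```

Before importing at scale, run the converted files through a SQL client against the Oracle tenant to confirm that they parse.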
  • The sample statement is as follows:

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $./obloader -h xxx.xxx.xxx.xxx -P 2883 -u obloader_user01 -p ****** --sys-user root --sys-password ****** -c test4000 -t oracle001 -D obloader_user01 --ddl --all -f /home/admin/test_migrate_data/ddl_data
    2022-12-26 16:16:35 [INFO] Parsed args:
    [--ddl] true
    [--file-path] /home/admin/test_migrate_data/ddl_data
    [--host] xxx.xxx.xxx.xxx
    [--port] 2883
    [--user] obloader_user01
    [--tenant] oracle001
    [--cluster] test4000
    [--password] ******
    [--database] obloader_user01
    [--sys-user] root
    [--sys-password] ******
    [--all] true
    
    2022-12-26 16:16:35 [INFO] Load jdbc driver class: "com.oceanbase.jdbc.Driver" finished
    2022-12-26 16:16:35 [INFO] The manifest file: "/home/admin/test_migrate_data/ddl_data/data/MANIFEST.bin" has been saved
    2022-12-26 16:16:35 [INFO] Init writer thread pool finished
    2022-12-26 16:16:35 [WARN] The object type : "SEQUENCE" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "TABLE_GROUP" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [INFO] Start 128 schema file loader threads successed
    2022-12-26 16:16:35 [INFO] No.1 sql of the file: "/home/admin/test_migrate_data/ddl_data/data/test_data/TABLE/test_tbl2-schema.sql" exec success. Elapsed: 66.25 ms
    2022-12-26 16:16:35 [INFO] Load file: "test_tbl2-schema.sql" succeeded
    2022-12-26 16:16:35 [INFO] No.1 sql of the file: "/home/admin/test_migrate_data/ddl_data/data/test_data/TABLE/test_tbl1-schema.sql" exec success. Elapsed: 122.9 ms
    2022-12-26 16:16:35 [INFO] Load file: "test_tbl1-schema.sql" succeeded
    2022-12-26 16:16:35 [WARN] The object type : "VIEW" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "FUNCTION" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "PROCEDURE" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "TRIGGER" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "PACKAGE" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "TYPE" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "PACKAGE_BODY" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "TYPE_BODY" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "SYNONYM" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "PUBLIC_SYNONYM" doesn't exist in the -schema.sql files
    2022-12-26 16:16:35 [WARN] The object type : "FILE" doesn't exist in the -schema.sql files
    2022-12-26 16:16:36 [INFO] Close connection count: 1 of the DataSource. Key: xxx.xxx.xxx.xxx_11532_1709388295_obloader_user01
    2022-12-26 16:16:36 [INFO] Shutdown task context finished
    2022-12-26 16:16:36 [INFO] ----------   Finished Tasks: 2       Running Tasks: 0        Progress: 100.00%                                                            ----------
    2022-12-26 16:16:36 [INFO]
    
    All Load Tasks Finished:
    
    ----------------------------------------------------------------------------------------------------------------------------
          No.#        |        Type        |             Name             |            Count             |       Status
    ----------------------------------------------------------------------------------------------------------------------------
             1          |       TABLE        |          test_tbl1           |            1 -> 1            |      SUCCESS
             2          |       TABLE        |          test_tbl2           |            1 -> 1            |      SUCCESS
    ----------------------------------------------------------------------------------------------------------------------------
    
    Total Count: 2          End Time: 2022-12-26 16:16:36
    
    
    2022-12-26 16:16:36 [INFO] Load schema finished. Total Elapsed: 1.066 s
    2022-12-26 16:16:36 [INFO] System exit 0
    
  • Check the imported table schema.

    obclient [OBLOADER_USER01]> SELECT TABLE_NAME FROM USER_TABLES;
    +------------+
    | TABLE_NAME |
    +------------+
    | TEST_TBL2  |
    | TEST_TBL1  |
    +------------+
    2 rows in set (0.034 sec)
    
    obclient [OBLOADER_USER01]> SHOW CREATE TABLE test_tbl1\G
    *************************** 1. row ***************************
          TABLE: TEST_TBL1
    CREATE TABLE: CREATE TABLE "TEST_TBL1" (
    "COL1" NUMBER(11) CONSTRAINT "TEST_TBL1_OBNOTNULL_1672042595521895" NOT NULL ENABLE,
    "COL2" VARCHAR2(20),
    "COL3" NUMBER(11),
    CONSTRAINT "TEST_TBL1_OBPK_1672042595521915" PRIMARY KEY ("COL1")
    ) COMPRESS FOR ARCHIVE REPLICA_NUM = 3 BLOCK_SIZE = 16384 USE_BLOOM_FILTER = FALSE TABLET_SIZE = 134217728 PCTFREE = 0
    1 row in set
    

Migrate table data

This section uses CSV data files as an example to describe how to use obdumper and obloader to migrate table data.

Export CSV data files

File format definition: A CSV data file has the .csv file name extension. CSV is a file format that uses commas (,) as the separator. CSV data files store tabular data in the form of plain text. You can open them by using a text editor or Excel.
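As a quick illustration of the format, such a file can be parsed with Python's csv module. The single quotation mark as the text qualifier is an assumption based on the sample export shown later in this topic; obdumper's separators and qualifiers are configurable:

```python
import csv
import io

# Two lines in the shape of the exported files: comma-separated fields,
# single quotation marks as the text qualifier (assumed, see above).
sample = "'col1','col2','col3'\n1,'China',86\n"

# rows[0] is the header row; subsequent rows hold the data as strings.
rows = list(csv.reader(io.StringIO(sample), delimiter=",", quotechar="'"))
```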

Scenario: Export the data of all supported objects from the test_data schema of the mysql001 tenant in the test4000 cluster to the /home/admin/test_migrate_data/csv_data directory in CSV format.

  • The sample statement is as follows:

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $./obdumper -h xxx.xxx.xxx.xxx -P 2883 -u obdumper_user01 -p ****** -c test4000 -t mysql001 -D test_data --sys-user root --sys-password ****** --csv --table '*' -f /home/admin/test_migrate_data/csv_data
    2022-12-26 14:51:18 [INFO] Parsed args:
    [--csv] true
    [--file-path] /home/admin/test_migrate_data/csv_data
    [--host] xxx.xxx.xxx.xxx
    [--port] 2883
    [--user] obdumper_user01
    [--tenant] mysql001
    [--cluster] test4000
    [--password] ******
    [--database] test_data
    [--sys-user] root
    [--sys-password] ******
    [--table] [*]
    
    2022-12-26 14:51:18 [INFO] Load jdbc driver class: "com.oceanbase.jdbc.Driver" finished
    2022-12-26 14:51:18 [INFO] The manifest file: "/home/admin/test_migrate_data/csv_data/data/MANIFEST.bin" has been saved
    2022-12-26 14:51:18 [INFO] Query column metadata for the table: "test_tbl1" finished
    2022-12-26 14:51:18 [INFO] Query column metadata for the table: "test_tbl2" finished
    2022-12-26 14:51:18 [INFO] Query partition names for table: "test_tbl1" success. (Non-partitioned)
    2022-12-26 14:51:18 [INFO] Query partition names for table: "test_tbl2" success. (Non-partitioned)
    2022-12-26 14:51:18 [INFO] Query primary key for table: "test_tbl1" success. Elapsed: 12.14 ms
    2022-12-26 14:51:18 [INFO] Query primary key for table: "test_tbl2" success. Elapsed: 12.24 ms
    2022-12-26 14:51:18 [INFO] Query table entry for table: "test_tbl1" success. Remain: 0. Elapsed: 2.868 ms
    2022-12-26 14:51:18 [INFO] Query table entry for table: "test_tbl2" success. Remain: 0. Elapsed: 2.868 ms
    2022-12-26 14:51:18 [INFO] Query all table entries success. Total: 2. Elapsed: 61.20 ms
    2022-12-26 14:51:18 [INFO] ....Splitting rows for non-partitioned table: "test_tbl1" success. Batch: 1
    2022-12-26 14:51:18 [INFO] ....Splitting rows for non-partitioned table: "test_tbl2" success. Batch: 1
    2022-12-26 14:51:18 [INFO] Split rows for non-partitioned table(with primary key): "test_tbl2" success. Ranges: 1. Elapsed: 38.42 ms
    2022-12-26 14:51:18 [INFO] Split rows for non-partitioned table(with primary key): "test_tbl1" success. Ranges: 1. Elapsed: 38.57 ms
    2022-12-26 14:51:18 [INFO] Generate 2 dump tasks finished. Total Elapsed: 53.46 ms
    2022-12-26 14:51:18 [INFO] Start 128 record dump threads for 2 dump tasks finished
    2022-12-26 14:51:18 [INFO] Dump 5 rows test_data.test_tbl2 to "/home/admin/test_migrate_data/csv_data/data/test_data/TABLE/test_tbl2.1.*.csv" finished
    2022-12-26 14:51:18 [INFO] Dump 5 rows test_data.test_tbl1 to "/home/admin/test_migrate_data/csv_data/data/test_data/TABLE/test_tbl1.1.*.csv" finished
    2022-12-26 14:51:19 [INFO] Close connection count: 12 of the DataSource. Key: xxx.xxx.xxx.xxx_11532_2105466567_test_data
    2022-12-26 14:51:20 [INFO] Shutdown task context finished
    2022-12-26 14:51:20 [INFO] ----------   Finished Tasks: 2       Running Tasks: 0        Progress: 100.00%                                                            ----------
    2022-12-26 14:51:20 [INFO]
    
    All Dump Tasks Finished:
    
    ----------------------------------------------------------------------------------------------------------------------------
          No.#        |        Type        |             Name             |            Count             |       Status
    ----------------------------------------------------------------------------------------------------------------------------
             1          |       TABLE        |          test_tbl1           |              5               |      SUCCESS
             2          |       TABLE        |          test_tbl2           |              5               |      SUCCESS
    ----------------------------------------------------------------------------------------------------------------------------
    
    Total Count: 10         End Time: 2022-12-26 14:51:20
    
    
    2022-12-26 14:51:20 [INFO] Unnecessary to merge the data files. As --file-name is missing
    2022-12-26 14:51:20 [INFO] Dump record finished. Total Elapsed: 1.206 s
    2022-12-26 14:51:20 [INFO] Unnecessary to upload the data files to the remote cloud storage service
    2022-12-26 14:51:20 [INFO] System exit 0
    
  • The exported directory structure is as follows:

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $tree /home/admin/test_migrate_data/csv_data
    /home/admin/test_migrate_data/csv_data
    ├── data
    │   ├── CHECKPOINT.bin
    │   ├── MANIFEST.bin
    │   └── test_data
    │       └── TABLE
    │           ├── test_tbl1.1.0.csv
    │           └── test_tbl2.1.0.csv
    └── logs
       ├── ob-loader-dumper.error
       ├── ob-loader-dumper.info
       └── ob-loader-dumper.warn
    
    4 directories, 7 files
    
  • View the exported files.

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $cat /home/admin/test_migrate_data/csv_data/data/test_data/TABLE/test_tbl1.1.0.csv
    'col1','col2','col3'
    1,'China',86
    2,'Taiwan',886
    3,'Hong Kong',852
    4,'Macao',853
    5,'North Korea',850
    

Import CSV data files

Scenario: Import the CSV data files previously exported from the test_data schema of the mysql001 tenant in the test4000 cluster to the /home/admin/test_migrate_data/csv_data directory into the OBLOADER_USER01 schema of the oracle001 tenant in the same cluster.

Notice

  • The exported data of the test_data schema resides in the /home/admin/test_migrate_data/csv_data/data/ directory. Rename the test_data subdirectory in that directory to OBLOADER_USER01.
  • Rename the exported CSV files so that the table-name part of each file name is uppercase, because Oracle mode stores unquoted identifiers in uppercase.

For example:

    [root@xxx /home/admin/test_migrate_data/csv_data/data]# mv test_data OBLOADER_USER01
    [root@xxx /home/admin/test_migrate_data/csv_data/data]# cd OBLOADER_USER01/TABLE
    [root@xxx /home/admin/test_migrate_data/csv_data/data/OBLOADER_USER01/TABLE]# ls
    test_tbl1.1.0.csv  test_tbl2.1.0.csv
    [root@xxx /home/admin/test_migrate_data/csv_data/data/OBLOADER_USER01/TABLE]# mv test_tbl1.1.0.csv TEST_TBL1.1.0.csv
    [root@xxx /home/admin/test_migrate_data/csv_data/data/OBLOADER_USER01/TABLE]# mv test_tbl2.1.0.csv TEST_TBL2.1.0.csv

  • The sample statement is as follows:

    [xxx@xxx /ob-loader-dumper-4.0.0-RELEASE/bin]
    $./obloader -h xxx.xxx.xxx.xxx -P 2883 -u obloader_user01 -p ****** --sys-user root --sys-password ****** -c test4000 -t oracle001 -D obloader_user01 --csv --table '*' -f /home/admin/test_migrate_data/csv_data
    2022-12-26 16:54:36 [INFO] Parsed args:
    [--csv] true
    [--file-path] /home/admin/test_migrate_data/csv_data
    [--host] xxx.xxx.xxx.xxx
    [--port] 2883
    [--user] obloader_user01
    [--tenant] oracle001
    [--cluster] test4000
    [--password] ******
    [--database] obloader_user01
    [--sys-user] root
    [--sys-password] ******
    [--table] [*]
    
    2022-12-26 16:54:36 [INFO] Load jdbc driver class: "com.oceanbase.jdbc.Driver" finished
    2022-12-26 16:54:36 [INFO] The manifest file: "/home/admin/test_migrate_data/csv_data/data/MANIFEST.bin" has been saved
    2022-12-26 16:54:36 [INFO] Query column metadata for the table: "TEST_TBL1" finished
    2022-12-26 16:54:36 [INFO] Query column metadata for the table: "TEST_TBL2" finished
    2022-12-26 16:54:36 [INFO] File: "/home/admin/test_migrate_data/csv_data/data/CHECKPOINT.bin" is unmatched on the suffix[.csv], ignore it
    2022-12-26 16:54:36 [INFO] Binding table: "TEST_TBL2" to the file: "/home/admin/test_migrate_data/csv_data/data/OBLOADER_USER01/TABLE/TEST_TBL2.1.0.csv" finished
    2022-12-26 16:54:36 [INFO] Binding table: "TEST_TBL1" to the file: "/home/admin/test_migrate_data/csv_data/data/OBLOADER_USER01/TABLE/TEST_TBL1.1.0.csv" finished
    2022-12-26 16:54:36 [INFO] File: "/home/admin/test_migrate_data/csv_data/data/MANIFEST.bin" is unmatched on the suffix[.csv], ignore it
    2022-12-26 16:54:36 [INFO] Splitted 2 csv subfiles by 64.0 MB. Elapsed: 19.73 ms
    2022-12-26 16:54:36 [INFO] Generate 2 subfiles finished
    2022-12-26 16:54:36 [INFO] Ignore to clean any tables as --truncate-table or --delete-from-table is not specified
    2022-12-26 16:54:37 [INFO] Query table entry and primary key for table: "TEST_TBL1" finished. Remain: 0
    2022-12-26 16:54:37 [INFO] Query table entry and primary key for table: "TEST_TBL2" finished. Remain: 1
    2022-12-26 16:54:37 [INFO] Query the leader location of "TEST_TBL2" finished. Remain: 1
    2022-12-26 16:54:37 [INFO] Query the leader location of "TEST_TBL1" finished. Remain: 0
    2022-12-26 16:54:37 [INFO] Calculate leader: xxx.xxx.xxx.xxx:2881 of table: "TEST_TBL1", part: 0. Remain: 1
    2022-12-26 16:54:37 [INFO] Calculate leader: xxx.xxx.xxx.xxx:2881 of table: "TEST_TBL2", part: 0. Remain: 0
    2022-12-26 16:54:37 [INFO] Waiting to refresh observer load status ......
    2022-12-26 16:54:37 [INFO] Refresh observer load status success. Table: "TEST_TBL2". Remain: 1
    2022-12-26 16:54:37 [INFO] Refresh observer load status success. Table: "TEST_TBL1". Remain: 0
    2022-12-26 16:54:37 [INFO] Refresh observer load status finished. Elapsed: 31.20 ms
    2022-12-26 16:54:37 [INFO] Use c.l.d.PhasedBackoffWaitStrategy as available cpu(s) is 64
    2022-12-26 16:54:37 [INFO] Create 4096 slots for ring buffer finished. [xxx.xxx.xxx.xxx:2881]
    2022-12-26 16:54:37 [INFO] Start 128 database writer threads finished. [xxx.xxx.xxx.xxx:2881]
    2022-12-26 16:54:37 [INFO] Start 128 csv file reader threads successed
    2022-12-26 16:54:37 [INFO] File: "/home/admin/test_migrate_data/csv_data/data/OBLOADER_USER01/TABLE/TEST_TBL1.1.0.csv" has been parsed finished
    2022-12-26 16:54:37 [INFO] File: "/home/admin/test_migrate_data/csv_data/data/OBLOADER_USER01/TABLE/TEST_TBL2.1.0.csv" has been parsed finished
    2022-12-26 16:54:38 [INFO] Wait for the all the workers to drain of published events then halt the workers
    2022-12-26 16:54:38 [INFO] Close connection count: 37 of the DataSource. Key: xxx.xxx.xxx.xxx_11532_1709388295_obloader_user01
    2022-12-26 16:54:38 [INFO] Shutdown task context finished
    2022-12-26 16:54:38 [INFO] ----------   Finished Tasks: 2       Running Tasks: 0        Progress: 100.00%                                                            ----------
    2022-12-26 16:54:38 [INFO]
    
    All Load Tasks Finished:
    
    ----------------------------------------------------------------------------------------------------------------------------
          No.#        |        Type        |             Name             |            Count             |       Status
    ----------------------------------------------------------------------------------------------------------------------------
             1          |       TABLE        |          TEST_TBL2           |            5 -> 5            |      SUCCESS
             2          |       TABLE        |          TEST_TBL1           |            5 -> 5            |      SUCCESS
    ----------------------------------------------------------------------------------------------------------------------------
    
    Total Count: 10         End Time: 2022-12-26 16:54:38
    
    
    2022-12-26 16:54:38 [INFO] Load record finished. Total Elapsed: 2.322 s
    2022-12-26 16:54:38 [INFO] System exit 0
    
  • Check the imported table data.

    obclient [OBLOADER_USER01]> SELECT * FROM TEST_TBL1;
    +------+-------------+------+
    | COL1 | COL2        | COL3 |
    +------+-------------+------+
    |    1 | China       |   86 |
    |    2 | Taiwan      |  886 |
    |    3 | Hong Kong   |  852 |
    |    4 | Macao       |  853 |
    |    5 | North Korea |  850 |
    +------+-------------+------+
    5 rows in set
    
