
OceanBase Database

SQL - V4.3.3

    © OceanBase 2026. All rights reserved.

    Full direct load

    Last Updated: 2025-11-27 06:23:27

    What is on this page
    • Considerations
    • Import data in direct load mode by using the LOAD DATA statement
      • Limitations
      • Syntax
      • Examples
    • Import data in direct load mode by using the INSERT INTO SELECT statement
      • Limitations
      • Syntax
      • Examples
    • References

    This topic describes how to use the LOAD DATA and INSERT INTO SELECT statements for full direct load.

    Considerations

    Take note of the following considerations before you use the full direct load feature:

    • OceanBase Database supports direct load of large object (LOB) data in V4.3.0 and later.

    • During direct load, the table involved is locked and no data can be written into the table. In other words, the table is read-only during the whole direct load process.

    • Full direct load applies to scenarios where data is imported into a large table for the first time, 10 GB to 1 TB of data is to be migrated, or CPU and memory resources are limited. Direct load has a short execution path, which reduces CPU overhead.

    • The LOAD DATA statement can appear in a multi-statement transaction. However, because LOAD DATA in direct load mode is a DDL operation, executing it automatically commits the preceding transaction.

    • When you use the INSERT INTO SELECT statement to import data, only Parallel Data Manipulation Language (PDML) data can be imported in direct load mode.

    Import data in direct load mode by using the LOAD DATA statement

    You can use the LOAD DATA statement in combination with the APPEND or DIRECT() hint to import data in direct load mode.

    Notice

    OceanBase Database uses parallel processing to increase the data import speed of the LOAD DATA statement. In parallel processing, data is split into multiple subtasks for parallel execution. Each subtask is considered an independent transaction. The execution sequence of the subtasks is not fixed. For a table without a primary key, data may be written in a sequence different from that in the original file.
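
The reordering behavior described in this notice can be sketched with a toy model (illustrative Python only, not OceanBase internals): the input is split into chunks, each chunk is loaded by an independent subtask, and subtasks may commit in any order.

```python
# Toy model of parallel direct load (illustrative only, not OceanBase code).
# Input rows are split into chunks; each chunk is loaded by an independent
# subtask, and subtasks may commit in any order, so a table without a
# primary key may not preserve the original file order.

def parallel_load(rows, parallelism=4):
    # Round-robin split into one chunk per subtask.
    chunks = [rows[i::parallelism] for i in range(parallelism)]
    # One possible completion order: subtasks happen to finish in reverse.
    chunks.reverse()
    return [row for chunk in chunks for row in chunk]

rows = list(range(12))
loaded = parallel_load(rows)
assert sorted(loaded) == rows   # the same rows end up in the table ...
assert loaded != rows           # ... but not necessarily in file order
```

For a table with a primary key, queries ordered by that key still return the expected order; only the physical insertion order differs.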

    Limitations

    • You cannot execute two statements to concurrently write data to the same table. This is because the table is locked during data import and only read operations are supported.
    • Full direct load is not supported for tables with triggers.
    • Full direct load is not supported for tables with generated columns. Note that some indexes create hidden generated columns, for example, KEY idx_c2 (c2(16)) GLOBAL.
    • Full direct load is not supported if the data in a single row exceeds 2 MB in size.
    • Liboblog and flashback queries are not supported.

    Syntax

    LOAD DATA /*+ [APPEND | DIRECT(need_sort,max_error,'full')] parallel(N) */ [REMOTE_OSS | LOCAL] INFILE 'file_name' INTO TABLE table_name [COMPRESSION]...
    

    For more information about the syntax of the LOAD DATA statement, see LOAD DATA.

    The following list describes the parameters in the syntax.

    • APPEND | DIRECT(): specifies to enable direct load.
      • The APPEND hint is equivalent to DIRECT(true, 0) and can collect statistics online, like the GATHER_OPTIMIZER_STATISTICS hint does.
      • Options in DIRECT():
        • need_sort: specifies whether to sort the written data. The value is of the Boolean type. Valid values:
          • true: sorting is needed.
          • false: sorting is not needed.
        • max_error: the maximum number of erroneous rows allowed. The value is of the INT type. If this number is exceeded, the import task fails.
        • full: the full direct load mode. This option is optional, and the value must be enclosed in single quotation marks ('full').
    • parallel(N): required. The degree of parallelism (DOP) for loading data. The default value of N is 4.
    • REMOTE_OSS | LOCAL: optional. Valid values:
      • REMOTE_OSS: specifies to read data from Alibaba Cloud Object Storage Service (OSS).

        Notice: if REMOTE_OSS is specified, the value of file_name must be an OSS address.

      • LOCAL: specifies to read data from the local file system on the client. If you do not specify LOCAL, data is read from the file system on an OBServer node.
    • file_name: the path and name of the file to import. You can specify a value in either of the following formats:
      • To import a file on an OBServer node or client, specify /$PATH/$FILENAME.
      • To import a file from an OSS file system, specify oss://$PATH/$FILENAME/?host=$HOST&access_id=$ACCESS_ID&access_key=$ACCESSKEY, where:
        • $PATH: the file path in the bucket, that is, the directory where the file is located.
        • $FILENAME: the name of the file to access.
        • $HOST: the host name or CDN domain name of the OSS service, that is, the endpoint of the OSS service to access.
        • $ACCESS_ID: the AccessKey ID for accessing the OSS service.
        • $ACCESSKEY: the AccessKey secret for accessing the OSS service.

      Note: when you import a file from OSS, make sure that the following conditions are met:
      • You have read permissions on the OSS bucket and file. Usually, you grant the required permissions to the AccessKey pair in the OSS console or through OSS APIs.
      • The database server can connect to the endpoint specified by $HOST. If you access the OSS service through a CDN domain name, the domain name must be configured correctly and the network connection must be normal.
    • table_name: the name of the table into which data is imported. You can import data into all columns or only specified columns of the table.
    • COMPRESSION: the compression format of the input file. You can specify the format explicitly or let OceanBase Database identify it from the file name extension. Valid values:
      • AUTO: OceanBase Database identifies the compression format from the file name extension:
        • .gz: the GZIP format.
        • .deflate: the DEFLATE format.
        • .zst / .zstd: the ZSTD format.
      • NONE: the input file is not compressed.
      • GZIP: the input file is a GZIP compressed file.
      • DEFLATE: the input file is a raw DEFLATE stream, that is, GZIP compression without metadata.
      • ZSTD: the input file is a ZSTD compressed file.
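
Putting these parameters together, the statement can be assembled mechanically. The following Python helper is a sketch for illustration only (build_load_data is a hypothetical name, not part of any OceanBase client library); it produces a statement in the shape shown in the Syntax section.

```python
# Hypothetical helper (not an OceanBase API) that assembles a LOAD DATA
# statement with the direct-load hint options described above:
# need_sort, max_error, the optional 'full' mode, and parallel(N).

def build_load_data(table, file_name, need_sort=True, max_error=0,
                    full=True, parallel=4, local=False, terminated_by=','):
    mode = ",'full'" if full else ""
    hint = (f"/*+ direct({str(need_sort).lower()},{max_error}{mode}) "
            f"parallel({parallel}) */")
    # LOCAL reads from the client file system; plain INFILE reads from
    # the OBServer node's file system.
    infile = "LOCAL INFILE" if local else "INFILE"
    return (f"LOAD DATA {hint} {infile} '{file_name}' INTO TABLE {table} "
            f"FIELDS TERMINATED BY '{terminated_by}'")

sql = build_load_data("tbl2", "/home/admin/tbl1.csv",
                      max_error=1024, parallel=16)
print(sql)
# LOAD DATA /*+ direct(true,1024,'full') parallel(16) */ INFILE
# '/home/admin/tbl1.csv' INTO TABLE tbl2 FIELDS TERMINATED BY ','
```

The generated string matches the form used in the examples below; in practice you would submit it through your client or driver.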

    Examples

    Note

    The following example shows how to import data from a file on a server. In OceanBase Database, you can also use the LOCAL INFILE clause in the LOAD DATA statement to import data from a local file in direct load mode. For more information about how to use LOAD DATA LOCAL INFILE, see Import data by using the LOAD DATA statement.

    MySQL mode
    1. Log in to the server where the target OBServer node resides and create test data in the /home/admin directory.

      Note

      In OceanBase Database, the LOAD DATA statement can import data only from a local input file on an OBServer node. Therefore, you must copy the file to import to an OBServer node before the import.

      1. Log in to the server where the OBServer node resides.

        [xxx@xxx /home/admin]# ssh admin@10.10.10.1
        
      2. Create a test data file named tbl1.csv.

        [admin@xxx /home/admin]# vi tbl1.csv
        1,11
        2,22
        3,33
        
    2. Specify the path of the file to import.

      Set the system variable secure_file_priv to specify the path that can be accessed for file import or export.

      Notice

      For security reasons, you can only connect to the database through a local socket to execute the SQL statement for setting secure_file_priv. For more information, see secure_file_priv.

      1. Log in to the server where the target OBServer node resides.

        [xxx@xxx /home/admin]# ssh admin@10.10.10.1
        
      2. Connect to the mysql001 tenant through a local Unix socket.

        obclient -S /home/admin/oceanbase/run/sql.sock -uroot@mysql001 -p******
        
      3. Specify the path as /home/admin.

        obclient [(none)]> SET GLOBAL secure_file_priv = "/home/admin";
        Query OK, 0 rows affected
        
    3. After you reconnect to the database, execute the LOAD DATA statement with the direct load hint to import data.

      1. Create a table named tbl2.

        obclient [test]> CREATE TABLE tbl2(col1 INT PRIMARY KEY,col2 INT);
        Query OK, 0 rows affected
        
      2. Query whether the tbl2 table contains data. At this time, the query result is an empty set.

        obclient [test]> SELECT * FROM tbl2;
        Empty set
        
      3. Import the data in the tbl1.csv file to the tbl2 table in direct load mode.

        • Import data into all columns of the tbl2 table.

          obclient [test]> LOAD DATA /*+ direct(true,1024,'full') parallel(16) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl2 FIELDS TERMINATED BY ',';
          Query OK, 3 rows affected
          Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
          
        • Import data into specified columns of the tbl2 table. For example, import data into the col1 and col2 columns.

          obclient [test]> LOAD DATA /*+ direct(true,1024,'full') parallel(16) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl2 FIELDS TERMINATED BY ','(col1,col2);
          Query OK, 3 rows affected
          Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
          
      4. Verify whether data has been imported to the tbl2 table.

        obclient [test]>  SELECT * FROM tbl2;
        

        The query result is as follows:

         +------+------+
         | col1 | col2 |
         +------+------+
         |    1 |   11 |
         |    2 |   22 |
         |    3 |   33 |
         +------+------+
         3 rows in set
        

        The query result shows that the data has been imported to the tbl2 table.

    Oracle mode

    1. Log in to the server where the target OBServer node resides and create test data in the /home/admin directory.

      Note

      In OceanBase Database, the LOAD DATA statement can import data only from a local input file on an OBServer node. Therefore, you must copy the file to import to an OBServer node before the import.

      1. Log in to the server where the OBServer node resides.

        [xxx@xxx /home/admin]# ssh admin@10.10.10.1
        
      2. Create a test data file named tbl1.csv.

        [admin@xxx /home/admin]# vi tbl1.csv
        1,11
        2,22
        3,33
        
    2. Specify the path of the file to import.

      Set the system variable secure_file_priv to specify the path that can be accessed for file import or export.

      Notice

      For security reasons, you can only connect to the database through a local socket to execute the SQL statement for setting secure_file_priv. For more information, see secure_file_priv.

      1. Log in to the server where the target OBServer node resides.

        [xxx@xxx /home/admin]# ssh admin@10.10.10.1
        
      2. Connect to the oracle001 tenant through a local Unix socket.

        obclient -S /home/admin/oceanbase/run/sql.sock -usys@oracle001 -p******
        
      3. Specify the path as /home/admin.

        obclient [(none)]> SET GLOBAL secure_file_priv = "/home/admin";
        Query OK, 0 rows affected
        
    3. After you reconnect to the database, execute the LOAD DATA statement with the direct load hint to import data.

      1. Create a table named tbl2.

        obclient [test]> CREATE TABLE tbl2(col1 INT PRIMARY KEY,col2 INT);
        Query OK, 0 rows affected
        
      2. Query whether the tbl2 table contains data. At this time, the query result is an empty set.

        obclient [test]> SELECT * FROM tbl2;
        Empty set
        
      3. Import the data in the tbl1.csv file to the tbl2 table in direct load mode.

        • Import data into all columns of the tbl2 table.

          obclient [test]> LOAD DATA /*+ direct(true,1024,'full') parallel(16) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl2 FIELDS TERMINATED BY ',';
          Query OK, 3 rows affected
          Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
          
        • Import data into specified columns of the tbl2 table. For example, import data into the col1 and col2 columns.

          obclient [test]> LOAD DATA /*+ direct(true,1024,'full') parallel(16) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl2 FIELDS TERMINATED BY ','(col1,col2);
          Query OK, 3 rows affected
          Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
          
      4. Verify whether data has been imported to the tbl2 table.

        obclient [test]>  SELECT * FROM tbl2;
        

        The query result is as follows:

         +------+------+
         | col1 | col2 |
         +------+------+
         |    1 |   11 |
         |    2 |   22 |
         |    3 |   33 |
         +------+------+
         3 rows in set
        

        The query result shows that the data has been imported to the tbl2 table.

    Import data in direct load mode by using the INSERT INTO SELECT statement

    You can use the INSERT INTO SELECT statement in combination with the APPEND or DIRECT() hint and the enable_parallel_dml hint to import data in direct load mode.

    Limitations

    • Only PDML data can be imported in direct load mode. For more information about PDML, see Parallel DML.
    • You cannot execute two statements to concurrently write data to the same table. This is because the table is locked during data import and only read operations are supported.
    • Direct load is a DDL operation and cannot be executed in a multi-statement transaction:
      • Direct load cannot be performed inside a transaction opened with BEGIN.
      • autocommit must be set to 1.
    • Full direct load is not supported for tables with triggers.
    • Full direct load is not supported for tables with generated columns. Note that some indexes create hidden generated columns, for example, KEY idx_c2 (c2(16)) GLOBAL.
    • Liboblog and flashback queries are not supported.

    Syntax

    INSERT /*+ [APPEND | DIRECT(need_sort,max_error,'full')] enable_parallel_dml parallel(N) */ INTO table_name select_statement
    

    For more information about the syntax of the INSERT INTO statement, see INSERT (MySQL mode) and INSERT (Oracle mode).

    The following list describes the parameters in the syntax.

    • APPEND | DIRECT(): specifies to enable direct load.
      • The APPEND hint is equivalent to DIRECT(true, 0) and can collect statistics online, like the GATHER_OPTIMIZER_STATISTICS hint does.
      • Options in DIRECT():
        • need_sort: specifies whether to sort the written data. The value is of the Boolean type. Valid values:
          • true: sorting is needed.
          • false: sorting is not needed.
        • max_error: the maximum number of erroneous rows allowed. The value is of the INT type. If this number is exceeded, the import task fails.
        • full: the full direct load mode. This option is optional, and the value must be enclosed in single quotation marks ('full').
    • enable_parallel_dml: specifies to enable PDML.

      Note: generally, the enable_parallel_dml and parallel hints must be used together to enable PDML. However, if a table-level DOP is specified in the schema of the destination table, you only need to specify the enable_parallel_dml hint.

    • parallel(N): required. The DOP for loading data. The value is an integer greater than 1.
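
As with LOAD DATA, the hint string can be assembled mechanically. This Python helper is a sketch for illustration only (build_direct_insert is a hypothetical name, not an OceanBase API); it mirrors the parameter list above.

```python
# Hypothetical helper (not an OceanBase API) that assembles an
# INSERT ... SELECT statement with the DIRECT, enable_parallel_dml,
# and parallel(N) hints described above.

def build_direct_insert(table, select_stmt, need_sort=True,
                        max_error=0, parallel=2):
    hint = (f"/*+ direct({str(need_sort).lower()},{max_error},'full') "
            f"enable_parallel_dml parallel({parallel}) */")
    return f"INSERT {hint} INTO {table} {select_stmt}"

sql = build_direct_insert("tbl1", "SELECT t2.col1, t2.col3 FROM tbl2 t2",
                          parallel=16)
print(sql)
# INSERT /*+ direct(true,0,'full') enable_parallel_dml parallel(16) */
# INTO tbl1 SELECT t2.col1, t2.col3 FROM tbl2 t2
```

Remember that this statement only runs in direct load mode when PDML is in effect and autocommit is set to 1, as noted in the Limitations above.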

    Examples

    MySQL mode
    Oracle mode

    Import part of the data in the tbl2 table into the tbl1 table in direct load mode.

    1. Query whether the tbl1 table contains data. At this time, the query result is an empty set.

      obclient [test]> SELECT * FROM tbl1;
      Empty set
      
    2. Query whether the tbl2 table contains data.

      obclient [test]> SELECT * FROM tbl2;
      

      The query result shows that the tbl2 table contains data.

      +------+------+------+
      | col1 | col2 | col3 |
      +------+------+------+
      |    1 | a1   |   11 |
      |    2 | a2   |   22 |
      |    3 | a3   |   33 |
      +------+------+------+
      3 rows in set
      
    3. Execute the INSERT statement with the direct load hint to import data.

      obclient [test]> INSERT /*+ DIRECT(true, 0, 'full') enable_parallel_dml parallel(16) */ INTO tbl1 SELECT t2.col1,t2.col3 FROM tbl2 t2;
      Query OK, 3 rows affected
      Records: 3  Duplicates: 0  Warnings: 0
      
    4. Verify whether data has been imported to the tbl1 table.

      obclient [test]> SELECT * FROM tbl1;
      

      The query result is as follows:

      +------+------+
      | col1 | col2 |
      +------+------+
      |    1 |   11 |
      |    2 |   22 |
      |    3 |   33 |
      +------+------+
      3 rows in set
      

      The query result shows that the data has been imported to the tbl1 table.

    5. (Optional) In the Used Hint section of the return result of the EXPLAIN EXTENDED statement, check whether the data is written in direct load mode.

      obclient [test]> EXPLAIN EXTENDED INSERT /*+ direct(true, 0, 'full') enable_parallel_dml parallel(16) */ INTO tbl1 SELECT t2.col1,t2.col3 FROM tbl2 t2;
      

      The return result is as follows:

      +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | Query Plan                                                                                                                                                   |
      +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | ==============================================================================                                                                               |
      | |ID|OPERATOR                           |NAME           |EST.ROWS|EST.TIME(us)|                                                                               |
      | ------------------------------------------------------------------------------                                                                               |
      | |0 |PX COORDINATOR                     |               |3       |27          |                                                                               |
      | |1 |└─EXCHANGE OUT DISTR               |:EX10001       |3       |27          |                                                                               |
      | |2 |  └─INSERT                         |               |3       |26          |                                                                               |
      | |3 |    └─EXCHANGE IN DISTR            |               |3       |1           |                                                                               |
      | |4 |      └─EXCHANGE OUT DISTR (RANDOM)|:EX10000       |3       |1           |                                                                               |
      | |5 |        └─SUBPLAN SCAN             |ANONYMOUS_VIEW1|3       |1           |                                                                               |
      | |6 |          └─PX BLOCK ITERATOR      |               |3       |1           |                                                                               |
      | |7 |            └─TABLE FULL SCAN      |t2             |3       |1           |                                                                               |
      | ==============================================================================                                                                               |
      | Outputs & filters:                                                                                                                                           |
      | -------------------------------------                                                                                                                        |
      |   0 - output(nil), filter(nil), rowset=16                                                                                                                    |
      |   1 - output(nil), filter(nil), rowset=16                                                                                                                    |
      |       dop=16                                                                                                                                                 |
      |   2 - output(nil), filter(nil)                                                                                                                               |
      |       columns([{tbl1: ({tbl1: (tbl1.__pk_increment(0x7efa63627790), tbl1.col1(0x7efa63611980), tbl1.col3(0x7efa63611dc0))})}]), partitions(p0),              |
      |       column_values([T_HIDDEN_PK(0x7efa63627bd0)], [column_conv(INT,PS: (11,0),NULL,ANONYMOUS_VIEW1.col1(0x7efa63626f10))(0x7efa63627ff0)], [column_conv(INT, |
      |       PS: (11,0),NULL,ANONYMOUS_VIEW1.col3(0x7efa63627350))(0x7efa6362ff20)])                                                                                 |
      |   3 - output([T_HIDDEN_PK(0x7efa63627bd0)], [ANONYMOUS_VIEW1.col1(0x7efa63626f10)], [ANONYMOUS_VIEW1.col3(0x7efa63627350)]), filter(nil), rowset=16          |
      |   4 - output([T_HIDDEN_PK(0x7efa63627bd0)], [ANONYMOUS_VIEW1.col1(0x7efa63626f10)], [ANONYMOUS_VIEW1.col3(0x7efa63627350)]), filter(nil), rowset=16          |
      |       dop=16                                                                                                                                                 |
      |   5 - output([ANONYMOUS_VIEW1.col1(0x7efa63626f10)], [ANONYMOUS_VIEW1.col3(0x7efa63627350)]), filter(nil), rowset=16                                         |
      |       access([ANONYMOUS_VIEW1.col1(0x7efa63626f10)], [ANONYMOUS_VIEW1.col3(0x7efa63627350)])                                                                 |
      |   6 - output([t2.col1(0x7efa63625ed0)], [t2.col3(0x7efa63626780)]), filter(nil), rowset=16                                                                   |
      |   7 - output([t2.col1(0x7efa63625ed0)], [t2.col3(0x7efa63626780)]), filter(nil), rowset=16                                                                   |
      |       access([t2.col1(0x7efa63625ed0)], [t2.col3(0x7efa63626780)]), partitions(p0)                                                                           |
      |       is_index_back=false, is_global_index=false,                                                                                                            |
      |       range_key([t2.__pk_increment(0x7efa63656410)]), range(MIN ; MAX)always true                                                                            |
      | Used Hint:                                                                                                                                                   |
      | -------------------------------------                                                                                                                        |
      |   /*+                                                                                                                                                        |
      |                                                                                                                                                              |
      |       USE_PLAN_CACHE( NONE )                                                                                                                                 |
      |       PARALLEL(16)                                                                                                                                           |
      |       ENABLE_PARALLEL_DML                                                                                                                                    |
      |       DIRECT(TRUE, 0, 'FULL')                                                                                                                                |
      |   */                                                                                                                                                         |
      | Qb name trace:                                                                                                                                               |
      | -------------------------------------                                                                                                                        |
      |   stmt_id:0, stmt_type:T_EXPLAIN                                                                                                                             |
      |   stmt_id:1, INS$1                                                                                                                                           |
      |   stmt_id:2, SEL$1                                                                                                                                           |
      | Outline Data:                                                                                                                                                |
      | -------------------------------------                                                                                                                        |
      |   /*+                                                                                                                                                        |
      |       BEGIN_OUTLINE_DATA                                                                                                                                     |
      |       PARALLEL(@"SEL$1" "t2"@"SEL$1" 16)                                                                                                                     |
      |       FULL(@"SEL$1" "t2"@"SEL$1")                                                                                                                            |
      |       USE_PLAN_CACHE( NONE )                                                                                                                                 |
      |       PARALLEL(16)                                                                                                                                           |
      |       ENABLE_PARALLEL_DML                                                                                                                                    |
      |       OPTIMIZER_FEATURES_ENABLE('4.3.3.0')                                                                                                                   |
      |       DIRECT(TRUE, 0, 'FULL')                                                                                                                                |
      |       END_OUTLINE_DATA                                                                                                                                       |
      |   */                                                                                                                                                         |
      | Optimization Info:                                                                                                                                           |
      | -------------------------------------                                                                                                                        |
      |   t2:                                                                                                                                                        |
      |       table_rows:3                                                                                                                                           |
      |       physical_range_rows:3                                                                                                                                  |
      |       logical_range_rows:3                                                                                                                                   |
      |       index_back_rows:0                                                                                                                                      |
      |       output_rows:3                                                                                                                                          |
      |       table_dop:16                                                                                                                                           |
      |       dop_method:Global DOP                                                                                                                                  |
      |       avaiable_index_name: [tbl2]                                                                                                                             |
      |       stats info: [version=0, is_locked=0, is_expired=0]                                                                                                      |
      |       dynamic sampling level:0                                                                                                                               |
      |       estimation method: [DEFAULT, STORAGE]                                                                                                                   |
      |   Plan Type:                                                                                                                                                 |
      |       DISTRIBUTED                                                                                                                                            |
      |   Note:                                                                                                                                                      |
      |       Degree of Parallelism is 16 because of hint                                                                                                            |
      |       Direct-mode is enabled in insert into select                                                                                                           |
      +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
      77 rows in set (0.009 sec)
      

    Import some of the data from the tbl4 table into the tbl3 table in direct load mode.

    1. Query whether the tbl3 table contains data. At this point, the query returns an empty set.

      obclient [test]> SELECT * FROM tbl3;
      Empty set
      
    2. Query whether the tbl4 table contains data.

      obclient [test]> SELECT * FROM tbl4;
      

      The query result is as follows:

      +------+------+------+
      | COL1 | COL2 | COL3 |
      +------+------+------+
      |    1 | a1   |   11 |
      |    2 | a2   |   22 |
      |    3 | a3   |   33 |
      +------+------+------+
      3 rows in set (0.000 sec)

      The query result shows that the tbl4 table contains data.
      
    3. Use the INSERT /*+ DIRECT */ INTO ... SELECT statement to import the data.

      obclient [test]> INSERT /*+ direct(true, 0, 'full') enable_parallel_dml parallel(16) */ INTO tbl3 SELECT t2.col1, t2.col3 FROM tbl4 t2 WHERE ROWNUM <= 10000;
      Query OK, 3 rows affected
      Records: 3  Duplicates: 0  Warnings: 0
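The statement above combines three hints: `direct(need_sort, max_error_rows, 'load_method')`, `enable_parallel_dml`, and `parallel(dop)`. As a rough sketch (the helper below is hypothetical, not part of any OceanBase client library), the statement can be assembled programmatically, which keeps the hint parameters in one place:

```python
def direct_load_insert(target, select_sql, dop=16, max_error_rows=0,
                       load_method="full"):
    """Compose an INSERT ... SELECT statement with direct-load hints.

    Parameter meanings assumed from this example's DIRECT(true, 0, 'full')
    form: the first argument requests sorting before writing, the second is
    the tolerated number of error rows, and the third selects the load method
    ('full' for full direct load).
    """
    hint = (
        f"/*+ direct(true, {max_error_rows}, '{load_method}') "
        f"enable_parallel_dml parallel({dop}) */"
    )
    return f"INSERT {hint} INTO {target} {select_sql}"

# Reproduces the statement used in step 3.
stmt = direct_load_insert(
    "tbl3",
    "SELECT t2.col1, t2.col3 FROM tbl4 t2 WHERE ROWNUM <= 10000",
)
```

Running the composed `stmt` through a MySQL-compatible driver against an OceanBase tenant behaves the same as typing the statement in obclient.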
      
    4. Verify whether data has been imported to the tbl3 table.

      obclient [test]> SELECT * FROM tbl3;
      

      The query result is as follows:

      +------+------+
      | col1 | col3 |
      +------+------+
      |    1 |   11 |
      |    2 |   22 |
      |    3 |   33 |
      +------+------+
      3 rows in set
      

      The query result shows that the data has been imported to the tbl3 table.

    5. (Optional) Check the Note section in the output of the EXPLAIN EXTENDED statement to confirm that the data is written in direct load mode.

      obclient [test]> EXPLAIN EXTENDED INSERT /*+ direct(true, 0, 'full') enable_parallel_dml parallel(16) */ INTO tbl3 SELECT t2.col1,t2.col3 FROM tbl4 t2;
      

      The return result is as follows:

      +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | Query Plan                                                                                                                                                                                                                 |
      +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      | ===================================================================================                                                                                                                                        |
      | |ID|OPERATOR                                |NAME           |EST.ROWS|EST.TIME(us)|                                                                                                                                        |
      | -----------------------------------------------------------------------------------                                                                                                                                        |
      | |0 |OPTIMIZER STATS MERGE                   |               |3       |29          |                                                                                                                                        |
      | |1 |└─PX COORDINATOR                        |               |3       |29          |                                                                                                                                        |
      | |2 |  └─EXCHANGE OUT DISTR                  |:EX10001       |3       |27          |                                                                                                                                        |
      | |3 |    └─INSERT                            |               |3       |27          |                                                                                                                                        |
      | |4 |      └─OPTIMIZER STATS GATHER          |               |3       |1           |                                                                                                                                        |
      | |5 |        └─EXCHANGE IN DISTR             |               |3       |1           |                                                                                                                                        |
      | |6 |          └─EXCHANGE OUT DISTR (RANDOM) |:EX10000       |3       |1           |                                                                                                                                        |
      | |7 |            └─SUBPLAN SCAN              |ANONYMOUS_VIEW1|3       |1           |                                                                                                                                        |
      | |8 |              └─PX BLOCK ITERATOR       |               |3       |1           |                                                                                                                                        |
      | |9 |                └─COLUMN TABLE FULL SCAN|T2             |3       |1           |                                                                                                                                        |
      | ===================================================================================                                                                                                                                        |
      | Outputs & filters:                                                                                                                                                                                                         |
      | -------------------------------------                                                                                                                                                                                      |
      |   0 - output(nil), filter(nil), rowset=16                                                                                                                                                                                  |
      |   1 - output([column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL1(0x7ef8f6027720))(0x7ef8f60287f0)], [column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60))(0x7ef8f6030710)]), filter(nil), rowset=16 |
      |   2 - output([column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL1(0x7ef8f6027720))(0x7ef8f60287f0)], [column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60))(0x7ef8f6030710)]), filter(nil), rowset=16 |
      |       dop=16                                                                                                                                                                                                               |
      |   3 - output([column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL1(0x7ef8f6027720))(0x7ef8f60287f0)], [column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60))(0x7ef8f6030710)]), filter(nil)            |
      |       columns([{TBL3: ({TBL3: (TBL3.__pk_increment(0x7ef8f6027fa0), TBL3.COL1(0x7ef8f6011d50), TBL3.COL3(0x7ef8f6012190))})}]), partitions(p0),                                                                            |
      |       column_values([T_HIDDEN_PK(0x7ef8f60283e0)], [column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL1(0x7ef8f6027720))(0x7ef8f60287f0)], [column_conv(NUMBER,                                                         |
      |       PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60))(0x7ef8f6030710)])                                                                                                                                               |
      |   4 - output([column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL1(0x7ef8f6027720))(0x7ef8f60287f0)], [column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60))(0x7ef8f6030710)],                         |
      |        [T_HIDDEN_PK(0x7ef8f60283e0)]), filter(nil), rowset=16                                                                                                                                                              |
      |   5 - output([T_HIDDEN_PK(0x7ef8f60283e0)], [ANONYMOUS_VIEW1.COL1(0x7ef8f6027720)], [ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60)]), filter(nil), rowset=16                                                                        |
      |   6 - output([T_HIDDEN_PK(0x7ef8f60283e0)], [ANONYMOUS_VIEW1.COL1(0x7ef8f6027720)], [ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60)]), filter(nil), rowset=16                                                                        |
      |       dop=16                                                                                                                                                                                                               |
      |   7 - output([ANONYMOUS_VIEW1.COL1(0x7ef8f6027720)], [ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60)]), filter(nil), rowset=16                                                                                                       |
      |       access([ANONYMOUS_VIEW1.COL1(0x7ef8f6027720)], [ANONYMOUS_VIEW1.COL3(0x7ef8f6027b60)])                                                                                                                               |
      |   8 - output([T2.COL1(0x7ef8f60264c0)], [T2.COL3(0x7ef8f6026f90)]), filter(nil), rowset=16                                                                                                                                 |
      |   9 - output([T2.COL1(0x7ef8f60264c0)], [T2.COL3(0x7ef8f6026f90)]), filter(nil), rowset=16                                                                                                                                 |
      |       access([T2.COL1(0x7ef8f60264c0)], [T2.COL3(0x7ef8f6026f90)]), partitions(p0)                                                                                                                                         |
      |       is_index_back=false, is_global_index=false,                                                                                                                                                                          |
      |       range_key([T2.__pk_increment(0x7ef8f60584a0)]), range(MIN ; MAX)always true                                                                                                                                          |
      | Used Hint:                                                                                                                                                                                                                 |
      | -------------------------------------                                                                                                                                                                                      |
      |   /*+                                                                                                                                                                                                                      |
      |                                                                                                                                                                                                                            |
      |       USE_PLAN_CACHE( NONE )                                                                                                                                                                                               |
      |       PARALLEL(16)                                                                                                                                                                                                         |
      |       ENABLE_PARALLEL_DML                                                                                                                                                                                                  |
      |       DIRECT(TRUE, 0, 'FULL')                                                                                                                                                                                              |
      |   */                                                                                                                                                                                                                       |
      | Qb name trace:                                                                                                                                                                                                             |
      | -------------------------------------                                                                                                                                                                                      |
      |   stmt_id:0, stmt_type:T_EXPLAIN                                                                                                                                                                                           |
      |   stmt_id:1, INS$1                                                                                                                                                                                                         |
      |   stmt_id:2, SEL$1                                                                                                                                                                                                         |
      | Outline Data:                                                                                                                                                                                                              |
      | -------------------------------------                                                                                                                                                                                      |
      |   /*+                                                                                                                                                                                                                      |
      |       BEGIN_OUTLINE_DATA                                                                                                                                                                                                   |
      |       PARALLEL(@"SEL$1" "T2"@"SEL$1" 16)                                                                                                                                                                                   |
      |       FULL(@"SEL$1" "T2"@"SEL$1")                                                                                                                                                                                          |
      |       USE_COLUMN_TABLE(@"SEL$1" "T2"@"SEL$1")                                                                                                                                                                              |
      |       USE_PLAN_CACHE( NONE )                                                                                                                                                                                               |
      |       PARALLEL(16)                                                                                                                                                                                                         |
      |       ENABLE_PARALLEL_DML                                                                                                                                                                                                  |
      |       OPTIMIZER_FEATURES_ENABLE('4.3.3.0')                                                                                                                                                                                 |
      |       DIRECT(TRUE, 0, 'FULL')                                                                                                                                                                                              |
      |       END_OUTLINE_DATA                                                                                                                                                                                                     |
      |   */                                                                                                                                                                                                                       |
      | Optimization Info:                                                                                                                                                                                                         |
      | -------------------------------------                                                                                                                                                                                      |
      |   T2:                                                                                                                                                                                                                      |
      |       table_rows:3                                                                                                                                                                                                         |
      |       physical_range_rows:3                                                                                                                                                                                                |
      |       logical_range_rows:3                                                                                                                                                                                                 |
      |       index_back_rows:0                                                                                                                                                                                                    |
      |       output_rows:3                                                                                                                                                                                                        |
      |       table_dop:16                                                                                                                                                                                                         |
      |       dop_method:Global DOP                                                                                                                                                                                                |
      |       avaiable_index_name: [TBL4]                                                                                                                                                                                           |
      |       stats info: [version=0, is_locked=0, is_expired=0]                                                                                                                                                                    |
      |       dynamic sampling level:0                                                                                                                                                                                             |
      |       estimation method: [DEFAULT, STORAGE]                                                                                                                                                                                 |
      |   Plan Type:                                                                                                                                                                                                               |
      |       DISTRIBUTED                                                                                                                                                                                                          |
      |   Note:                                                                                                                                                                                                                    |
      |       Degree of Parallelism is 16 because of hint                                                                                                          |
      |       Direct-mode is enabled in insert into select                                                                                                         |
      +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
      82 rows in set (0.006 sec)
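When scripting this check, the Note section of the plan text can be scanned programmatically. A minimal sketch (the helper below is hypothetical, not part of any OceanBase tooling; the plan text is whatever your client returns for EXPLAIN EXTENDED):

```python
def direct_load_enabled(plan_text: str) -> bool:
    """Return True if the plan's Note section reports direct load."""
    in_note = False
    for raw in plan_text.splitlines():
        line = raw.strip(" |")  # drop the table borders obclient prints
        if line.startswith("Note:"):
            in_note = True
        elif in_note and "Direct-mode is enabled" in line:
            return True
    return False

sample = """\
|   Note:                                            |
|       Degree of Parallelism is 16 because of hint  |
|       Direct-mode is enabled in insert into select |
"""
print(direct_load_enabled(sample))  # → True
```

A plan whose Note section mentions only the degree of parallelism (no direct-mode line) makes the function return False, signaling that the statement would fall back to a normal parallel insert.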
      

    References

    • Migrate data between tables
    • Introduction to SQL execution plans
    • Overview
    • Drop a table
