You can use incremental direct load to import incremental data into a table that already contains data. You can also import incremental data by using full direct load, but the import performance is poor because the full direct load process rewrites all of the original data. The incremental direct load process handles only the incremental data, which ensures good import performance.
This topic describes how to use the LOAD DATA and INSERT INTO SELECT statements for incremental direct load.
Considerations
Take note of the following considerations before you use the incremental direct load feature:
- Data imported in incremental direct load mode triggers minor compactions. We recommend that you do not use incremental direct load for a small amount of data that can be imported within minutes.
- The `LOAD DATA` statement can be executed in a multi-row transaction. However, because the `LOAD DATA` statement is a DDL operation, the previous transaction is automatically committed when the statement is executed.
- When you use the `INSERT INTO SELECT` statement to import data, only Parallel Data Manipulation Language (PDML) data can be imported in direct load mode.
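The implicit commit described above means that statements executed before a direct load in the same session can no longer be rolled back. The following sketch illustrates this behavior; the table `t` and file path are placeholders:

```sql
-- Assume a table t(c1 INT PRIMARY KEY) already exists.
BEGIN;
INSERT INTO t VALUES (1);
-- LOAD DATA is executed as a DDL operation, so the INSERT above is
-- automatically committed at this point.
LOAD DATA /*+ DIRECT(true, 0, 'inc') PARALLEL(2) */ INFILE '/home/admin/t.csv' INTO TABLE t;
ROLLBACK;  -- Has no effect on the row inserted before LOAD DATA.
```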
Import data in direct load mode by using the LOAD DATA statement
You can use the LOAD DATA statement in combination with the DIRECT() hint to import data in incremental direct load mode.
Limitations
- You cannot execute two statements that concurrently write data to the same table. This is because the table is locked during the data import, and only read operations are supported.
- Incremental direct load is not supported for tables with triggers.
- Incremental direct load is not supported for tables with generated columns. Note that some indexes create hidden generated columns, for example, `KEY idx_c2(c2(16)) GLOBAL`.
- Incremental direct load is not supported if the data in a single row exceeds 2 MB in size.
- Liboblog and flashback queries are not supported.
- Incremental direct load is not supported for tables with indexes (excluding primary keys).
- Incremental direct load is not supported for tables with foreign keys.
Syntax
LOAD DATA /*+ [DIRECT(need_sort,max_error,{'inc'|'inc_replace'})] parallel(N) */ [REMOTE_OSS | LOCAL] INFILE 'file_name' INTO TABLE table_name [COMPRESSION]...
For more information about the syntax of the LOAD DATA statement, see LOAD DATA.
The following table describes the parameters in the syntax.
| Parameter | Description |
|---|---|
| DIRECT() | Specifies to enable direct load. Options in `DIRECT()`: `need_sort` specifies whether the imported data needs to be sorted; `max_error` specifies the maximum number of erroneous rows allowed; `'inc'` or `'inc_replace'` specifies the incremental load mode. **Notice**: When the value of the third option is `'inc_replace'`, the `REPLACE` and `IGNORE` keywords cannot be specified in the `LOAD DATA` statement. |
| parallel(N) | Optional. The degree of parallelism (DOP) for loading data. The default value of N is 4. |
| REMOTE_OSS \| LOCAL | Optional. Valid values: `REMOTE_OSS`, which specifies that the input file is stored in Object Storage Service (OSS), and `LOCAL`, which specifies that the input file is stored on the local client. If neither is specified, the input file must be stored on an OBServer node. |
| file_name | The path and name of the input file. You can specify the value in either of the following formats: the absolute path of a file on an OBServer node, or the URL of an OSS file when `REMOTE_OSS` is specified. **Note**: When you import a file from OSS, make sure that `REMOTE_OSS` is specified and that the OSS URL and access credentials are valid. |
| table_name | The name of the table into which the data is imported. You can specify any number of columns of the table. |
| COMPRESSION | Optional. The compression format of the input file. |
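The difference between the two incremental modes matters when the target table already contains rows with the same primary keys as the input data. The following sketch illustrates the behavior; the table, file path, and data are placeholders:

```sql
-- Assume tbl1(col1 INT PRIMARY KEY, col2 INT) already contains the row (1, 10),
-- and /home/admin/tbl1.csv contains the line "1,11".

-- 'inc' mode follows INSERT semantics: a duplicate primary key is treated as
-- an error and counted against max_error (here 0, so the load fails).
LOAD DATA /*+ DIRECT(true, 0, 'inc') PARALLEL(4) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl1 FIELDS TERMINATED BY ',';

-- 'inc_replace' mode follows REPLACE semantics: the existing row (1, 10)
-- is replaced by (1, 11) from the file.
LOAD DATA /*+ DIRECT(true, 0, 'inc_replace') PARALLEL(4) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl1 FIELDS TERMINATED BY ',';
```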
Note
When you use the LOAD DATA statement for incremental direct load, you can also use wildcards to import data from multiple files.
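For example, a wildcard in the file name can match several files in one statement; the following sketch assumes hypothetical files `tbl1_001.csv` and `tbl1_002.csv` in `/home/admin`:

```sql
-- Imports all files matching tbl1_*.csv in a single incremental direct load.
LOAD DATA /*+ DIRECT(true, 1024, 'inc_replace') PARALLEL(16) */ INFILE '/home/admin/tbl1_*.csv' INTO TABLE tbl1 FIELDS TERMINATED BY ',';
```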
Examples
The procedure for incremental direct load by using the `LOAD DATA` statement is the same as that for full direct load, except that the `full` option in the `DIRECT()` hint is replaced with `inc` or `inc_replace`.
Note
The following example shows how to import data from a file on a server. In OceanBase Database, you can also use the LOCAL INFILE clause in the LOAD DATA statement to import data from a local file in direct load mode. For more information about how to use LOAD DATA LOCAL INFILE, see Import data by using the LOAD DATA statement.
The following example imports data in a MySQL tenant named `mysql001`.

1. Log in to the server where the target OBServer node resides and create a test data file named `tbl1.csv` in the `/home/admin` directory.

   **Note**: In OceanBase Database, the `LOAD DATA` statement can import data only from a local input file on an OBServer node. Therefore, you must copy the file to import to an OBServer node before the import.

   ```shell
   [xxx@xxx /home/admin]# ssh admin@10.10.10.1
   [admin@xxx /home/admin]# vi tbl1.csv

   1,11
   2,22
   3,33
   ```

2. Specify the path of the file to import.

   Set the system variable `secure_file_priv` to specify the path that can be accessed for file import or export.

   **Notice**: For security reasons, you can connect to the database only through a local socket to execute the SQL statement that sets `secure_file_priv`. For more information, see secure_file_priv.

   1. Log in to the server where the target OBServer node resides.

      ```shell
      [xxx@xxx /home/admin]# ssh admin@10.10.10.1
      ```

   2. Connect to the `mysql001` tenant through a local Unix socket.

      ```shell
      obclient -S /home/admin/oceanbase/run/sql.sock -uroot@mysql001 -p******
      ```

   3. Set the path to `/home/admin`.

      ```sql
      obclient [(none)]> SET GLOBAL secure_file_priv = "/home/admin";
      Query OK, 0 rows affected
      ```

3. After you reconnect to the database, execute the `LOAD DATA /*+ DIRECT */` statement to import data.

   1. Create a table named `tbl1`.

      ```sql
      obclient [test]> CREATE TABLE tbl1(col1 INT PRIMARY KEY, col2 INT);
      Query OK, 0 rows affected
      ```

   2. Query whether the `tbl1` table contains data. At this time, the query result is an empty set.

      ```sql
      obclient [test]> SELECT * FROM tbl1;
      Empty set
      ```

   3. Import the data in the `tbl1.csv` file to the `tbl1` table in direct load mode.

      Import data to all columns of the `tbl1` table:

      ```sql
      obclient [test]> LOAD DATA /*+ direct(true,1024,'inc_replace') parallel(16) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl1 FIELDS TERMINATED BY ',';
      Query OK, 3 rows affected
      Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
      ```

      Alternatively, import data to specified columns of the `tbl1` table, for example, the `col1` and `col2` columns:

      ```sql
      obclient [test]> LOAD DATA /*+ direct(true,1024,'inc_replace') parallel(16) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl1 FIELDS TERMINATED BY ',' (col1, col2);
      Query OK, 3 rows affected
      Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
      ```

4. Verify whether the data has been imported to the `tbl1` table.

   ```sql
   obclient [test]> SELECT * FROM tbl1;
   +------+------+
   | col1 | col2 |
   +------+------+
   |    1 |   11 |
   |    2 |   22 |
   |    3 |   33 |
   +------+------+
   3 rows in set
   ```

   The query result shows that the data has been imported to the `tbl1` table.
The following example imports data in an Oracle tenant named `oracle001`.

1. Log in to the server where the target OBServer node resides and create a test data file named `tbl1.csv` in the `/home/admin` directory.

   **Note**: In OceanBase Database, the `LOAD DATA` statement can import data only from a local input file on an OBServer node. Therefore, you must copy the file to import to an OBServer node before the import.

   ```shell
   [xxx@xxx /home/admin]# ssh admin@10.10.10.1
   [admin@xxx /home/admin]# vi tbl1.csv

   1,11
   2,22
   3,33
   ```

2. Specify the path of the file to import.

   Set the system variable `secure_file_priv` to specify the path that can be accessed for file import or export.

   **Notice**: For security reasons, you can connect to the database only through a local socket to execute the SQL statement that sets `secure_file_priv`. For more information, see secure_file_priv.

   1. Log in to the server where the target OBServer node resides.

      ```shell
      [xxx@xxx /home/admin]# ssh admin@10.10.10.1
      ```

   2. Connect to the `oracle001` tenant through a local Unix socket.

      ```shell
      obclient -S /home/admin/oceanbase/run/sql.sock -usys@oracle001 -p******
      ```

   3. Set the path to `/home/admin`.

      ```sql
      obclient [(none)]> SET GLOBAL secure_file_priv = "/home/admin";
      Query OK, 0 rows affected
      ```

3. After you reconnect to the database, execute the `LOAD DATA /*+ DIRECT */` statement to import data.

   1. Create a table named `tbl2`.

      ```sql
      obclient [test]> CREATE TABLE tbl2(col1 INT PRIMARY KEY, col2 INT);
      Query OK, 0 rows affected
      ```

   2. Query whether the `tbl2` table contains data. At this time, the query result is an empty set.

      ```sql
      obclient [test]> SELECT * FROM tbl2;
      Empty set
      ```

   3. Import the data in the `tbl1.csv` file to the `tbl2` table in direct load mode.

      Import data to all columns of the `tbl2` table:

      ```sql
      obclient [test]> LOAD DATA /*+ direct(true,1024,'inc_replace') parallel(16) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl2 FIELDS TERMINATED BY ',';
      Query OK, 3 rows affected
      Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
      ```

      Alternatively, import data to specified columns of the `tbl2` table, for example, the `col1` and `col2` columns:

      ```sql
      obclient [test]> LOAD DATA /*+ direct(true,1024,'inc_replace') parallel(16) */ INFILE '/home/admin/tbl1.csv' INTO TABLE tbl2 FIELDS TERMINATED BY ',' (col1, col2);
      Query OK, 3 rows affected
      Records: 3  Deleted: 0  Skipped: 0  Warnings: 0
      ```

4. Verify whether the data has been imported to the `tbl2` table.

   ```sql
   obclient [test]> SELECT * FROM tbl2;
   +------+------+
   | col1 | col2 |
   +------+------+
   |    1 |   11 |
   |    2 |   22 |
   |    3 |   33 |
   +------+------+
   3 rows in set
   ```

   The query result shows that the data has been imported to the `tbl2` table.
Import data in direct load mode by using the INSERT INTO SELECT statement
You can use the INSERT INTO SELECT statement in combination with the `DIRECT()` hint and the `enable_parallel_dml` hint to import data in direct load mode.
Limitations
- Only PDML data can be imported in direct load mode. For more information about PDML, see Parallel DML.
- You cannot execute two statements that concurrently write data to the same table. This is because the table is locked during the data import, and only read operations are supported.
- Incremental direct load is not supported for tables with triggers.
- Incremental direct load is not supported for tables with generated columns. Note that some indexes create hidden generated columns, for example, `KEY idx_c2(c2(16)) GLOBAL`.
- Liboblog and flashback queries are not supported.
- Incremental direct load is not supported for tables with indexes (excluding primary keys).
- Incremental direct load is not supported for tables with foreign keys.
Syntax
INSERT /*+ [DIRECT(need_sort,max_error,{'inc'|'inc_replace'})] enable_parallel_dml parallel(N) */ INTO table_name select_sentence
For more information about the syntax of the INSERT INTO statement, see INSERT (MySQL mode) and INSERT (Oracle mode).
The following table describes the parameters in the syntax.
| Parameter | Description |
|---|---|
| DIRECT() | Specifies to enable direct load. Options in `DIRECT()`: `need_sort` specifies whether the imported data needs to be sorted; `max_error` specifies the maximum number of erroneous rows allowed; `'inc'` or `'inc_replace'` specifies the incremental load mode. **Notice**: When the value of the third option is `'inc_replace'`, rows with duplicate primary keys are replaced instead of causing an error. |
| enable_parallel_dml | Specifies whether to enable PDML. **Note**: Generally, the `enable_parallel_dml` hint takes effect only when it is used together with the `parallel` hint. |
| parallel(N) | Required. The DOP for loading data. The value is an integer greater than 1. |
Examples
The procedure for incremental direct load by using the INSERT INTO SELECT statement is the same as that for full direct load, except that the `full` option in the `DIRECT()` hint is replaced with `inc` or `inc_replace`.
Import partial data from the `tbl2` table to the `tbl1` table in direct load mode.

1. Query whether the `tbl1` table contains data. At this time, the query result is an empty set.

   ```sql
   obclient [test]> SELECT * FROM tbl1;
   Empty set
   ```

2. Query whether the `tbl2` table contains data.

   ```sql
   obclient [test]> SELECT * FROM tbl2;
   +------+------+------+
   | col1 | col2 | col3 |
   +------+------+------+
   |    1 | a1   |   11 |
   |    2 | a2   |   22 |
   |    3 | a3   |   33 |
   +------+------+------+
   3 rows in set
   ```

   The query result shows that the `tbl2` table contains data.

3. Execute the `INSERT /*+ DIRECT */ INTO ... SELECT` statement to import data.

   ```sql
   obclient [test]> INSERT /*+ DIRECT(true, 0, 'inc_replace') enable_parallel_dml parallel(16) */ INTO tbl1 SELECT t2.col1,t2.col3 FROM tbl2 t2;
   Query OK, 3 rows affected
   Records: 3  Duplicates: 0  Warnings: 0
   ```

4. Verify whether the data has been imported to the `tbl1` table.

   ```sql
   obclient [test]> SELECT * FROM tbl1;
   +------+------+
   | col1 | col2 |
   +------+------+
   |    1 |   11 |
   |    2 |   22 |
   |    3 |   33 |
   +------+------+
   3 rows in set
   ```

   The query result shows that the data has been imported to the `tbl1` table.

5. (Optional) In the `Note` section of the return result of the `EXPLAIN EXTENDED` statement, check whether the data is written in direct load mode.

   ```sql
   obclient [test]> EXPLAIN EXTENDED INSERT /*+ direct(true, 0, 'inc_replace') enable_parallel_dml parallel(16) */ INTO tbl1 SELECT t2.col1,t2.col3 FROM tbl2 t2;
   ```

   The return result is as follows:

   ```shell
   +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
   | Query Plan                                                                                                                                                   |
   +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
   | ============================================================================== |
   | |ID|OPERATOR                           |NAME           |EST.ROWS|EST.TIME(us)| |
   | ------------------------------------------------------------------------------ |
   | |0 |PX COORDINATOR                     |               |3       |27          | |
   | |1 |└─EXCHANGE OUT DISTR               |:EX10001       |3       |27          | |
   | |2 |  └─INSERT                         |               |3       |26          | |
   | |3 |    └─EXCHANGE IN DISTR            |               |3       |1           | |
   | |4 |      └─EXCHANGE OUT DISTR (RANDOM)|:EX10000       |3       |1           | |
   | |5 |        └─SUBPLAN SCAN             |ANONYMOUS_VIEW1|3       |1           | |
   | |6 |          └─PX BLOCK ITERATOR      |               |3       |1           | |
   | |7 |            └─TABLE FULL SCAN      |t2             |3       |1           | |
   | ============================================================================== |
   | Outputs & filters: |
   | ------------------------------------- |
   | 0 - output(nil), filter(nil), rowset=16 |
   | 1 - output(nil), filter(nil), rowset=16 |
   |      dop=16 |
   | 2 - output(nil), filter(nil) |
   |      columns([{tbl1: ({tbl1: (tbl1.__pk_increment(0x7efa518277d0), tbl1.col1(0x7efa518119c0), tbl1.col3(0x7efa51811e00))})}]), partitions(p0), |
   |      column_values([T_HIDDEN_PK(0x7efa51827c10)], [column_conv(INT,PS: (11,0),NULL,ANONYMOUS_VIEW1.col1(0x7efa51826f50))(0x7efa51828030)], [column_conv(INT, |
   |      PS: (11,0),NULL,ANONYMOUS_VIEW1.col3(0x7efa51827390))(0x7efa5182ff60)]) |
   | 3 - output([T_HIDDEN_PK(0x7efa51827c10)], [ANONYMOUS_VIEW1.col1(0x7efa51826f50)], [ANONYMOUS_VIEW1.col3(0x7efa51827390)]), filter(nil), rowset=16 |
   | 4 - output([T_HIDDEN_PK(0x7efa51827c10)], [ANONYMOUS_VIEW1.col1(0x7efa51826f50)], [ANONYMOUS_VIEW1.col3(0x7efa51827390)]), filter(nil), rowset=16 |
   |      dop=16 |
   | 5 - output([ANONYMOUS_VIEW1.col1(0x7efa51826f50)], [ANONYMOUS_VIEW1.col3(0x7efa51827390)]), filter(nil), rowset=16 |
   |      access([ANONYMOUS_VIEW1.col1(0x7efa51826f50)], [ANONYMOUS_VIEW1.col3(0x7efa51827390)]) |
   | 6 - output([t2.col1(0x7efa51825f10)], [t2.col3(0x7efa518267c0)]), filter(nil), rowset=16 |
   | 7 - output([t2.col1(0x7efa51825f10)], [t2.col3(0x7efa518267c0)]), filter(nil), rowset=16 |
   |      access([t2.col1(0x7efa51825f10)], [t2.col3(0x7efa518267c0)]), partitions(p0) |
   |      is_index_back=false, is_global_index=false, |
   |      range_key([t2.__pk_increment(0x7efa51856410)]), range(MIN ; MAX)always true |
   | Used Hint: |
   | ------------------------------------- |
   | /*+ |
   |  |
   | USE_PLAN_CACHE( NONE ) |
   | PARALLEL(16) |
   | ENABLE_PARALLEL_DML |
   | DIRECT(TRUE, 0, 'INC_REPLACE') |
   | */ |
   | Qb name trace: |
   | ------------------------------------- |
   |  stmt_id:0, stmt_type:T_EXPLAIN |
   |  stmt_id:1, INS$1 |
   |  stmt_id:2, SEL$1 |
   | Outline Data: |
   | ------------------------------------- |
   | /*+ |
   | BEGIN_OUTLINE_DATA |
   | PARALLEL(@"SEL$1" "t2"@"SEL$1" 16) |
   | FULL(@"SEL$1" "t2"@"SEL$1") |
   | USE_PLAN_CACHE( NONE ) |
   | PARALLEL(16) |
   | ENABLE_PARALLEL_DML |
   | OPTIMIZER_FEATURES_ENABLE('4.3.3.0') |
   | DIRECT(TRUE, 0, 'INC_REPLACE') |
   | END_OUTLINE_DATA |
   | */ |
   | Optimization Info: |
   | ------------------------------------- |
   | t2: |
   |     table_rows:3 |
   |     physical_range_rows:3 |
   |     logical_range_rows:3 |
   |     index_back_rows:0 |
   |     output_rows:3 |
   |     table_dop:16 |
   |     dop_method:Global DOP |
   |     avaiable_index_name: [tbl2] |
   |     stats info: [version=0, is_locked=0, is_expired=0] |
   |     dynamic sampling level:0 |
   |     estimation method: [DEFAULT, STORAGE] |
   | Plan Type: |
   |     DISTRIBUTED |
   | Note: |
   |     Degree of Parallelism is 16 because of hint |
   |     Direct-mode is enabled in insert into select |
   +--------------------------------------------------------------------------------------------------------------------------------------------------------------+
   77 rows in set (0.009 sec)
   ```
Import partial data from the `tbl4` table to the `tbl3` table in direct load mode.

1. Query whether the `tbl3` table contains data. At this time, the query result is an empty set.

   ```sql
   obclient [test]> SELECT * FROM tbl3;
   Empty set
   ```

2. Query whether the `tbl4` table contains data.

   ```sql
   obclient [test]> SELECT * FROM tbl4;
   +------+------+------+
   | COL1 | COL2 | COL3 |
   +------+------+------+
   |    1 | a1   |   11 |
   |    2 | a2   |   22 |
   |    3 | a3   |   33 |
   +------+------+------+
   3 rows in set (0.000 sec)
   ```

   The query result shows that the `tbl4` table contains data.

3. Execute the `INSERT /*+ DIRECT */ INTO ... SELECT` statement to import data.

   ```sql
   obclient [test]> INSERT /*+ direct(true, 0, 'inc_replace') enable_parallel_dml parallel(16) */ INTO tbl3 SELECT t2.col1, t2.col3 FROM tbl4 t2 WHERE ROWNUM <= 10000;
   Query OK, 3 rows affected
   Records: 3  Duplicates: 0  Warnings: 0
   ```

4. Verify whether the data has been imported to the `tbl3` table.

   ```sql
   obclient [test]> SELECT * FROM tbl3;
   +------+------+
   | col1 | col3 |
   +------+------+
   |    1 |   11 |
   |    2 |   22 |
   |    3 |   33 |
   +------+------+
   3 rows in set
   ```

   The query result shows that the data has been imported to the `tbl3` table.

5. (Optional) In the `Note` section of the return result of the `EXPLAIN EXTENDED` statement, check whether the data is written in direct load mode.

   ```sql
   obclient [test]> EXPLAIN EXTENDED INSERT /*+ direct(true, 0, 'inc_replace') enable_parallel_dml parallel(16) */ INTO tbl3 SELECT t2.col1,t2.col3 FROM tbl4 t2 WHERE ROWNUM <= 10000;
   ```

   The return result is as follows:

   ```shell
   +--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
   | Query Plan                                                                                                                                                         |
   +--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
   | ========================================================================================= |
   | |ID|OPERATOR                                 |NAME           |EST.ROWS|EST.TIME(us)| |
   | ----------------------------------------------------------------------------------------- |
   | |0 |PX COORDINATOR                           |               |3       |34          | |
   | |1 |└─EXCHANGE OUT DISTR                     |:EX10002       |3       |33          | |
   | |2 |  └─INSERT                               |               |3       |32          | |
   | |3 |    └─EXCHANGE IN DISTR                  |               |3       |7           | |
   | |4 |      └─EXCHANGE OUT DISTR (RANDOM)      |:EX10001       |3       |7           | |
   | |5 |        └─MATERIAL                       |               |3       |3           | |
   | |6 |          └─SUBPLAN SCAN                 |ANONYMOUS_VIEW1|3       |3           | |
   | |7 |            └─LIMIT                      |               |3       |3           | |
   | |8 |              └─EXCHANGE IN DISTR        |               |3       |3           | |
   | |9 |                └─EXCHANGE OUT DISTR     |:EX10000       |3       |1           | |
   | |10|                  └─LIMIT                |               |3       |1           | |
   | |11|                    └─PX BLOCK ITERATOR  |               |3       |1           | |
   | |12|                      └─COLUMN TABLE FULL SCAN|T2        |3       |1           | |
   | ========================================================================================= |
   | Outputs & filters: |
   | ------------------------------------- |
   | 0 - output(nil), filter(nil), rowset=16 |
   | 1 - output(nil), filter(nil), rowset=16 |
   |      dop=16 |
   | 2 - output(nil), filter(nil) |
   |      columns([{TBL3: ({TBL3: (TBL3.__pk_increment(0x7efaad22b0e0), TBL3.COL1(0x7efaad2123f0), TBL3.COL3(0x7efaad212830))})}]), partitions(p0), |
   |      column_values([T_HIDDEN_PK(0x7efaad22b520)], [column_conv(NUMBER,PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL1(0x7efaad22a860))(0x7efaad22b930)], [column_conv(NUMBER, |
   |      PS: (-1,0),NULL,ANONYMOUS_VIEW1.COL3(0x7efaad22aca0))(0x7efaad233850)]) |
   | 3 - output([T_HIDDEN_PK(0x7efaad22b520)], [ANONYMOUS_VIEW1.COL1(0x7efaad22a860)], [ANONYMOUS_VIEW1.COL3(0x7efaad22aca0)]), filter(nil), rowset=16 |
   | 4 - output([T_HIDDEN_PK(0x7efaad22b520)], [ANONYMOUS_VIEW1.COL1(0x7efaad22a860)], [ANONYMOUS_VIEW1.COL3(0x7efaad22aca0)]), filter(nil), rowset=16 |
   |      is_single, dop=1 |
   | 5 - output([ANONYMOUS_VIEW1.COL1(0x7efaad22a860)], [ANONYMOUS_VIEW1.COL3(0x7efaad22aca0)]), filter(nil), rowset=16 |
   | 6 - output([ANONYMOUS_VIEW1.COL1(0x7efaad22a860)], [ANONYMOUS_VIEW1.COL3(0x7efaad22aca0)]), filter(nil), rowset=16 |
   |      access([ANONYMOUS_VIEW1.COL1(0x7efaad22a860)], [ANONYMOUS_VIEW1.COL3(0x7efaad22aca0)]) |
   | 7 - output([T2.COL1(0x7efaad229470)], [T2.COL3(0x7efaad229f40)]), filter(nil), rowset=16 |
   |      limit(cast(FLOOR(cast(10000, NUMBER(-1, -85))(0x7efaad227ef0))(0x7efaad25ac30), BIGINT(-1, 0))(0x7efaad25b6d0)), offset(nil) |
   | 8 - output([T2.COL1(0x7efaad229470)], [T2.COL3(0x7efaad229f40)]), filter(nil), rowset=16 |
   | 9 - output([T2.COL1(0x7efaad229470)], [T2.COL3(0x7efaad229f40)]), filter(nil), rowset=16 |
   |      dop=16 |
   | 10 - output([T2.COL1(0x7efaad229470)], [T2.COL3(0x7efaad229f40)]), filter(nil), rowset=16 |
   |      limit(cast(FLOOR(cast(10000, NUMBER(-1, -85))(0x7efaad227ef0))(0x7efaad25ac30), BIGINT(-1, 0))(0x7efaad25b6d0)), offset(nil) |
   | 11 - output([T2.COL1(0x7efaad229470)], [T2.COL3(0x7efaad229f40)]), filter(nil), rowset=16 |
   | 12 - output([T2.COL1(0x7efaad229470)], [T2.COL3(0x7efaad229f40)]), filter(nil), rowset=16 |
   |      access([T2.COL1(0x7efaad229470)], [T2.COL3(0x7efaad229f40)]), partitions(p0) |
   |      limit(cast(FLOOR(cast(10000, NUMBER(-1, -85))(0x7efaad227ef0))(0x7efaad25ac30), BIGINT(-1, 0))(0x7efaad25b6d0)), offset(nil), is_index_back=false, |
   |      is_global_index=false, |
   |      range_key([T2.__pk_increment(0x7efaad25a720)]), range(MIN ; MAX)always true |
   | Used Hint: |
   | ------------------------------------- |
   | /*+ |
   |  |
   | USE_PLAN_CACHE( NONE ) |
   | PARALLEL(16) |
   | ENABLE_PARALLEL_DML |
   | DIRECT(TRUE, 0, 'INC_REPLACE') |
   | */ |
   | Qb name trace: |
   | ------------------------------------- |
   |  stmt_id:0, stmt_type:T_EXPLAIN |
   |  stmt_id:1, INS$1 |
   |  stmt_id:2, SEL$1 |
   |  stmt_id:3, parent:SEL$1 > SEL$658037CB > SEL$DCAFFB86 |
   | Outline Data: |
   | ------------------------------------- |
   | /*+ |
   | BEGIN_OUTLINE_DATA |
   | PARALLEL(@"SEL$DCAFFB86" "T2"@"SEL$1" 16) |
   | FULL(@"SEL$DCAFFB86" "T2"@"SEL$1") |
   | USE_COLUMN_TABLE(@"SEL$DCAFFB86" "T2"@"SEL$1") |
   | MERGE(@"SEL$658037CB" < "SEL$1") |
   | USE_PLAN_CACHE( NONE ) |
   | PARALLEL(16) |
   | ENABLE_PARALLEL_DML |
   | OPTIMIZER_FEATURES_ENABLE('4.3.3.0') |
   | DIRECT(TRUE, 0, 'INC_REPLACE') |
   | END_OUTLINE_DATA |
   | */ |
   | Optimization Info: |
   | ------------------------------------- |
   | T2: |
   |     table_rows:3 |
   |     physical_range_rows:3 |
   |     logical_range_rows:3 |
   |     index_back_rows:0 |
   |     output_rows:3 |
   |     table_dop:16 |
   |     dop_method:Global DOP |
   |     avaiable_index_name: [TBL4] |
   |     stats info: [version=0, is_locked=0, is_expired=0] |
   |     dynamic sampling level:0 |
   |     estimation method: [DEFAULT, STORAGE] |
   | Plan Type: |
   |     DISTRIBUTED |
   | Note: |
   |     Degree of Parallelism is 16 because of hint |
   |     Direct-mode is enabled in insert into select |
   | Expr Constraints: |
   |     cast(FLOOR(cast(10000, NUMBER(-1, -85))), BIGINT(-1, 0)) >= 1 result is TRUE |
   +--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
   96 rows in set (0.007 sec)
   ```