Parallel DML improves the efficiency of inserting, updating, and deleting data in large database tables and indexes by using the parallel execution mechanism. For decision support systems (DSS), parallel DML provides query and update features, complementing parallel query functionality. In online transaction processing (OLTP) databases, parallel DML operations can speed up batch processing jobs.
Enable and disable parallel DML
OceanBase Database allows you to explicitly enable parallel DML in an SQL statement or a session.
Enable and disable parallel DML in SQL statements
To enable parallel DML in an SQL statement, add the following hint to the statement:
/*+ ENABLE_PARALLEL_DML PARALLEL(3) */
Generally, you must add the ENABLE_PARALLEL_DML hint and the PARALLEL hint to an SQL statement to enable parallel DML. However, if the target table has a schema that specifies the table-level parallelism, you need to add only the ENABLE_PARALLEL_DML hint.
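As a minimal sketch of this case, suppose a table declares its own parallelism in the schema (the table names and DOP below are illustrative, not from the examples that follow):

```sql
-- Table-level parallelism of 4 is declared in the schema (illustrative).
CREATE TABLE sales (id INT PRIMARY KEY, amount INT) PARALLEL 4;
CREATE TABLE sales_staging (id INT PRIMARY KEY, amount INT);

-- Because the target table carries its own DOP, the ENABLE_PARALLEL_DML
-- hint alone is sufficient; no PARALLEL hint is needed in the statement.
INSERT /*+ ENABLE_PARALLEL_DML */ INTO sales SELECT * FROM sales_staging;
```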
The following example shows how to add the ENABLE_PARALLEL_DML hint together with the PARALLEL(n) hint, where n is the degree of parallelism (DOP) and must be greater than 1. In this example, the DOP is 2, shown as dop=2 in the plan.
CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 INT) NOPARALLEL;
CREATE TABLE t2 (c1 INT PRIMARY KEY, c2 INT) PARALLEL 11 PARTITION BY HASH(c1) PARTITIONS 3;
CREATE TABLE t3 (c1 INT PRIMARY KEY, c2 INT) PARALLEL 10 PARTITION BY HASH(c1) PARTITIONS 4;
obclient> EXPLAIN INSERT /*+ ENABLE_PARALLEL_DML PARALLEL(2) */ INTO t1 SELECT * FROM t3;
+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| ========================================================================= |
| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
| ------------------------------------------------------------------------- |
| |0 |OPTIMIZER STATS MERGE | |1 |18 | |
| |1 | PX COORDINATOR | |1 |18 | |
| |2 | EXCHANGE OUT DISTR |:EX10001 |1 |18 | |
| |3 | INSERT | |1 |17 | |
| |4 | EXCHANGE IN DISTR | |1 |4 | |
| |5 | EXCHANGE OUT DISTR (HASH)|:EX10000 |1 |4 | |
| |6 | OPTIMIZER STATS GATHER | |1 |4 | |
| |7 | SUBPLAN SCAN |ANONYMOUS_VIEW1|1 |4 | |
| |8 | PX BLOCK ITERATOR | |1 |4 | |
| |9 | TABLE SCAN |t3 |1 |4 | |
| ========================================================================= |
| Outputs & filters: |
| ------------------------------------- |
| 0 - output(nil), filter(nil), rowset=256 |
| 1 - output([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.c1)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.c2)]), filter(nil), rowset=256 |
| 2 - output([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.c1)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.c2)]), filter(nil), rowset=256 |
| dop=2 |
| 3 - output([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.c1)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.c2)]), filter(nil) |
| columns([{t1: ({t1: (t1.c1, t1.c2)})}]), partitions(p0), |
| column_values([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.c1)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.c2)]) |
| 4 - output([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.c1)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.c2)]), filter(nil), rowset=256 |
| 5 - output([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.c1)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.c2)]), filter(nil), rowset=256 |
| (#keys=1, [column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.c1)]), dop=2 |
| 6 - output([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.c1)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.c2)]), filter(nil), rowset=256 |
| 7 - output([ANONYMOUS_VIEW1.c1], [ANONYMOUS_VIEW1.c2]), filter(nil), rowset=256 |
| access([ANONYMOUS_VIEW1.c1], [ANONYMOUS_VIEW1.c2]) |
| 8 - output([t3.c1], [t3.c2]), filter(nil), rowset=256 |
| 9 - output([t3.c1], [t3.c2]), filter(nil), rowset=256 |
| access([t3.c1], [t3.c2]), partitions(p[0-3]) |
| is_index_back=false, is_global_index=false, |
| range_key([t3.c1]), range(MIN ; MAX)always true |
+-------------------------------------------------------------------------------------------------------------------------------------------------------+
34 rows in set
To disable parallel DML in an SQL statement, add the following hint to the statement:
/*+ DISABLE_PARALLEL_DML */
Even if parallel DML is enabled in a session, you can add the DISABLE_PARALLEL_DML hint to a specific SQL statement to disable parallel DML for that statement.
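A minimal sketch of this combination, reusing the t1 table from the example above (the session is assumed to be in Oracle mode):

```sql
-- Enable parallel DML for the whole session (Oracle mode).
ALTER SESSION ENABLE PARALLEL DML;

-- This statement opts out: the DISABLE_PARALLEL_DML hint makes only this
-- DELETE run serially, while other DML statements in the session can
-- still run in parallel.
DELETE /*+ DISABLE_PARALLEL_DML */ FROM t1 WHERE c2 IS NULL;
```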
Enable and disable parallel DML in a session
By default, parallel DML is disabled even if an SQL statement contains the PARALLEL hint. To use parallel DML without adding hints to each statement, enable it for the session.
To enable parallel DML in a session in MySQL mode, use the following syntax:
SET _FORCE_PARALLEL_DML_DOP = n;
In this syntax, n must be greater than 1.
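For instance, the following sketch enables parallel DML with a DOP of 4 for a MySQL-mode session and then runs an INSERT without any per-statement hint (it reuses the t1 and t3 tables from the examples above):

```sql
-- MySQL mode: a session-level DOP greater than 1 enables parallel DML.
SET _FORCE_PARALLEL_DML_DOP = 4;

-- No hint is required; the INSERT inherits the session DOP of 4.
INSERT INTO t1 SELECT * FROM t3;

-- Setting the DOP back to 1 disables parallel DML for the session.
SET _FORCE_PARALLEL_DML_DOP = 1;
```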
To enable parallel DML in a session in Oracle mode, use the following syntax:
ALTER SESSION ENABLE PARALLEL DML;
To forcibly enable parallel DML in a session in Oracle mode, execute the following statement:
ALTER SESSION FORCE PARALLEL DML PARALLEL n;
The following example shows how to forcibly enable parallel DML in a session in Oracle mode.
CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 INT) NOPARALLEL;
CREATE TABLE t2 (c1 INT PRIMARY KEY, c2 INT) PARALLEL 11 PARTITION BY HASH(c1) PARTITIONS 3;
CREATE TABLE t3 (c1 INT PRIMARY KEY, c2 INT) PARALLEL 10 PARTITION BY HASH(c1) PARTITIONS 4;
obclient> ALTER SESSION FORCE PARALLEL DML PARALLEL 6;
Query OK, 0 rows affected
obclient> EXPLAIN INSERT INTO t2 SELECT * FROM t3;
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ============================================================================== |
| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
| ------------------------------------------------------------------------------ |
| |0 |OPTIMIZER STATS MERGE | |1 |16 | |
| |1 | PX COORDINATOR | |1 |16 | |
| |2 | EXCHANGE OUT DISTR |:EX10001 |1 |15 | |
| |3 | INSERT | |1 |15 | |
| |4 | EXCHANGE IN DISTR | |1 |2 | |
| |5 | EXCHANGE OUT DISTR (PKEY HASH)|:EX10000 |1 |2 | |
| |6 | OPTIMIZER STATS GATHER | |1 |2 | |
| |7 | SUBPLAN SCAN |ANONYMOUS_VIEW1|1 |2 | |
| |8 | PX BLOCK ITERATOR | |1 |2 | |
| |9 | TABLE SCAN |T3 |1 |2 | |
| ============================================================================== |
| Outputs & filters: |
| ------------------------------------- |
| 0 - output(nil), filter(nil), rowset=256 |
| 1 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]), filter(nil), rowset=256 |
| 2 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]), filter(nil), rowset=256 |
| dop=6 |
| 3 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]), filter(nil) |
| columns([{T2: ({T2: (T2.C1, T2.C2)})}]), partitions(p[0-2]), |
| column_values([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]) |
| 4 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)], [PARTITION_ID]), filter(nil), |
| rowset=256 |
| 5 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)], [PARTITION_ID]), filter(nil), |
| rowset=256 |
| (#keys=1, [column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)]), dop=6 |
| 6 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]), filter(nil), rowset=256 |
| 7 - output([ANONYMOUS_VIEW1.C1], [ANONYMOUS_VIEW1.C2]), filter(nil), rowset=256 |
| access([ANONYMOUS_VIEW1.C1], [ANONYMOUS_VIEW1.C2]) |
| 8 - output([T3.C1], [T3.C2]), filter(nil), rowset=256 |
| 9 - output([T3.C1], [T3.C2]), filter(nil), rowset=256 |
| access([T3.C1], [T3.C2]), partitions(p[0-3]) |
| is_index_back=false, is_global_index=false, |
| range_key([T3.C1]), range(MIN ; MAX)always true |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
36 rows in set
Note that if you enable parallel DML in an SQL statement, the DOP specified in the statement takes precedence over the DOP explicitly specified for the session in an earlier statement. An example in Oracle mode is as follows:
CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 INT) NOPARALLEL;
CREATE TABLE t2 (c1 INT PRIMARY KEY, c2 INT) PARALLEL 11 PARTITION BY HASH(c1) PARTITIONS 3;
CREATE TABLE t3 (c1 INT PRIMARY KEY, c2 INT) PARALLEL 10 PARTITION BY HASH(c1) PARTITIONS 4;
obclient> ALTER SESSION FORCE PARALLEL DML PARALLEL 6;
Query OK, 0 rows affected
obclient> EXPLAIN INSERT /*+ PARALLEL(3) */ INTO t2 SELECT * FROM t3;
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ============================================================================== |
| |ID|OPERATOR |NAME |EST.ROWS|EST.TIME(us)| |
| ------------------------------------------------------------------------------ |
| |0 |OPTIMIZER STATS MERGE | |1 |17 | |
| |1 | PX COORDINATOR | |1 |17 | |
| |2 | EXCHANGE OUT DISTR |:EX10001 |1 |17 | |
| |3 | INSERT | |1 |16 | |
| |4 | EXCHANGE IN DISTR | |1 |3 | |
| |5 | EXCHANGE OUT DISTR (PKEY HASH)|:EX10000 |1 |3 | |
| |6 | OPTIMIZER STATS GATHER | |1 |3 | |
| |7 | SUBPLAN SCAN |ANONYMOUS_VIEW1|1 |3 | |
| |8 | PX BLOCK ITERATOR | |1 |3 | |
| |9 | TABLE SCAN |T3 |1 |3 | |
| ============================================================================== |
| Outputs & filters: |
| ------------------------------------- |
| 0 - output(nil), filter(nil), rowset=256 |
| 1 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]), filter(nil), rowset=256 |
| 2 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]), filter(nil), rowset=256 |
| dop=3 |
| 3 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]), filter(nil) |
| columns([{T2: ({T2: (T2.C1, T2.C2)})}]), partitions(p[0-2]), |
| column_values([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]) |
| 4 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)], [PARTITION_ID]), filter(nil), |
| rowset=256 |
| 5 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)], [PARTITION_ID]), filter(nil), |
| rowset=256 |
| (#keys=1, [column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)]), dop=3 |
| 6 - output([column_conv(NUMBER,PS:(-1,0),NOT NULL,ANONYMOUS_VIEW1.C1)], [column_conv(NUMBER,PS:(-1,0),NULL,ANONYMOUS_VIEW1.C2)]), filter(nil), rowset=256 |
| 7 - output([ANONYMOUS_VIEW1.C1], [ANONYMOUS_VIEW1.C2]), filter(nil), rowset=256 |
| access([ANONYMOUS_VIEW1.C1], [ANONYMOUS_VIEW1.C2]) |
| 8 - output([T3.C1], [T3.C2]), filter(nil), rowset=256 |
| 9 - output([T3.C1], [T3.C2]), filter(nil), rowset=256 |
| access([T3.C1], [T3.C2]), partitions(p[0-3]) |
| is_index_back=false, is_global_index=false, |
| range_key([T3.C1]), range(MIN ; MAX)always true |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
36 rows in set
To disable parallel DML in MySQL mode, use the following SQL statement:
SET _FORCE_PARALLEL_DML_DOP = 1;
To disable parallel DML in Oracle mode, use the following SQL statement:
ALTER SESSION DISABLE PARALLEL DML;
If you disable parallel DML, the DML operations in an SQL statement are executed serially even if the statement contains the PARALLEL hint. If you enable parallel DML for a session, parallel execution applies to all DML statements in that session. If you enable parallel DML for a single SQL statement by using the ENABLE_PARALLEL_DML hint, parallel execution applies only to that statement. However, if no degree of parallelism is in effect for any table involved, or if parallel execution is not allowed for the DML operation, the operation is executed serially even when parallel DML is enabled.
Support for parallel processing across partitions
The following SQL statements demonstrate parallel processing across table partitions.

Create a test table named branch_sp_tbl_src.

CREATE TABLE branch_sp_tbl_src(id INT PRIMARY KEY, v INT) PARTITION BY KEY(id) PARTITIONS 4;

Create a test table named branch_sp_tbl_dest.

CREATE TABLE branch_sp_tbl_dest LIKE branch_sp_tbl_src;

View the execution plan. Execute the following SQL statement to view how the insert operation is performed.

obclient [test]> EXPLAIN BASIC INSERT /*+enable_parallel_dml parallel(100) query_timeout(1000000000)*/ INTO branch_sp_tbl_dest SELECT id, v FROM branch_sp_tbl_src ON DUPLICATE KEY UPDATE v = v + 1;

The return result is as follows:
+-------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan                                                                                                                          |
+-------------------------------------------------------------------------------------------------------------------------------------+
| ================================================                                                                                    |
| |ID|OPERATOR                   |NAME             |                                                                                   |
| ------------------------------------------------                                                                                    |
| |0 |PX COORDINATOR             |                 |                                                                                   |
| |1 |└─EXCHANGE OUT DISTR       |:EX10000         |                                                                                   |
| |2 |  └─PX PARTITION ITERATOR  |                 |                                                                                   |
| |3 |    └─INSERT_UP            |                 |                                                                                   |
| |4 |      └─SUBPLAN SCAN       |ANONYMOUS_VIEW1  |                                                                                   |
| |5 |        └─TABLE FULL SCAN  |branch_sp_tbl_src|                                                                                   |
| ================================================                                                                                    |
| Outputs & filters:                                                                                                                  |
| -------------------------------------                                                                                               |
|   0 - output(nil), filter(nil), rowset=16                                                                                           |
|   1 - output(nil), filter(nil), rowset=16                                                                                           |
|       dop=100                                                                                                                       |
|   2 - output(nil), filter(nil), rowset=16                                                                                           |
|       partition wise, force partition granule                                                                                       |
|   3 - output(nil), filter(nil)                                                                                                      |
|       columns([{branch_sp_tbl_dest: ({branch_sp_tbl_dest: (branch_sp_tbl_dest.id, branch_sp_tbl_dest.v)})}]), partitions(p[0-3]),   |
|       column_values([column_conv(INT,PS:(11,0),NOT NULL,ANONYMOUS_VIEW1.id)], [column_conv(INT,PS:(11,0),NULL,ANONYMOUS_VIEW1.v)]), |
|       update([branch_sp_tbl_dest.v=column_conv(INT,PS:(11,0),NULL,cast(branch_sp_tbl_dest.v + 1, INT(-1, 0)))])                     |
|   4 - output([ANONYMOUS_VIEW1.id], [ANONYMOUS_VIEW1.v]), filter(nil), rowset=16                                                     |
|       access([ANONYMOUS_VIEW1.id], [ANONYMOUS_VIEW1.v])                                                                             |
|   5 - output([branch_sp_tbl_src.id], [branch_sp_tbl_src.v]), filter(nil), rowset=16                                                 |
|       access([branch_sp_tbl_src.id], [branch_sp_tbl_src.v]), partitions(p[0-3])                                                     |
|       is_index_back=false, is_global_index=false,                                                                                   |
|       range_key([branch_sp_tbl_src.id]), range(MIN ; MAX)always true                                                                |
+-------------------------------------------------------------------------------------------------------------------------------------+
27 rows in set

The query plan involves the following operators:
- Operator 0: indicates a parallel execution coordinator that manages the parallel execution processes.
- Operator 1: indicates the distribution of data among different execution nodes.
- Operator 2: indicates that the query traverses partitions. "Partition-wise" processing means that the query intelligently processes data among partitions.
- Operator 3: indicates an insert or update operation. If a key being inserted does not exist in the table, the operation performs an insertion; otherwise, it performs an update.
- Operators 4 and 5: indicate a full table scan of the branch_sp_tbl_src table. This table is the data source from which data is selected for insertion.
Usage
OceanBase Database supports the parallel execution of the following SQL statements:
- INSERT INTO ... SELECT
- UPDATE
- DELETE
If any of the following index types exist on a table, parallel execution is supported:
- Local indexes
- Global indexes on a single partition
- Global indexes on multiple partitions
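As a sketch of the first case, a parallel UPDATE remains available when the target table carries a local index (the table and index names below are illustrative; the `LOCAL` keyword on the index is assumed to be accepted in MySQL mode):

```sql
-- A partitioned table with a local index (illustrative names).
CREATE TABLE orders_idx (
  order_id INT PRIMARY KEY,
  status VARCHAR(16),
  amount INT,
  INDEX idx_status (status) LOCAL
) PARTITION BY HASH(order_id) PARTITIONS 4;

-- The local index is maintained by the same parallel workers that
-- update the base table partitions.
UPDATE /*+ ENABLE_PARALLEL_DML PARALLEL(4) */ orders_idx
SET status = 'SHIPPED'
WHERE status = 'READY';
```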
Supported DML operations for parallel DML
| DML operation | INSERT | UPDATE | DELETE | MERGE INTO |
|---|---|---|---|---|
| A foreign key, trigger, PL UDF (user-defined function), or unique index exists on the table | No | No | No | No |
| Multi-table DML | The INSERT ALL syntax is not supported. | Supported only when associated with parallel UPDATE. | Same as UPDATE | Syntax not supported |
| Auto-increment column | Parallel DML is supported in some scenarios. For more information, see the following examples. | Supported | Supported | Supported (seq) |
| ArrayBinding batch optimization | No | No | No | No |
| Use of USER_VARIABLE | No | No | No | No |
| IGNORE | No | No | No | Syntax not supported |
| DBLink | No | No | No | No |
| Others | Columns with the ON UPDATE CURRENT_TIMESTAMP attribute are not supported for parallel DML. | | | |

Note: Whether a trigger will fire cannot be determined in advance. If a trigger is not fired during a DML operation, the system continues to perform the DML operation in parallel. For example, an INSERT operation can be performed in parallel on a table that has an UPDATE trigger, but an UPDATE operation on that table cannot be performed in parallel.
If an INSERT statement writes data to a table that has an auto-increment column, and that column is the primary key or the partitioning key, parallel DML is not supported when the statement explicitly specifies values for the column.
Here are some examples:
Create a table named tbl1 and specify the col1 column as the auto-increment primary key.

CREATE TABLE tbl1(col1 INT AUTO_INCREMENT PRIMARY KEY, col2 INT, col3 INT);

Create a table named tbl2 and specify the col1 column as an auto-increment column.

CREATE TABLE tbl2(col1 INT AUTO_INCREMENT, col2 INT, col3 INT);

Create a table named tbl3.

CREATE TABLE tbl3(col1 INT, col2 INT, col3 INT);

Insert data into tbl1. Parallel DML is not supported because the col1 column, which is an auto-increment primary key, is specified in the statement.

INSERT /*+enable_parallel_dml parallel(3)*/ INTO tbl1 SELECT * FROM tbl3;

Insert data into tbl1 without the auto-increment column. Parallel DML is supported because the col1 column is not specified in the statement.

INSERT /*+enable_parallel_dml parallel(3)*/ INTO tbl1(col2, col3) SELECT col2, col3 FROM tbl3;

Insert data into tbl2. Parallel DML is supported because the col1 column is an auto-increment column only, not a primary key or partitioning key.

INSERT /*+enable_parallel_dml parallel(3)*/ INTO tbl2 SELECT * FROM tbl3;
Parallel DML for associated updates
Note
In the OceanBase Database V4.3.x series, associated updates that use multi-table joins (JOIN) in UPDATE statements have supported parallel DML since V4.3.5.
An associated update is an update operation that updates the data in a target table based on the data in associated tables by using multi-table joins in an UPDATE statement. Parallel DML can significantly improve the performance of large-scale data update operations.
The following example shows how to perform an associated update by using parallel DML.
Create two test tables named customers and orders.

CREATE TABLE customers (
    id INT PRIMARY KEY,
    name VARCHAR(50),
    customer_level VARCHAR(10)
);

CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    customer_id INT,
    discount DECIMAL(3, 2),
    amount DECIMAL(10, 2)
);

Insert test data.

Insert customer data into the customers table:

INSERT INTO customers (id, name, customer_level) VALUES
    (1, 'Alice', 'VIP'),
    (2, 'Bob', 'NORMAL'),
    (3, 'Charlie', 'OTHER');

Insert order data into the orders table:

INSERT INTO orders (order_id, customer_id, discount, amount) VALUES
    (101, 1, NULL, 1000.00),
    (102, 2, NULL, 2000.00),
    (103, 3, NULL, 1500.00),
    (104, 1, NULL, 3000.00),
    (105, 2, NULL, 2500.00);
Perform an associated update by using parallel DML.

Run the EXPLAIN statement to view the execution plan of the parallel DML operation. Specify the degree of parallelism by using the PARALLEL hint. Update the values in the discount column of the orders table based on the values in the customer_level column of the customers table.

EXPLAIN UPDATE /*+ PARALLEL(4) */ orders o
JOIN customers c ON o.customer_id = c.id
SET o.discount = CASE
    WHEN c.customer_level = 'VIP' THEN 0.9
    WHEN c.customer_level = 'NORMAL' THEN 0.95
    ELSE 1.0
END;

The return result is as follows:

+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Query Plan                                                                                                                                                  |
+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ==========================================================================                                                                                  |
| |ID|OPERATOR                             |NAME    |EST.ROWS|EST.TIME(us)|                                                                                   |
| --------------------------------------------------------------------------                                                                                  |
| |0 |DISTRIBUTED UPDATE                   |        |3       |50          |                                                                                   |
| |1 |└─PX COORDINATOR                     |        |3       |3           |                                                                                   |
| |2 |  └─EXCHANGE OUT DISTR               |:EX10001|3       |3           |                                                                                   |
| |3 |    └─SHARED HASH JOIN               |        |3       |3           |                                                                                   |
| |4 |      ├─EXCHANGE IN DISTR            |        |3       |2           |                                                                                   |
| |5 |      │ └─EXCHANGE OUT DISTR (BC2HOST)|:EX10000|3      |2           |                                                                                   |
| |6 |      │   └─PX BLOCK ITERATOR        |        |3       |1           |                                                                                   |
| |7 |      │     └─TABLE FULL SCAN        |c       |3       |1           |                                                                                   |
| |8 |      └─PX BLOCK ITERATOR            |        |5       |1           |                                                                                   |
| |9 |        └─TABLE FULL SCAN            |o       |5       |1           |                                                                                   |
| ==========================================================================                                                                                  |
| Outputs & filters:                                                                                                                                          |
| -------------------------------------                                                                                                                       |
|   0 - output(nil), filter(nil)                                                                                                                              |
|       table_columns([{o: ({orders: (o.order_id, o.customer_id, o.discount, o.amount)})}]),                                                                  |
|       update([o.discount=column_conv(DECIMAL_INT,PS:(3,2),NULL,CASE WHEN c.customer_level = 'VIP' THEN cast(0.9, DECIMAL_INT(3, 2)) WHEN c.customer_level   |
|       = 'NORMAL' THEN 0.95 ELSE cast(1.0, DECIMAL_INT(3, 2)) END)])                                                                                         |
|   1 - output([o.order_id], [o.customer_id], [o.discount], [o.amount], [c.customer_level]), filter(nil), rowset=16                                           |
|   2 - output([o.order_id], [o.customer_id], [o.discount], [o.amount], [c.customer_level]), filter(nil), rowset=16                                           |
|       dop=4                                                                                                                                                 |
|   3 - output([o.order_id], [o.customer_id], [o.discount], [o.amount], [c.customer_level]), filter(nil), rowset=16                                           |
|       equal_conds([o.customer_id = c.id]), other_conds(nil)                                                                                                 |
|   4 - output([c.id], [c.customer_level]), filter(nil), rowset=16                                                                                            |
|   5 - output([c.id], [c.customer_level]), filter(nil), rowset=16                                                                                            |
|       dop=4                                                                                                                                                 |
|   6 - output([c.id], [c.customer_level]), filter(nil), rowset=16                                                                                            |
|   7 - output([c.id], [c.customer_level]), filter(nil), rowset=16                                                                                            |
|       access([c.id], [c.customer_level]), partitions(p0)                                                                                                    |
|       is_index_back=false, is_global_index=false,                                                                                                           |
|       range_key([c.id]), range(MIN ; MAX)always true                                                                                                        |
|   8 - output([o.order_id], [o.customer_id], [o.discount], [o.amount]), filter(nil), rowset=16                                                               |
|   9 - output([o.order_id], [o.customer_id], [o.discount], [o.amount]), filter(nil), rowset=16                                                               |
|       access([o.order_id], [o.customer_id], [o.discount], [o.amount]), partitions(p0)                                                                       |
|       is_index_back=false, is_global_index=false,                                                                                                           |
|       range_key([o.order_id]), range(MIN ; MAX)always true                                                                                                  |
+------------------------------------------------------------------------------------------------------------------------------------------------------------+
38 rows in set
