In OceanBase Database's Oracle-compatible mode, a distributed DML hint controls whether a DML operation is executed in a distributed manner. Note the following two important concepts:
- Distributed DML operations execute DML statements across multiple nodes, whereas non-distributed DML operations are executed locally or partition-wise on a single node.
- Parallel DML operations execute a DML statement in multiple threads within a single node.
In addition, although these hints can be used to pin execution plans, we recommend that you do not use them for that purpose. These hints are generally enabled only in OUTLINE DATA.
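To make the distinction between the two concepts concrete, the following sketch contrasts the two hint families. It assumes a partitioned table named t1; PARALLEL and ENABLE_PARALLEL_DML are the separate hints that control parallel DML and are shown only for contrast.

```sql
-- Distributed DML: the statement may be executed across multiple nodes.
UPDATE /*+ USE_DISTRIBUTED_DML */ t1 SET c2 = 0 WHERE c1 < 10;

-- Parallel DML (a different hint family): the statement is executed by
-- multiple threads, here with a degree of parallelism of 4.
UPDATE /*+ PARALLEL(4) ENABLE_PARALLEL_DML */ t1 SET c2 = 0 WHERE c1 < 10;
```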
| Hint | Description |
|---|---|
| USE_DISTRIBUTED_DML | Enables distributed DML operations. Its reverse operation is NO_USE_DISTRIBUTED_DML. |
| NO_USE_DISTRIBUTED_DML | Disables distributed DML operations. Its reverse operation is USE_DISTRIBUTED_DML. |
USE_DISTRIBUTED_DML hint
The USE_DISTRIBUTED_DML hint forces a DML operation to be executed in a distributed manner. Its reverse operation is the NO_USE_DISTRIBUTED_DML hint.
Syntax
/*+ USE_DISTRIBUTED_DML [ ( [ @ qb_name ] ) ] */
Examples
-- Create a table named tp1 that has 10 HASH partitions.
CREATE TABLE tp1(c1 INT, c2 INT, c3 INT) PARTITION BY HASH(c1) PARTITIONS 10;
-- Use the USE_DISTRIBUTED_DML hint to update the tp1 table, setting c2 to 4 for rows where c1 is less than 4.
-- The update is executed in a distributed manner.
EXPLAIN BASIC UPDATE /*+ USE_DISTRIBUTED_DML */ tp1 SET c2 = 4 WHERE c1 < 4;
+--------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+--------------------------------------------------------------------------------------------------------------------------+
| ========================================= |
| |ID|OPERATOR |NAME | |
| ----------------------------------------- |
| |0 |DISTRIBUTED UPDATE | | |
| |1 |└─PX COORDINATOR | | |
| |2 | └─EXCHANGE OUT DISTR |:EX10000| |
| |3 | └─PX PARTITION ITERATOR| | |
| |4 | └─TABLE FULL SCAN |TP1 | |
| ========================================= |
| Outputs & filters: |
| ------------------------------------- |
| 0 - output(nil), filter(nil) |
| table_columns([{TP1: ({TP1: (TP1.__pk_increment, TP1.C1, TP1.C2, TP1.C3)})}]), |
| update([TP1.C2=column_conv(NUMBER,PS:(-1,0),NULL,cast(4, NUMBER(-1, -85)))]) |
| 1 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter(nil), rowset=16 |
| 2 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter(nil), rowset=16 |
| dop=1 |
| 3 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter(nil), rowset=16 |
| force partition granule |
| 4 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter([TP1.C1 < cast(4, NUMBER(-1, -85))]), rowset=16 |
| access([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), partitions(p[0-9]) |
| is_index_back=false, is_global_index=false, filter_before_indexback[false], |
| range_key([TP1.__pk_increment]), range(MIN ; MAX)always true |
+--------------------------------------------------------------------------------------------------------------------------+
23 rows in set
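The optional @qb_name argument in the syntax scopes the hint to a named query block. As a hedged sketch of that form (the block name qb1 is an assumption, assigned here with the QB_NAME hint):

```sql
-- Name the query block qb1, then apply the hint to that block only.
UPDATE /*+ QB_NAME(qb1) USE_DISTRIBUTED_DML(@qb1) */ tp1 SET c2 = 4 WHERE c1 < 4;
```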
NO_USE_DISTRIBUTED_DML hint
The NO_USE_DISTRIBUTED_DML hint prevents a DML operation from being executed in a distributed manner. Its reverse operation is the USE_DISTRIBUTED_DML hint.
Syntax
/*+ NO_USE_DISTRIBUTED_DML [ ( [ @ qb_name ] ) ] */
Examples
-- Create a table named tp1 with 10 HASH partitions.
CREATE TABLE tp1(c1 INT, c2 INT, c3 INT) PARTITION BY HASH(c1) PARTITIONS 10;
-- Use the NO_USE_DISTRIBUTED_DML hint to execute the update locally (partition-wise) rather than in a distributed manner.
EXPLAIN BASIC UPDATE /*+ NO_USE_DISTRIBUTED_DML */ tp1 SET c2 = 4 WHERE c1 < 4;
+--------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+--------------------------------------------------------------------------------------------------------------------------+
| ======================================= |
| |ID|OPERATOR |NAME | |
| --------------------------------------- |
| |0 |PX COORDINATOR | | |
| |1 |└─EXCHANGE OUT DISTR |:EX10000| |
| |2 | └─PX PARTITION ITERATOR| | |
| |3 | └─UPDATE | | |
| |4 | └─TABLE FULL SCAN |TP1 | |
| ======================================= |
| Outputs & filters: |
| ------------------------------------- |
| 0 - output(nil), filter(nil), rowset=16 |
| 1 - output(nil), filter(nil), rowset=16 |
| dop=1 |
| 2 - output(nil), filter(nil), rowset=16 |
| partition wise, force partition granule |
| 3 - output(nil), filter(nil) |
| table_columns([{TP1: ({TP1: (TP1.__pk_increment, TP1.C1, TP1.C2, TP1.C3)})}]), |
| update([TP1.C2=column_conv(NUMBER,PS:(-1,0),NULL,cast(4, NUMBER(-1, -85)))]) |
| 4 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter([TP1.C1 < cast(4, NUMBER(-1, -85))]), rowset=16 |
| access([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), partitions(p[0-9]) |
| is_index_back=false, is_global_index=false, filter_before_indexback[false], |
| range_key([TP1.__pk_increment]), range(MIN ; MAX)always true |
+--------------------------------------------------------------------------------------------------------------------------+
23 rows in set
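As with USE_DISTRIBUTED_DML, the @qb_name argument restricts NO_USE_DISTRIBUTED_DML to a named query block. A hedged sketch (the block name qb1 is an assumption):

```sql
-- Name the query block qb1 and disable distributed execution for that block only.
UPDATE /*+ QB_NAME(qb1) NO_USE_DISTRIBUTED_DML(@qb1) */ tp1 SET c2 = 4 WHERE c1 < 4;
```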