In OceanBase Database in Oracle mode, you can use distributed DML hints to control whether DML operations are performed in a distributed manner. Please note the following two important concepts:
- Distributed DML: refers specifically to a DML operation executed across multiple nodes, as opposed to a non-distributed DML operation executed locally, or in a partition-wise manner, on a single node.
- Parallel DML: refers to a single DML operation executed in parallel by multiple threads on a single node.
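The two behaviors above are requested with different hints. A minimal sketch of the contrast, assuming the table `tp1` created in the examples below (`ENABLE_PARALLEL_DML` and `PARALLEL` are separate OceanBase hints that control parallel, not distributed, execution):

```sql
-- Parallel DML: one UPDATE statement split across multiple worker threads.
UPDATE /*+ ENABLE_PARALLEL_DML PARALLEL(4) */ tp1 SET c2 = 4 WHERE c1 < 4;

-- Distributed DML: the UPDATE executes across the nodes hosting tp1's partitions.
UPDATE /*+ USE_DISTRIBUTED_DML */ tp1 SET c2 = 4 WHERE c1 < 4;
```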
Additionally, although these hints can be used to fix execution plans, direct use is generally not recommended; they typically appear only in OUTLINE DATA.
| Hint Name | Description |
|---|---|
| USE_DISTRIBUTED_DML | Enables distributed DML operations. The opposite hint is NO_USE_DISTRIBUTED_DML. |
| NO_USE_DISTRIBUTED_DML | Disables distributed DML operations. The opposite hint is USE_DISTRIBUTED_DML. |
USE_DISTRIBUTED_DML Hint
The USE_DISTRIBUTED_DML hint forces DML operations to be executed in a distributed manner. Its opposite is the NO_USE_DISTRIBUTED_DML hint.
Syntax
/*+ USE_DISTRIBUTED_DML [ ( [ @ qb_name ] ) ] */
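The optional `@qb_name` argument restricts the hint to a named query block. A hedged sketch, assuming the query block name `upd1` is assigned with the QB_NAME hint (the name itself is arbitrary):

```sql
-- Name the update's query block upd1, then target it from the hint.
UPDATE /*+ QB_NAME(upd1) USE_DISTRIBUTED_DML(@upd1) */ tp1 SET c2 = 4 WHERE c1 < 4;
```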
Examples
-- Create a table tp1 with 10 hash partitions.
CREATE TABLE tp1(c1 INT, c2 INT, c3 INT) PARTITION BY HASH(c1) PARTITIONS 10;
-- Use the USE_DISTRIBUTED_DML hint to update tp1, setting c2 to 4 for rows where c1 is less than 4.
-- The update is executed in a distributed manner.
EXPLAIN BASIC UPDATE /*+ USE_DISTRIBUTED_DML */ tp1 SET c2 = 4 WHERE c1 < 4;
+--------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+--------------------------------------------------------------------------------------------------------------------------+
| ========================================= |
| |ID|OPERATOR |NAME | |
| ----------------------------------------- |
| |0 |DISTRIBUTED UPDATE | | |
| |1 |└─PX COORDINATOR | | |
| |2 | └─EXCHANGE OUT DISTR |:EX10000| |
| |3 | └─PX PARTITION ITERATOR| | |
| |4 | └─TABLE FULL SCAN |TP1 | |
| ========================================= |
| Outputs & filters: |
| ------------------------------------- |
| 0 - output(nil), filter(nil) |
| table_columns([{TP1: ({TP1: (TP1.__pk_increment, TP1.C1, TP1.C2, TP1.C3)})}]), |
| update([TP1.C2=column_conv(NUMBER,PS:(-1,0),NULL,cast(4, NUMBER(-1, -85)))]) |
| 1 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter(nil), rowset=16 |
| 2 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter(nil), rowset=16 |
| dop=1 |
| 3 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter(nil), rowset=16 |
| force partition granule |
| 4 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter([TP1.C1 < cast(4, NUMBER(-1, -85))]), rowset=16 |
| access([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), partitions(p[0-9]) |
| is_index_back=false, is_global_index=false, filter_before_indexback[false], |
| range_key([TP1.__pk_increment]), range(MIN ; MAX)always true |
+--------------------------------------------------------------------------------------------------------------------------+
23 rows in set
NO_USE_DISTRIBUTED_DML Hint
The NO_USE_DISTRIBUTED_DML hint prevents DML operations from being executed in a distributed manner. Its opposite is the USE_DISTRIBUTED_DML hint.
Syntax
/*+ NO_USE_DISTRIBUTED_DML [ ( [ @ qb_name ] ) ] */
Examples
-- Create a hash-partitioned table tp1 with 10 partitions.
CREATE TABLE tp1(c1 INT, c2 INT, c3 INT) PARTITION BY HASH(c1) PARTITIONS 10;
-- Use the NO_USE_DISTRIBUTED_DML hint to prevent distributed execution of the update.
-- The update is executed in a non-distributed manner.
EXPLAIN BASIC UPDATE /*+ NO_USE_DISTRIBUTED_DML */ tp1 SET c2 = 4 WHERE c1 < 4;
+--------------------------------------------------------------------------------------------------------------------------+
| Query Plan |
+--------------------------------------------------------------------------------------------------------------------------+
| ======================================= |
| |ID|OPERATOR |NAME | |
| --------------------------------------- |
| |0 |PX COORDINATOR | | |
| |1 |└─EXCHANGE OUT DISTR |:EX10000| |
| |2 | └─PX PARTITION ITERATOR| | |
| |3 | └─UPDATE | | |
| |4 | └─TABLE FULL SCAN |TP1 | |
| ======================================= |
| Outputs & filters: |
| ------------------------------------- |
| 0 - output(nil), filter(nil), rowset=16 |
| 1 - output(nil), filter(nil), rowset=16 |
| dop=1 |
| 2 - output(nil), filter(nil), rowset=16 |
| partition wise, force partition granule |
| 3 - output(nil), filter(nil) |
| table_columns([{TP1: ({TP1: (TP1.__pk_increment, TP1.C1, TP1.C2, TP1.C3)})}]), |
| update([TP1.C2=column_conv(NUMBER,PS:(-1,0),NULL,cast(4, NUMBER(-1, -85)))]) |
| 4 - output([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), filter([TP1.C1 < cast(4, NUMBER(-1, -85))]), rowset=16 |
| access([TP1.__pk_increment], [TP1.C1], [TP1.C2], [TP1.C3]), partitions(p[0-9]) |
| is_index_back=false, is_global_index=false, filter_before_indexback[false], |
| range_key([TP1.__pk_increment]), range(MIN ; MAX)always true |
+--------------------------------------------------------------------------------------------------------------------------+
23 rows in set