Purpose
This statement is used to create a new table in a database.
Syntax
CREATE [hint_options] [GLOBAL TEMPORARY] TABLE table_name
(table_definition_list) [table_option_list] [partition_option] [on_commit_option]
CREATE [GLOBAL TEMPORARY] TABLE table_name
(table_definition_list) [table_option_list] [partition_option] [[MERGE_ENGINE = {delete_insert | partial_update}] table_column_group_option] [AS] select;
table_definition_list:
table_definition [, table_definition ...]
table_definition:
column_definition
| [CONSTRAINT [constraint_name]] { PRIMARY KEY | UNIQUE } (column_name) //Add constraints after creating all columns
| [CONSTRAINT [constraint_name]] FOREIGN KEY (column_name, column_name ...) references_clause constraint_state
| [CONSTRAINT [constraint_name]] CHECK(expression) constraint_state
column_definition_list:
column_definition [, column_definition ...]
column_definition:
column_name data_type
[VISIBLE | INVISIBLE] [GENERATED BY DEFAULT AS IDENTITY | GENERATED ALWAYS AS IDENTITY]
{
[DEFAULT expression]
[NULL | NOT NULL]
[CONSTRAINT [constraint_name]] [ PRIMARY KEY|UNIQUE ] //Add constraints when creating columns
[CONSTRAINT [constraint_name] CHECK(expression) constraint_state]
[CONSTRAINT [constraint_name] references_clause]
|
[GENERATED ALWAYS] AS (expression) [VIRTUAL]
[NULL | NOT NULL] [UNIQUE KEY] [[PRIMARY] KEY] [UNIQUE LOWER_KEY] [SKIP_INDEX(skip_index_option_list)]
}
skip_index_option_list:
skip_index_option [,skip_index_option ...]
skip_index_option:
MIN_MAX
| SUM
references_clause:
REFERENCES table_name [ (column_name, column_name ...) ] [ON DELETE {SET NULL | CASCADE}]
constraint_state:
[RELY | NORELY] [USING INDEX index_option_list] [ENABLE | DISABLE] [VALIDATE | NOVALIDATE]
index_option_list:
index_option [ index_option ...]
index_option:
[GLOBAL | LOCAL]
| block_size
| compression
| STORING(column_name_list)
table_option_list:
table_option [ table_option ...]
table_option:
TABLEGROUP = tablegroup_name
| block_size
| compression
| ENABLE ROW MOVEMENT
| DISABLE ROW MOVEMENT
| physical_attribute
| parallel_clause
| DUPLICATE_SCOPE [=] 'none|cluster'
| TABLE_MODE [=] 'table_mode_value'
| enable_macro_block_bloom_filter [=] {True | False}
| DYNAMIC_PARTITION_POLICY [=] (dynamic_partition_policy_list)
physical_attribute_list:
physical_attribute [physical_attribute ...]
physical_attribute:
PCTFREE [=] num
| PCTUSED num
| INITRANS num
| MAXTRANS num
| STORAGE(storage_option [storage_option] ...)
| TABLESPACE tablespace
parallel_clause:
{NOPARALLEL | PARALLEL integer}
table_mode_value:
NORMAL
| QUEUING
| MODERATE
| SUPER
| EXTREME
dynamic_partition_policy_list:
dynamic_partition_policy_option [, dynamic_partition_policy_option ...]
dynamic_partition_policy_option:
ENABLE = {true | false}
| TIME_UNIT = {'hour' | 'day' | 'week' | 'month' | 'year'}
| PRECREATE_TIME = {'-1' | '0' | 'n {hour | day | week | month | year}'}
| EXPIRE_TIME = {'-1' | '0' | 'n {hour | day | week | month | year}'}
| TIME_ZONE = {'default' | 'time_zone'}
| BIGINT_PRECISION = {'none' | 'us' | 'ms' | 's'}
compression:
NOCOMPRESS
| COMPRESS { BASIC | FOR OLTP | FOR QUERY [LOW | HIGH] | FOR ARCHIVE [LOW | HIGH]}
storage_option:
INITIAL num [K|M|G|T|P|E]
| NEXT num [K|M|G|T|P|E]
| MINEXTENTS num [K|M|G|T|P|E]
| MAXEXTENTS num [K|M|G|T|P|E]
partition_option:
PARTITION BY HASH(column_name_list)
[subpartition_option] hash_partition_define
| PARTITION BY RANGE (column_name_list)
[subpartition_option] (range_partition_list)
| PARTITION BY LIST (column_name_list)
[subpartition_option] (list_partition_list)
| PARTITION BY RANGE([column_name_list]) [SIZE('size_value')] [range_partition_list]
/*Templated subpartitioning*/
subpartition_option:
SUBPARTITION BY HASH (column_name_list) hash_subpartition_define
| SUBPARTITION BY RANGE (column_name_list) SUBPARTITION TEMPLATE
(range_subpartition_list)
| SUBPARTITION BY LIST (column_name_list) SUBPARTITION TEMPLATE
(list_subpartition_list)
/*Non-templated subpartitioning*/
subpartition_option:
SUBPARTITION BY HASH (column_name_list)
| SUBPARTITION BY RANGE (column_name_list)
| SUBPARTITION BY LIST (column_name_list)
subpartition_list:
(hash_subpartition_list)
| (range_subpartition_list)
| (list_subpartition_list)
hash_partition_define:
PARTITIONS partition_count [TABLESPACE tablespace] [compression]
| (hash_partition_list)
hash_partition_list:
hash_partition [, hash_partition ...]
hash_partition:
PARTITION [partition_name] [subpartition_list /*Only non-templated subpartitions can be defined*/]
hash_subpartition_define:
SUBPARTITIONS subpartition_count
| SUBPARTITION TEMPLATE (hash_subpartition_list)
hash_subpartition_list:
hash_subpartition [, hash_subpartition ...]
hash_subpartition:
SUBPARTITION [subpartition_name]
range_partition_list:
range_partition [, range_partition ...]
range_partition:
PARTITION [partition_name]
VALUES LESS THAN {(expression_list) | (MAXVALUE)}
[subpartition_list/*Only non-templated subpartitions can be defined*/]
[ID = num] [physical_attribute_list] [compression]
range_subpartition_list:
range_subpartition [, range_subpartition ...]
range_subpartition:
SUBPARTITION subpartition_name
VALUES LESS THAN {(expression_list) | MAXVALUE} [physical_attribute_list]
list_partition_list:
list_partition [, list_partition] ...
list_partition:
PARTITION [partition_name]
VALUES (DEFAULT | expression_list)
[subpartition_list /*Only non-templated subpartitions can be defined*/]
[ID num] [physical_attribute_list] [compression]
list_subpartition_list:
list_subpartition [, list_subpartition] ...
list_subpartition:
SUBPARTITION [partition_name] VALUES (DEFAULT | expression_list) [physical_attribute_list]
expression_list:
expression [, expression ...]
column_name_list:
column_name [, column_name ...]
partition_name_list:
partition_name [, partition_name ...]
partition_count | subpartition_count:
INT_VALUE
on_commit_option:
ON COMMIT DELETE ROWS
| ON COMMIT PRESERVE ROWS
table_column_group_option:
WITH COLUMN GROUP(all columns)
| WITH COLUMN GROUP(each column)
| WITH COLUMN GROUP(all columns, each column)
Parameters
| Parameter | Description |
|---|---|
| hint_options | Specifies the hint options. You can manually specify the APPEND, DIRECT, and NO_DIRECT direct load hints. The corresponding hint format is /*+ [APPEND | DIRECT(need_sort, max_error, load_type)] parallel(N) | NO_DIRECT */. For more information, see the Direct load by using the CREATE TABLE AS SELECT statement section in Full direct load. |
| GLOBAL TEMPORARY | Creates a temporary table. |
| DEFAULT expression | Specifies the default value of a column. expression can contain a function that references a sequence. Notice: Auto-increment columns cannot have default values. |
| BLOCK_SIZE | Specifies the microblock size of the table. |
| COMPRESSION | Specifies the storage format, compression method, and related parameters of the table. Valid values: NOCOMPRESS, COMPRESS [BASIC], COMPRESS FOR OLTP, COMPRESS FOR QUERY [LOW | HIGH], and COMPRESS FOR ARCHIVE [LOW | HIGH]. |
| tablegroup_name | Specifies the table group to which the table belongs. |
| FOREIGN KEY | Creates a foreign key for the table. If you do not specify the foreign key name, the system names it by using the table name, OBFK, and the creation time. For example, the name of the foreign key created for the t1 table at 00:00:00 on August 1, 2021 is t1_OBFK_1627747200000000. Foreign keys allow you to cross-reference related data across tables. The result of a DELETE operation on the parent table depends on the ON DELETE clause: SET NULL sets the matching foreign key values in the child table to NULL, and CASCADE deletes the matching rows from the child table. |
| VISIBLE | Specifies that the column is visible. This is the default column status. |
| INVISIBLE | Specifies that the column is invisible. When a column is set to INVISIBLE, it is not displayed by default in queries. |
| GENERATED BY DEFAULT AS IDENTITY | GENERATED ALWAYS AS IDENTITY | Specifies that the column is an auto-increment (identity) column. GENERATED ALWAYS AS IDENTITY: the system always generates the column value, and you cannot explicitly insert or update it. GENERATED BY DEFAULT AS IDENTITY: the system generates a value only when you do not explicitly specify one. Notice: The data type of the column must be a numeric type. |
| physical_attribute | PCTFREE: specifies the percentage of reserved space in macroblocks. Other attributes such as STORAGE and TABLESPACE are provided only for syntax compatibility during migration and take no effect. |
| ENABLE/DISABLE ROW MOVEMENT | Specifies whether to allow row movement between partitions for key updates. |
| ON COMMIT DELETE ROWS | Creates a transaction-level temporary table whose data is deleted when the transaction is committed. |
| ON COMMIT PRESERVE ROWS | Creates a session-level temporary table whose data is deleted when the session ends. |
| parallel_clause | Specifies the table-level degree of parallelism. NOPARALLEL: sets the degree of parallelism to 1. This is the default value. PARALLEL integer: sets the degree of parallelism to integer, which must be greater than or equal to 1. Notice: When you specify the parallelism in multiple ways, the parallelism specified by a hint takes precedence over the parallelism specified by this clause. |
| DUPLICATE_SCOPE | Specifies the replicated table attribute. Valid values: none: the table is a normal table. This is the default value. cluster: the table is a cluster-level replicated table. |
| MERGE_ENGINE = {delete_insert | partial_update} | Optional. Specifies the merge engine of the table. Valid values: delete_insert and partial_update. |
| table_column_group_option | Specifies the column-store options of the table. WITH COLUMN GROUP(all columns): stores all columns in one column group, that is, in a row-store format. WITH COLUMN GROUP(each column): stores each column in its own column group, that is, in a column-store format. WITH COLUMN GROUP(all columns, each column): stores the data redundantly in both row-store and column-store formats. |
| SKIP_INDEX | Specifies the skip index attributes of the column. Valid values: MIN_MAX: maintains the minimum value, maximum value, and null count of the column for each micro block. This is the most commonly used skip index type. SUM: maintains the sum of the column values and applies only to numeric columns. |
| TABLE_MODE | Optional. Specifies the major compaction trigger threshold and strategy, that is, the behavior after minor compactions. For more information about the values, see table_mode_value. |
| enable_macro_block_bloom_filter [=] {True | False} | Specifies whether to persist the macro-block-level bloom filter. Valid values: True: persists the macro-block-level bloom filter. False: does not persist the macro-block-level bloom filter. This is the default value. Note: For OceanBase Database V4.3.5, the … |
| DYNAMIC_PARTITION_POLICY [=] (dynamic_partition_policy_list) | Specifies the dynamic partition management attributes of the table, which enable automatic creation and deletion of partitions. dynamic_partition_policy_list is a comma-separated list of dynamic partition policy parameters. For more information, see dynamic_partition_policy_option. Note: In OceanBase Database V4.3.5, dynamic partition management attributes are supported in V4.3.5 BP2 and later. |
| PARTITION BY RANGE([column_name_list]) [SIZE('size_value')] [range_partition_list] | Specifies to create an automatically partitioned table. For more information, see the automatic partitioning syntax described in Automatic partition splitting. |
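As an illustration of combining several of the column-level and table-level options described above in one statement, the following is a minimal sketch; the table name TEST_OPT_TBL, the column names, and the chosen option values are illustrative, so adjust them to your scenario:
-- Column defaults, an invisible column, a table-level CHECK constraint,
-- and the PCTFREE and PARALLEL table options combined in one statement.
CREATE TABLE TEST_OPT_TBL (
  col1 NUMBER PRIMARY KEY,
  col2 VARCHAR2(20) DEFAULT 'N/A' NOT NULL,
  col3 NUMBER INVISIBLE,
  CONSTRAINT col3_chk CHECK(col3 >= 0) ENABLE VALIDATE
) PCTFREE 5 PARALLEL 2;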
table_mode_value
Note
All TABLE_MODE values other than NORMAL create queuing tables. QUEUING is the most basic queuing mode, and the remaining modes (MODERATE, SUPER, and EXTREME) apply increasingly proactive compaction strategies.
- NORMAL: normal mode. This is the default value. In this mode, the probability of triggering a major compaction after a minor compaction is extremely low.
- QUEUING: queuing mode. In this mode, the probability of triggering a major compaction after a minor compaction is low.
- MODERATE: moderate mode. In this mode, the probability of triggering a major compaction after a minor compaction is moderate.
- SUPER: super mode. In this mode, the probability of triggering a major compaction after a minor compaction is high.
- EXTREME: extreme mode. In this mode, the probability of triggering a major compaction after a minor compaction is extremely high.
For more information about major compactions, see Adaptive compaction.
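As an illustration, the following statement creates a queuing table by setting TABLE_MODE at creation time; the table name tbl_queuing is illustrative:
-- Create a queuing table that uses a more proactive compaction strategy than NORMAL.
CREATE TABLE tbl_queuing (col1 INT PRIMARY KEY, col2 VARCHAR2(50)) TABLE_MODE = 'QUEUING';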
dynamic_partition_policy_option
ENABLE = {true | false}: Optional. Specifies whether to enable dynamic partition management. This option can be modified. Valid values:
- true: enables dynamic partition management. This is the default value.
- false: disables dynamic partition management.
TIME_UNIT = {'hour' | 'day' | 'week' | 'month' | 'year'}: Required. Specifies the time unit of the partitions, that is, the interval between automatically created partition boundaries. This option cannot be modified. Valid values:
- hour: partitioned by hour.
- day: partitioned by day.
- week: partitioned by week.
- month: partitioned by month.
- year: partitioned by year.
PRECREATE_TIME = {'-1' | '0' | 'n {hour | day | week | month | year}'}: Optional. Specifies the pre-creation time. Each time dynamic partition management is scheduled, partitions are pre-created so that the upper bound of the largest partition is greater than now() + precreate_time. This option can be modified. Valid values:
- -1: no partitions are pre-created. This is the default value.
- 0: only the current partition is pre-created.
- n {hour | day | week | month | year}: pre-creates partitions for the specified time span. For example, 3 hour means that partitions are pre-created for the next 3 hours.
Note
- When multiple partitions need to be pre-created, the interval between partition boundaries is determined by TIME_UNIT.
- The boundary of the first pre-created partition is the current maximum partition boundary rounded up to the next TIME_UNIT.
EXPIRE_TIME = {'-1' | '0' | 'n {hour | day | week | month | year}'}: Optional. Specifies the partition expiration time. Each time dynamic partition management is scheduled, all expired partitions whose upper bounds are less than now() - expire_time are deleted. This option can be modified. Valid values:
- -1: partitions never expire. This is the default value.
- 0: all partitions before the current partition expire.
- n {hour | day | week | month | year}: the partition expiration time. For example, 1 day means that partitions expire after 1 day.
TIME_ZONE = {'default' | 'time_zone'}: Optional. Specifies the time zone used when the current time is compared with a partition key of a time type (date or timestamp). This option cannot be modified. Valid values:
- default: no additional time zone is set and the tenant time zone is used. This is the default value. For partition key types other than the time types listed above, time_zone must be set to default.
- time_zone: a custom time zone offset, for example, +8:00.
BIGINT_PRECISION = {'none' | 'us' | 'ms' | 's'}: Optional. Specifies the timestamp precision of a number-type partition key. This option cannot be modified. Valid values:
- none: no precision; the partition key is not of the number type. This is the default value.
- us: microsecond precision.
- ms: millisecond precision.
- s: second precision.
For more information about creating dynamic partitioned tables, see Create a dynamic partitioned table.
Here is an example:
CREATE TABLE tbl2 (col1 INT, col2 TIMESTAMP)
DYNAMIC_PARTITION_POLICY(
ENABLE = true,
TIME_UNIT = 'hour',
PRECREATE_TIME = '3 hour',
EXPIRE_TIME = '1 day',
TIME_ZONE = '+8:00',
BIGINT_PRECISION = 'none')
PARTITION BY RANGE (col2)(
PARTITION P0 VALUES LESS THAN (TIMESTAMP '2024-11-11 13:30:00')
);
Examples
Create a table named TEST_TBL1.
obclient> CREATE TABLE TEST_TBL1 (col1 INT PRIMARY KEY, col2 VARCHAR(50));

Create a table named TEST_TBL2 and hash-partition it into 8 partitions.
obclient> CREATE TABLE TEST_TBL2 (col1 INT PRIMARY KEY, col2 INT) PARTITION BY HASH(col1) PARTITIONS 8;

Create a table named TEST_TBL3 with range partitions and hash subpartitions.
obclient> CREATE TABLE TEST_TBL3 (col1 INT, col2 INT, col3 INT) PARTITION BY RANGE(col1) SUBPARTITION BY HASH(col2) SUBPARTITIONS 5 (PARTITION p0 VALUES LESS THAN(0), PARTITION p1 VALUES LESS THAN(100));

Create a table named TEST_TBL4, enable encoding, set the compression algorithm to zstd, and set the macroblock reserved space to 5%.
obclient> CREATE TABLE TEST_TBL4 (col1 INT, col2 INT, col3 VARCHAR(64)) COMPRESS FOR ARCHIVE PCTFREE 5;

Create a transaction-level temporary table named TEST_TBL5.
obclient> CREATE GLOBAL TEMPORARY TABLE TEST_TBL5(col1 INT) ON COMMIT DELETE ROWS;

Create a table named TEST_TBL6 and add a constraint.
obclient> CREATE TABLE TEST_TBL6 (col1 INT, col2 INT, col3 INT, CONSTRAINT equal_check1 CHECK(col2 = col3 * 2) ENABLE VALIDATE);

Create a foreign key on the REF_T2 table that references the REF_T1 table. When rows in the parent table REF_T1 are deleted, the matching foreign key values in the child table REF_T2 are set to NULL.
obclient> CREATE TABLE REF_T1(c1 INT PRIMARY KEY, C2 INT);
obclient> CREATE TABLE REF_T2(c1 INT PRIMARY KEY, C2 INT, FOREIGN KEY(c2) REFERENCES ref_t1(c1) ON DELETE SET NULL);

Create a RANGE + RANGE subpartitioned table named TEST_TBL7.
obclient> CREATE TABLE TEST_TBL7 (col1 INT, col2 INT, col3 INT)
PARTITION BY RANGE(col1) SUBPARTITION BY RANGE(col2)
( PARTITION p0 VALUES LESS THAN(100)
  ( SUBPARTITION p0_r1 VALUES LESS THAN(2019),
    SUBPARTITION p0_r2 VALUES LESS THAN(2020),
    SUBPARTITION p0_r3 VALUES LESS THAN(2021) ),
  PARTITION p1 VALUES LESS THAN(200)
  ( SUBPARTITION p1_r1 VALUES LESS THAN(2019),
    SUBPARTITION p1_r2 VALUES LESS THAN(2020),
    SUBPARTITION p1_r3 VALUES LESS THAN(2021) ),
  PARTITION p2 VALUES LESS THAN(300)
  ( SUBPARTITION p2_r1 VALUES LESS THAN(2019),
    SUBPARTITION p2_r2 VALUES LESS THAN(2020),
    SUBPARTITION p2_r3 VALUES LESS THAN(2021) ) );

Create a table named TEST_TBL8 with a degree of parallelism of 3.
obclient> CREATE TABLE TEST_TBL8(col1 INT PRIMARY KEY, col2 INT) PARALLEL 3;

Set a function as the default value of a column.
obclient> CREATE SEQUENCE SEQ_PERSONIPTVSEQ START WITH 1 MINVALUE 1 MAXVALUE 10 INCREMENT BY 2 NOCYCLE NOORDER CACHE 30;
obclient> SELECT LPAD(SEQ_PERSONIPTVSEQ.NEXTVAL,18,TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS')) FROM DUAL;
The return result is as follows:
+----------------------------------------------------------------------------+
| LPAD(SEQ_PERSONIPTVSEQ.NEXTVAL,18,TO_CHAR(SYSDATE,'YYYY-MM-DDHH24:MI:SS')) |
+----------------------------------------------------------------------------+
| 2025-04-08 19:35:1 |
+----------------------------------------------------------------------------+
obclient> CREATE TABLE FUNC_DEFAULT_TEST ( OID NUMBER(20,0) DEFAULT LPAD(SEQ_PERSONIPTVSEQ.NEXTVAL,18,TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS')) NOT NULL);

Create a cluster-level replicated table named DUP_T1. You can insert data into and read data from a replicated table in the same way as you do with a normal table. For a read request, if a proxy is used, the request may be routed to any OBServer node. If you connect directly to an OBServer node, the request is executed on that node as long as the local replica is readable.
obclient> CREATE TABLE DUP_T1(c1 int) DUPLICATE_SCOPE = 'cluster';
obclient> INSERT INTO DUP_T1 VALUES(1);
obclient> SELECT * FROM DUP_T1;
The return result is as follows:
+------+
| C1 |
+------+
| 1 |
+------+

Create a columnstore table named tbl1_cg.
obclient> CREATE TABLE tbl1_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(each column);

Specify the SKIP_INDEX attribute for columns when you create a table.
obclient> CREATE TABLE TEST_INDEX( col1 NUMBER SKIP_INDEX(MIN_MAX, SUM), col2 FLOAT SKIP_INDEX(MIN_MAX), col3 VARCHAR2(1024) SKIP_INDEX(MIN_MAX), col4 CHAR(10) );

Create a table named TEST_TBL9 with the col1 column of the NUMBER data type. Specify that the operation be performed with a parallelism of 5 and that the data of the new table TEST_TBL9 be sourced from the query result of the TEST_TBL8 table.
obclient> CREATE /*+ parallel(5) */ TABLE TEST_TBL9 (col1 NUMBER) AS SELECT col1 FROM TEST_TBL8;

Create a table named TEST_TBL10 with the col1 column as an auto-increment column and set it as the primary key.
obclient> CREATE TABLE TEST_TBL10 ( col1 INT GENERATED BY DEFAULT AS IDENTITY, col2 VARCHAR2(50), PRIMARY KEY (col1) );

Create a table named tb with the persistent macro-block-level bloom filter enabled.
CREATE TABLE tb(c1 INT PRIMARY KEY, c2 INT) enable_macro_block_bloom_filter = True;
Limitations on the use of global temporary tables in Oracle compatible mode
- Global temporary tables are widely used in various business scenarios in Oracle compatible mode and provide basic correctness and functionality.
- The primary purpose of using temporary tables is to ensure compatibility and reduce the need for business modifications. They are suitable for limited scenarios where performance requirements are not high. However, if the business scenario can be changed to use regular tables, that would be preferable.
Performance and stability
- In terms of SQL execution efficiency, temporary tables are similar to regular tables, without any significant advantages.
- When a transaction ends or a session is disconnected, temporary tables require additional data cleanup, which introduces extra overhead.
- The checks and cleanups performed on temporary tables during login can put pressure on login threads, potentially extending login times. In severe cases, it may even prevent login.
Create a temporary table
When creating a temporary table, the system will rewrite the statement by default:
- Add the SYS_SESSION_ID column as the primary key.
- Add the SYS_SESS_CREATE_TIME column as a regular column.
- Create a hash-partitioned table with SYS_SESSION_ID as the partitioning key, containing a fixed number of 16 partitions.
For example, the statement
obclient> CREATE GLOBAL TEMPORARY TABLE TEST_TBL11(
c1 INT,
c2 INT,
PRIMARY KEY(c1)
);
will be rewritten as follows:
obclient> CREATE GLOBAL TEMPORARY TABLE TEST_TBL11(
SYS_SESSION_ID INT,
SYS_SESS_CREATE_TIME INT,
c1 INT,
c2 INT,
PRIMARY KEY(SYS_SESSION_ID, c1)
)
PARTITION BY HASH(SYS_SESSION_ID) PARTITIONS 16;
DML and query statements for temporary tables
When you execute the INSERT statement, the system will insert the session ID and session creation time of the current session into the SYS_SESSION_ID and SYS_SESS_CREATE_TIME columns by default.
When you execute the UPDATE, DELETE, or SELECT statement, the system will add the filter condition "SYS_SESSION_ID = session ID of the current session" to the statement by default. This condition allows the SQL optimizer to perform partition pruning and determine the query range.
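For example, assuming the rewritten TEST_TBL11 table shown above and an illustrative session ID of 3221500001, a query issued by the application is conceptually executed with the added session filter:
-- Statement issued by the application:
SELECT c1, c2 FROM TEST_TBL11 WHERE c1 = 1;
-- Conceptual form after the system adds the session filter:
SELECT c1, c2 FROM TEST_TBL11 WHERE SYS_SESSION_ID = 3221500001 AND c1 = 1;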
Data cleanup for temporary tables
- For temporary tables with the ON COMMIT DELETE ROWS option (transaction-level temporary tables, which is also the default option), when a transaction ends, a new transaction is started to delete the data in the temporary table by executing the DELETE statement.
- For temporary tables with the ON COMMIT PRESERVE ROWS option (session-level temporary tables), when a session is disconnected, the data in the temporary table is deleted by executing the DELETE statement (see the sketch after this list).
- Because session IDs may be reused, in OceanBase Database V3.2.4 BP4 and earlier versions, the system checks the data of the current session ID during login and performs additional cleanup if necessary.
- The login check and cleanup based on non-unique session IDs may cause issues, such as the inability to log in to the cluster.
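Conceptually, the cleanup of the rewritten TEST_TBL11 table shown above is equivalent to a DELETE statement filtered by the session ID (the value 3221500001 is illustrative):
-- Conceptual cleanup executed when the transaction or session ends:
DELETE FROM TEST_TBL11 WHERE SYS_SESSION_ID = 3221500001;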
Routing of temporary tables
- ON COMMIT DELETE ROWS (transaction-level temporary tables): access to the temporary table within a transaction can only be routed to the node where the transaction was initiated.
- ON COMMIT PRESERVE ROWS (session-level temporary tables): after a session accesses a temporary table, the OBServer node notifies the proxy, which then routes subsequent requests from that session to the current node.
Drop a temporary table
You can drop a temporary table while executing a DML statement. All data in the temporary table will be permanently deleted. This behavior differs from that in Oracle, where the DROP operation is not allowed until all sessions release their resources on the temporary table.
Support status of cross-feature functionalities
| Feature | V3.2.4 BP4 and earlier | V3.2.4 BP5 and later V3.2.x versions | V4.2.0 |
|---|---|---|---|
| Plan sharing among sessions | No | No | Yes |
| Login without triggering checks and cleanups | No | Yes | Yes |
| MERGE INTO statement | No | No | Yes |
Mitigation measures for serious issues
Unable to log in
1. Stop the services related to temporary tables, delete the temporary tables, or perform a major compaction. In most cases, the issue can be resolved this way. If not, proceed to step 2.
2. Restart the server where login is not possible.
Inability to reuse plans leads to PL cache expansion
For example, statements that contain temporary tables in PL definitions:
obclient> CREATE OR REPLACE PROCEDURE PRO_1
AS
var1 VARCHAR2(100);
BEGIN
-- Use dynamic SQL to create a temporary table.
EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE temp_table (
col1 VARCHAR2(100)
) ON COMMIT DELETE ROWS';
-- Dynamically insert data.
EXECUTE IMMEDIATE 'INSERT INTO temp_table VALUES (''xxx'')';
-- Dynamically query data.
EXECUTE IMMEDIATE 'SELECT col1 FROM temp_table WHERE ROWNUM = 1' INTO var1;
DBMS_OUTPUT.PUT_LINE(var1);
END PRO_1;
/
Different sessions cannot share plans when they access the temporary table. Therefore, each session must compile the procedure PRO_1 and generate its own plan cache entries, which may cause stability issues. You can change the SQL statements that reference the temporary table to dynamic SQL statements, as shown above, to avoid this issue.
Data not cleared after a failure
Failures may leave residual data. Currently, there is no automatic cleanup method, and the residual data generally do not affect usage. If too much residual data exists, you can drop the temporary table and rebuild it.
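For example, if the temp_table temporary table from the preceding procedure accumulates too much residual data, a minimal sketch of the rebuild is as follows; adjust the column definitions and the ON COMMIT option to match the original table:
-- Drop the temporary table with residual data and recreate it.
obclient> DROP TABLE temp_table;
obclient> CREATE GLOBAL TEMPORARY TABLE temp_table (col1 VARCHAR2(100)) ON COMMIT DELETE ROWS;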
Read-only and read/write tables in an Oracle tenant
In an Oracle tenant, you can execute the CREATE TABLE statement to create tables with the READ ONLY or READ WRITE attribute. You can also execute the ALTER TABLE statement to change the read/write attribute of a table.
Notice
A user with the SUPER privilege cannot perform the related operations. We recommend that you use a normal user.
Procedure:
1. Create a normal user.
CREATE USER test1 IDENTIFIED BY "12345";
2. Grant the connection and table creation privileges to the user.
GRANT CREATE SESSION TO test1;
GRANT CREATE TABLE TO test1;
3. Connect to OceanBase Database as the user.
obclient -hxxx.xx.xxx.xxx -P2881 -utest1@oracle001 -A
4. Create a read-only table.
CREATE TABLE tb_readonly1(id INT) READ ONLY;
5. Try to insert data into the read-only table (expected to fail).
INSERT INTO tb_readonly1 VALUES (1);
-- Expected error: ORA-00600: internal error code, arguments: -5235, The table 'TEST1.TB_READONLY1' is read only so it cannot execute this statement
6. Create a read/write table.
CREATE TABLE tb_readwrite1(id INT) READ WRITE;
7. Insert data into the read/write table (expected to succeed).
INSERT INTO tb_readwrite1 VALUES (99),(98);
-- Expected result: Query OK, 2 rows affected (0.002 sec)
-- Records: 2  Duplicates: 0  Warnings: 0
8. Convert the read/write table to a read-only table.
ALTER TABLE tb_readwrite1 READ ONLY;
9. Try to insert data into the converted read-only table (expected to fail).
INSERT INTO tb_readwrite1 VALUES (96),(97);
-- Expected error: ORA-00600: internal error code, arguments: -5235, The table 'TEST1.TB_READWRITE1' is read only so it cannot execute this statement