Purpose
This statement is used to create a new table in the database.
Syntax
CREATE [hint_options] [GLOBAL TEMPORARY] TABLE table_name
(table_definition_list) [table_option_list] [partition_option] [on_commit_option]
CREATE [GLOBAL TEMPORARY] TABLE table_name
(table_definition_list) [table_option_list] [partition_option] [[MERGE_ENGINE = {delete_insert | partial_update}] table_column_group_option] [AS] select;
table_definition_list:
table_definition [, table_definition ...]
table_definition:
column_definition
| [CONSTRAINT [constraint_name]] { PRIMARY KEY | UNIQUE } (column_name) //Add constraints after all columns are created
| [CONSTRAINT [constraint_name]] FOREIGN KEY (column_name [, column_name ...]) references_clause constraint_state
| [CONSTRAINT [constraint_name]] CHECK(expression) constraint_state
column_definition_list:
column_definition [, column_definition ...]
column_definition:
column_name data_type
[VISIBLE | INVISIBLE] [GENERATED BY DEFAULT AS IDENTITY | GENERATED ALWAYS AS IDENTITY]
{
[DEFAULT expression]
[NULL | NOT NULL]
[CONSTRAINT [constraint_name]] [ PRIMARY KEY|UNIQUE ] //Add constraints when creating columns
[CONSTRAINT [constraint_name] CHECK(expression) constraint_state]
[CONSTRAINT [constraint_name] references_clause]
|
[GENERATED ALWAYS] AS (expression) [VIRTUAL]
[NULL | NOT NULL] [UNIQUE KEY] [[PRIMARY] KEY] [UNIQUE LOWER_KEY] [SKIP_INDEX(skip_index_option_list)]
}
skip_index_option_list:
skip_index_option [,skip_index_option ...]
skip_index_option:
MIN_MAX
| SUM
references_clause:
REFERENCES table_name [ (column_name, column_name ...) ] [ON DELETE {SET NULL | CASCADE}]
constraint_state:
[RELY | NORELY] [USING INDEX index_option_list] [ENABLE | DISABLE] [VALIDATE | NOVALIDATE]
index_option_list:
index_option [ index_option ...]
index_option:
[GLOBAL | LOCAL]
| block_size
| compression
| STORING(column_name_list)
table_option_list:
table_option [ table_option ...]
table_option:
TABLEGROUP = tablegroup_name
| block_size
| compression
| ENABLE ROW MOVEMENT
| DISABLE ROW MOVEMENT
| physical_attribute
| parallel_clause
| DUPLICATE_SCOPE [=] 'none|cluster'
| TABLE_MODE [=] 'table_mode_value'
| enable_macro_block_bloom_filter [=] {True | False}
| DYNAMIC_PARTITION_POLICY [=] (dynamic_partition_policy_list)
| MICRO_BLOCK_FORMAT_VERSION [=] {1|2}
physical_attribute_list:
physical_attribute [physical_attribute ...]
physical_attribute:
PCTFREE [=] num
| PCTUSED num
| INITRANS num
| MAXTRANS num
| STORAGE(storage_option [storage_option] ...)
| TABLESPACE tablespace
parallel_clause:
{NOPARALLEL | PARALLEL integer}
table_mode_value:
NORMAL
| QUEUING
| MODERATE
| SUPER
| EXTREME
dynamic_partition_policy_list:
dynamic_partition_policy_option [, dynamic_partition_policy_option ...]
dynamic_partition_policy_option:
ENABLE = {true | false}
| TIME_UNIT = {'hour' | 'day' | 'week' | 'month' | 'year'}
| PRECREATE_TIME = {'-1' | '0' | 'n {hour | day | week | month | year}'}
| EXPIRE_TIME = {'-1' | '0' | 'n {hour | day | week | month | year}'}
| TIME_ZONE = {'default' | 'time_zone'}
| BIGINT_PRECISION = {'none' | 'us' | 'ms' | 's'}
compression:
NOCOMPRESS
| COMPRESS { BASIC | FOR OLTP | FOR QUERY [LOW | HIGH] | FOR ARCHIVE [LOW | HIGH]}
storage_option:
INITIAL num [K|M|G|T|P|E]
| NEXT num [K|M|G|T|P|E]
| MINEXTENTS num [K|M|G|T|P|E]
| MAXEXTENTS num [K|M|G|T|P|E]
partition_option:
PARTITION BY HASH(column_name_list)
[subpartition_option] hash_partition_define
| PARTITION BY RANGE (column_name_list)
[subpartition_option] (range_partition_list)
| PARTITION BY LIST (column_name_list)
[subpartition_option] (list_partition_list)
| PARTITION BY RANGE([column_name_list]) [SIZE('size_value')] [range_partition_list]
| PARTITION BY RANGE (column_name) INTERVAL (expr)
[subpartition_option]
(range_partition)
/*Template subpartitioning*/
subpartition_option:
SUBPARTITION BY HASH (column_name_list) hash_subpartition_define
| SUBPARTITION BY RANGE (column_name_list) SUBPARTITION TEMPLATE
(range_subpartition_list)
| SUBPARTITION BY LIST (column_name_list) SUBPARTITION TEMPLATE
(list_subpartition_list)
/*Non-template subpartitioning*/
subpartition_option:
SUBPARTITION BY HASH (column_name_list)
| SUBPARTITION BY RANGE (column_name_list)
| SUBPARTITION BY LIST (column_name_list)
subpartition_list:
(hash_subpartition_list)
| (range_subpartition_list)
| (list_subpartition_list)
hash_partition_define:
PARTITIONS partition_count [TABLESPACE tablespace] [compression]
| (hash_partition_list)
hash_partition_list:
hash_partition [, hash_partition ...]
hash_partition:
PARTITION [partition_name] [subpartition_list /*Only applicable for non-template subpartitioning*/]
hash_subpartition_define:
SUBPARTITIONS subpartition_count
| SUBPARTITION TEMPLATE (hash_subpartition_list)
hash_subpartition_list:
hash_subpartition [, hash_subpartition ...]
hash_subpartition:
SUBPARTITION [subpartition_name]
range_partition_list:
range_partition [, range_partition ...]
range_partition:
PARTITION [partition_name]
VALUES LESS THAN {(expression_list) | (MAXVALUE)}
[subpartition_list/*Only applicable for non-template subpartitioning*/]
[ID = num] [physical_attribute_list] [compression]
range_subpartition_list:
range_subpartition [, range_subpartition ...]
range_subpartition:
SUBPARTITION subpartition_name
VALUES LESS THAN {(expression_list) | MAXVALUE} [physical_attribute_list]
list_partition_list:
list_partition [, list_partition] ...
list_partition:
PARTITION [partition_name]
VALUES (DEFAULT | expression_list)
[subpartition_list /*Only applicable for non-template subpartitioning*/]
[ID num] [physical_attribute_list] [compression]
list_subpartition_list:
list_subpartition [, list_subpartition] ...
list_subpartition:
SUBPARTITION [partition_name] VALUES (DEFAULT | expression_list) [physical_attribute_list]
expression_list:
expression [, expression ...]
column_name_list:
column_name [, column_name ...]
partition_name_list:
partition_name [, partition_name ...]
partition_count | subpartition_count:
INT_VALUE
on_commit_option:
ON COMMIT DELETE ROWS
| ON COMMIT PRESERVE ROWS
table_column_group_option:
WITH COLUMN GROUP(all columns)
| WITH COLUMN GROUP(each column)
| WITH COLUMN GROUP(all columns, each column)
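As a hedged illustration of the CREATE TABLE ... AS SELECT form in the syntax above, the following sketch combines the MERGE_ENGINE option and a column group option. The table names tbl_di and src_tbl are hypothetical and used only for illustration:

```sql
-- Create a delete_insert table with per-column column groups,
-- populated from the query results of src_tbl (hypothetical source table).
CREATE TABLE tbl_di
  MERGE_ENGINE = delete_insert
  WITH COLUMN GROUP(each column)
AS SELECT * FROM src_tbl;
```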
Parameters
| Parameter | Description |
|---|---|
| hint_options | Optional. Specifies the hint options. You can manually specify a bypass import hint, including APPEND, DIRECT, and NO_DIRECT. The corresponding hint format is `/*+ [APPEND | DIRECT(need_sort, max_error, load_type)] parallel(N) | NO_DIRECT */`. For more information about bypass importing data by using the CREATE TABLE AS SELECT statement, see the Use CREATE TABLE AS SELECT to bypass import data section in Full bypass import. |
| GLOBAL TEMPORARY | Creates a temporary table. |
| DEFAULT expression | Specifies the default value of a column. expression supports a function expression that contains a sequence. Notice: Auto-increment columns cannot have default values. |
| BLOCK_SIZE | Specifies the microblock size of a table. |
| COMPRESSION | Specifies the storage format (Flat or Encoding) and the compression method. Valid values: NOCOMPRESS, COMPRESS BASIC, COMPRESS FOR OLTP, COMPRESS FOR QUERY [LOW \| HIGH], and COMPRESS FOR ARCHIVE [LOW \| HIGH]. See the compression rule in the syntax above. |
| tablegroup_name | Specifies the table group to which the table belongs. |
| FOREIGN KEY | Specifies the foreign key of the created table. If you do not specify a foreign key name, the system automatically generates one in the format of table name + _OBFK + creation time. For example, the foreign key created on August 1, 2021, for the t1 table is named t1_OBFK_1627747200000000. A foreign key allows cross-table referencing. When a DELETE operation affects key values of rows in the parent table that match rows in the child table, the result depends on the ON DELETE clause of the foreign key: ON DELETE SET NULL sets the matching foreign key values in the child table to NULL; ON DELETE CASCADE deletes the matching rows in the child table; if no ON DELETE clause is specified, the DELETE operation is rejected. |
| VISIBLE | Indicates that the column is visible. This is the default column state. |
| INVISIBLE | Indicates that the column is invisible. After a column is set to INVISIBLE, it will not be displayed by default in queries. |
| GENERATED BY DEFAULT AS IDENTITY \| GENERATED ALWAYS AS IDENTITY | Optional. Specifies a column as an auto-increment column. Valid values: GENERATED BY DEFAULT AS IDENTITY: the system generates a value only when you do not explicitly specify one. GENERATED ALWAYS AS IDENTITY: the system always generates the value; you cannot explicitly insert a value into the column. Notice: The data type of the column must be a numeric type. |
| physical_attribute | PCTFREE: specifies the percentage of space reserved for macroblocks. Other attributes such as STORAGE and TABLESPACE are only for syntax compatibility and do not take effect. |
| ENABLE/DISABLE ROW MOVEMENT | Specifies whether to allow row movement between partitions for partitioning key updates. |
| ON COMMIT DELETE ROWS | Specifies a transaction-level temporary table. The data is deleted when the transaction is committed. |
| ON COMMIT PRESERVE ROWS | Specifies a session-level temporary table. The data is deleted when the session ends. |
| parallel_clause | Specifies the degree of parallelism for the table. Valid values: NOPARALLEL: the default value, which specifies a parallelism of 1. PARALLEL integer: specifies a parallelism of integer, which must be at least 1. Notice: When you specify the parallelism, the parallelism specified by a hint takes precedence over the parallelism specified by the PARALLEL table option. |
| DUPLICATE_SCOPE | Specifies the replication attribute of the table. Valid values: none: the default value, which specifies a regular table. cluster: specifies a cluster-level replicated table, whose modifications are synchronized to all its replicas in the tenant. |
| MERGE_ENGINE = {delete_insert \| partial_update} | Optional. Specifies the update model of the table. Valid values: partial_update: the default value; an update modifies only the changed columns. delete_insert: an update is performed as a delete of the old row followed by an insert of the new row. Note: After you specify the MERGE_ENGINE attribute for a table, you cannot change it. |
| table_column_group_option | Specifies the columnar storage options for the table. Valid values: WITH COLUMN GROUP(all columns): row-based storage. WITH COLUMN GROUP(each column): a column group for each column, that is, columnar storage. WITH COLUMN GROUP(all columns, each column): redundant row-based and columnar storage. |
| SKIP_INDEX | Specifies the Skip Index attribute of the column. Valid values: MIN_MAX: maintains the maximum value, minimum value, and null count of the column. SUM: maintains the sum of the column. Notice: The SUM skip index applies only to numeric columns. |
| TABLE_MODE | Optional. Specifies the merge trigger threshold and merge strategy, which control the merge behavior after data is dumped. For more information about the values, see table_mode_value. |
| enable_macro_block_bloom_filter [=] {True \| False} | Specifies whether to persist the macroblock-level Bloom filter. Valid values: True: the macroblock-level Bloom filter is persisted. False: the default value; the macroblock-level Bloom filter is not persisted. |
| DYNAMIC_PARTITION_POLICY [=] (dynamic_partition_policy_list) | Specifies the dynamic partition management attribute of the table, enabling automatic creation and deletion of partitions. dynamic_partition_policy_list is a comma-separated list of configurable parameters for dynamic partition strategies. For more information, see dynamic_partition_policy_option. |
| MICRO_BLOCK_FORMAT_VERSION | Optional. Specifies the microblock storage format version of the table. Valid values: 1: the original microblock format. 2: the new flat format. Note: This parameter was introduced in V4.4.1. |
| PARTITION BY RANGE([column_name_list]) [SIZE('size_value')] [range_partition_list] | Specifies the creation of an automatic partition table. For more information, see the automatic partition syntax in Create a table. |
| PARTITION BY RANGE (column_name) INTERVAL (expr) [subpartition_option] (range_partition) | Specifies the creation of an interval partition table. For more information, see the syntax for creating an interval partition table in Create a partition table. |
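The interval partitioning option described above can be sketched as follows. This is a minimal, hedged example (the table name tbl_log and its columns are hypothetical): partitions beyond p0 are created automatically, one per month, as rows outside the defined range are inserted.

```sql
-- Interval partition table: new monthly partitions are created on demand.
CREATE TABLE tbl_log (
  log_id   NUMBER,
  log_time DATE
)
PARTITION BY RANGE (log_time)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p0 VALUES LESS THAN (DATE '2024-01-01')
);
```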
table_mode_value
Note
Among the listed TABLE_MODE values, all modes except NORMAL represent queuing tables. QUEUING is the most basic queuing mode; the modes after it apply increasingly aggressive compaction strategies.
NORMAL: The default value, indicating normal mode. In this mode, the probability of triggering a major compaction after a data dump is extremely low.
QUEUING: In this mode, the probability of triggering a major compaction after a data dump is low.
MODERATE: Indicates moderate mode. In this mode, the probability of triggering a major compaction after a data dump is moderate.
SUPER: Indicates super mode. In this mode, the probability of triggering a major compaction after a data dump is high.
EXTREME: Indicates extreme mode. In this mode, the probability of triggering a major compaction after a data dump is extremely high.
For more information about major compactions, see Adaptive major compaction.
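For example, a hedged sketch of creating a queuing table with the TABLE_MODE option (the table name tbl_q is hypothetical):

```sql
-- Create a queuing table, which triggers major compactions
-- after data dumps more readily than a NORMAL table.
CREATE TABLE tbl_q (c1 INT PRIMARY KEY, c2 VARCHAR2(50))
  TABLE_MODE = 'QUEUING';
```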
dynamic_partition_policy_option
ENABLE = {true | false}: Optional. Specifies whether to enable dynamic partition management. Valid values:
- true: The default value. Dynamic partition management is enabled.
- false: Dynamic partition management is disabled.
TIME_UNIT = {'hour' | 'day' | 'week' | 'month' | 'year'}: Required. Specifies the time unit for partitioning, which is the interval for automatically creating partition boundaries. Valid values:
- hour: Partitions are created by hour.
- day: Partitions are created by day.
- week: Partitions are created by week.
- month: Partitions are created by month.
- year: Partitions are created by year.
PRECREATE_TIME = {'-1' | '0' | 'n {hour | day | week | month | year}'}: Optional. Specifies the time for precreating partitions. When dynamic partition management is scheduled, partitions are precreated so that max_partition_upper_bound > now() + precreate_time. Valid values:
- -1: The default value. No partitions are precreated.
- 0: Only the current partition is precreated.
- n {hour | day | week | month | year}: The time span for precreating partitions. For example, 3 hour indicates that partitions are precreated for the next 3 hours.
Note
- When multiple partitions are precreated, the partition boundaries are spaced by TIME_UNIT.
- The first precreated partition boundary is the ceiling of the existing maximum partition boundary, based on TIME_UNIT.
EXPIRE_TIME = {'-1' | '0' | 'n {hour | day | week | month | year}'}: Optional. Specifies the expiration time for partitions. When dynamic partition management is scheduled, all partitions with upper bounds < now() - expire_time are deleted. Valid values:
- -1: The default value. Partitions never expire.
- 0: All partitions except the current one have expired.
- n {hour | day | week | month | year}: The expiration time for partitions. For example, 1 day indicates that partitions expire after 1 day.
TIME_ZONE = {'default' | 'time_zone'}: Optional. Specifies the time zone used to determine the current time and the partition boundaries for partitioning keys of the date or timestamp type. Valid values:
- default: The default value. The tenant's time zone is used without additional configuration. For partitioning keys of other types, the time_zone field must be default.
- time_zone: A custom time zone offset, for example, +8:00.
BIGINT_PRECISION = {'none' | 'us' | 'ms' | 's'}: Optional. Specifies the timestamp precision for a partitioning key of the number type. Valid values:
- none: The default value. No precision (the partitioning key is not of the number type).
- us: Microsecond precision.
- ms: Millisecond precision.
- s: Second precision.
For more information about creating a dynamic partition table, see Create a dynamic partition table.
Here is an example:
CREATE TABLE tbl2 (col1 INT, col2 TIMESTAMP)
DYNAMIC_PARTITION_POLICY(
ENABLE = true,
TIME_UNIT = 'hour',
PRECREATE_TIME = '3 hour',
EXPIRE_TIME = '1 day',
TIME_ZONE = '+8:00',
BIGINT_PRECISION = 'none')
PARTITION BY RANGE (col2)(
PARTITION P0 VALUES LESS THAN (TIMESTAMP '2024-11-11 13:30:00')
);
Examples
Create a table named TEST_TBL1.

obclient> CREATE TABLE TEST_TBL1 (col1 INT PRIMARY KEY, col2 VARCHAR(50));

Create a table named TEST_TBL2 and create 8 hash partitions.

obclient> CREATE TABLE TEST_TBL2 (col1 INT PRIMARY KEY, col2 INT) PARTITION BY HASH(col1) PARTITIONS 8;

Create a table named TEST_TBL3 with range partitions as the primary partitions and hash partitions as the subpartitions.

obclient> CREATE TABLE TEST_TBL3 (col1 INT, col2 INT, col3 INT) PARTITION BY RANGE(col1) SUBPARTITION BY HASH(col2) SUBPARTITIONS 5 (PARTITION p0 VALUES LESS THAN(0), PARTITION p1 VALUES LESS THAN(100));

Create a table named TEST_TBL4 with the zstd compression algorithm (COMPRESS FOR ARCHIVE) and 5% of the macroblock space reserved.

obclient> CREATE TABLE TEST_TBL4 (col1 INT, col2 INT, col3 VARCHAR(64)) COMPRESS FOR ARCHIVE PCTFREE 5;

Create a transaction-level temporary table named TEST_TBL5.

obclient> CREATE GLOBAL TEMPORARY TABLE TEST_TBL5(col1 INT) ON COMMIT DELETE ROWS;

Create a table named TEST_TBL6 with constraints.

obclient> CREATE TABLE TEST_TBL6 (col1 INT, col2 INT, col3 INT, CONSTRAINT equal_check1 CHECK(col2 = col3 * 2) ENABLE VALIDATE);

Specify a foreign key for the REF_T2 table. When a DELETE operation affects key values in the parent table that match rows in the child table, execute the SET NULL operation.

obclient> CREATE TABLE REF_T1(c1 INT PRIMARY KEY, c2 INT);
obclient> CREATE TABLE REF_T2(c1 INT PRIMARY KEY, c2 INT, FOREIGN KEY(c2) REFERENCES ref_t1(c1) ON DELETE SET NULL);

Create a non-templated range + range partitioned table named TEST_TBL7.

obclient> CREATE TABLE TEST_TBL7 (col1 INT, col2 INT, col3 INT) PARTITION BY RANGE(col1) SUBPARTITION BY RANGE(col2) ( PARTITION p0 VALUES LESS THAN(100) ( SUBPARTITION p0_r1 VALUES LESS THAN(2019), SUBPARTITION p0_r2 VALUES LESS THAN(2020), SUBPARTITION p0_r3 VALUES LESS THAN(2021) ), PARTITION p1 VALUES LESS THAN(200) ( SUBPARTITION p1_r1 VALUES LESS THAN(2019), SUBPARTITION p1_r2 VALUES LESS THAN(2020), SUBPARTITION p1_r3 VALUES LESS THAN(2021) ), PARTITION p2 VALUES LESS THAN(300) ( SUBPARTITION p2_r1 VALUES LESS THAN(2019), SUBPARTITION p2_r2 VALUES LESS THAN(2020), SUBPARTITION p2_r3 VALUES LESS THAN(2021) ) );

Create a table named TEST_TBL8 with a parallelism of 3.

obclient> CREATE TABLE TEST_TBL8(col1 INT PRIMARY KEY, col2 INT) PARALLEL 3;

Use a function to define the default value of a column.

obclient> CREATE SEQUENCE SEQ_PERSONIPTVSEQ START WITH 1 MINVALUE 1 MAXVALUE 10 INCREMENT BY 2 NOCYCLE NOORDER CACHE 30;
obclient> SELECT LPAD(SEQ_PERSONIPTVSEQ.NEXTVAL,18,TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS')) FROM DUAL;

The result is as follows:

+----------------------------------------------------------------------------+
| LPAD(SEQ_PERSONIPTVSEQ.NEXTVAL,18,TO_CHAR(SYSDATE,'YYYY-MM-DDHH24:MI:SS')) |
+----------------------------------------------------------------------------+
| 2025-04-08 19:35:1                                                         |
+----------------------------------------------------------------------------+

obclient> CREATE TABLE FUNC_DEFAULT_TEST ( OID NUMBER(20,0) DEFAULT LPAD(SEQ_PERSONIPTVSEQ.NEXTVAL,18,TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS')) NOT NULL);

Create a cluster-level replicated table named DUP_T1. You can insert into and read from a replicated table in the same way as a regular table. For a read request, if you use a Proxy, the request may be routed to any OBServer node. If you directly connect to an OBServer node, the read request is executed on the connected node as long as the local replica is readable.

obclient> CREATE TABLE DUP_T1(c1 INT) DUPLICATE_SCOPE = 'cluster';
obclient> INSERT INTO DUP_T1 VALUES(1);
obclient> SELECT * FROM DUP_T1;

The result is as follows:

+------+
| C1   |
+------+
|    1 |
+------+

Create a columnstore table named tbl1_cg.

obclient> CREATE TABLE tbl1_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(each column);

Specify the Skip Index attribute for columns when you create a table.

obclient> CREATE TABLE TEST_INDEX( col1 NUMBER SKIP_INDEX(MIN_MAX, SUM), col2 FLOAT SKIP_INDEX(MIN_MAX), col3 VARCHAR2(1024) SKIP_INDEX(MIN_MAX), col4 CHAR(10) );

Create a table named TEST_TBL9 with a numeric column named col1, using 5 parallel threads, and populate the new table with the query results of the table TEST_TBL8.

obclient> CREATE /*+ parallel(5) */ TABLE TEST_TBL9 (col1 NUMBER) AS SELECT col1 FROM TEST_TBL8;

Create a table named TEST_TBL10 with an auto-increment column named col1, and specify the column as the primary key.

obclient> CREATE TABLE TEST_TBL10 ( col1 INT GENERATED BY DEFAULT AS IDENTITY, col2 VARCHAR2(50), PRIMARY KEY (col1) );

Create a table named tb with the macroblock-level Bloom filter persisted.

obclient> CREATE TABLE tb(c1 INT PRIMARY KEY, c2 INT) enable_macro_block_bloom_filter = True;

Create a table named tb with the new flat row storage format (version 2) enabled.

obclient> CREATE TABLE tb(c1 INT PRIMARY KEY, c2 INT) micro_block_format_version = 2;
Limitations of global temporary tables in Oracle compatible mode
- Temporary tables in Oracle compatible mode are widely used in various business scenarios and provide basic correctness and functionality guarantees.
- The primary purpose of using temporary tables is often for compatibility and to minimize business modifications. They can be used in limited scenarios where performance requirements are not high. If the business scenario can be converted to a regular table, it is better to do so.
Performance and stability
- The SQL execution efficiency of temporary tables is generally similar to that of regular tables, without significant advantages.
- Temporary tables require additional work, such as data cleanup, upon transaction completion or session disconnection, which incurs additional overhead.
- The checks and cleanup actions performed during login for temporary tables may put pressure on the login thread, leading to longer login times, and in severe cases, login failures.
Creating temporary tables
When creating a temporary table, the system automatically modifies the CREATE statement as follows:
- Adds a SYS_SESSION_ID column as the primary key.
- Adds a SYS_SESS_CREATE_TIME column as a regular column.
- Creates a hash-partitioned table using SYS_SESSION_ID as the partitioning key, with a fixed number of 16 partitions.
For example:
obclient> CREATE GLOBAL TEMPORARY TABLE TEST_TBL11(
c1 INT,
c2 INT,
PRIMARY KEY(c1)
);
is rewritten to the following form:
obclient> CREATE GLOBAL TEMPORARY TABLE TEST_TBL11(
SYS_SESSION_ID INT,
SYS_SESS_CREATE_TIME INT,
c1 INT,
c2 INT,
PRIMARY KEY(SYS_SESSION_ID, c1)
)
PARTITION BY HASH(SYS_SESSION_ID) PARTITIONS 16;
DML and query statements for temporary tables
When executing an INSERT statement, the system automatically inserts the current session's session ID and session creation time into the SYS_SESSION_ID column and SYS_SESS_CREATE_TIME column, respectively.
When executing UPDATE, DELETE, or SELECT statements, the system automatically adds a filter condition SYS_SESSION_ID = <current session ID> to statements containing temporary tables through SQL rewriting. This condition allows the SQL optimizer to perform partition pruning and query range extraction.
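Conceptually, the rewriting works as in the following hedged sketch, using the TEST_TBL11 table from the example above (the session ID 3221487676 is hypothetical):

```sql
-- Statement issued by the application:
SELECT c2 FROM TEST_TBL11 WHERE c1 = 1;

-- Form conceptually executed after the automatic rewrite;
-- the added predicate enables partition pruning on SYS_SESSION_ID:
SELECT c2 FROM TEST_TBL11
WHERE c1 = 1
  AND SYS_SESSION_ID = 3221487676;  -- current session's ID, added automatically
```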
Data cleanup for temporary tables
- For temporary tables with the ON COMMIT DELETE ROWS option (transaction-level temporary tables, which is the default), a new transaction is started upon transaction completion, and a DELETE statement is executed to remove the temporary table data.
- For temporary tables with the ON COMMIT PRESERVE ROWS option (session-level temporary tables), a new transaction is started upon session disconnection, and a DELETE statement is executed to remove the temporary table data.
- Because session IDs can be reused, OceanBase Database V3.2.4 BP4 and earlier versions check the data associated with the current session ID upon login and clean it up if necessary.
- Because session IDs are not unique, login checks and cleanup operations may lead to failures (such as cluster login failures).
Routing for temporary tables
- Transaction-level temporary tables (ON COMMIT DELETE ROWS): Within a transaction, access to the temporary table can only be routed to the node where the transaction was initiated.
- Session-level temporary tables (ON COMMIT PRESERVE ROWS): After a session accesses a temporary table, the OBServer node notifies the Proxy, which restricts subsequent requests of the session to the current OBServer node.
Dropping temporary tables
Similar to regular tables, you can successfully execute the DROP statement while executing DML operations. All data in the temporary table will be deleted. This behavior differs from Oracle, where the DROP statement can only be executed after all sessions no longer hold resources from the temporary table.
Support for cross-functional features
| Feature | Supported in V3.2.4 BP4 and earlier versions | Supported in V3.2.4 BP5 and later V3.2.x versions | Supported in V4.2.0 |
|---|---|---|---|
| Plan sharing across sessions | No | No | Yes |
| Login without triggering checks and cleanup | No | Yes | Yes |
| MERGE INTO statement | No | No | Yes |
Workarounds for critical issues
Login failure
- Stop temporary table-related business operations, delete the temporary table, or perform a major compaction. This usually resolves the issue. If the issue persists, proceed to step 2.
- Restart the machine that cannot log in.
PL Cache expansion due to plan reuse failure
For example, consider a procedure definition that includes a temporary table:
obclient> CREATE OR REPLACE PROCEDURE PRO_1
AS
var1 VARCHAR2(100);
BEGIN
-- Use dynamic SQL to create a temporary table
EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE temp_table (
col1 VARCHAR2(100)
) ON COMMIT DELETE ROWS';
-- Dynamically insert data
EXECUTE IMMEDIATE 'INSERT INTO temp_table VALUES (''xxx'')';
-- Dynamically query
EXECUTE IMMEDIATE 'SELECT col1 FROM temp_table WHERE ROWNUM = 1' INTO var1;
DBMS_OUTPUT.PUT_LINE(var1);
END PRO_1;
/
Because plans for statements that reference temporary tables cannot be shared across sessions, each session must compile the procedure PRO_1 and generate its own cached plan, which can lead to stability issues. Rewriting the temporary table statements as dynamic SQL, as in the example above, bypasses this issue.
Data not cleaned up during failures
Failures may result in data residue, and there is currently no automatic cleanup method. This generally does not affect usage. If excessive residual data persists, you can drop the temporary table and recreate it.
Read-only and read/write tables in an Oracle-compatible tenant
In an Oracle-compatible tenant, you can use the CREATE TABLE statement to create a read-only table or a read/write table. You can also use the ALTER TABLE statement to change the read/write attribute of a table.
Notice
Users with the SUPER privilege cannot perform these operations. We recommend that you use a regular user instead.
Procedure:

1. Create a regular user:

   CREATE USER test1 IDENTIFIED BY "12345";

2. Grant the user the CREATE SESSION and CREATE TABLE privileges:

   GRANT CREATE SESSION TO test1;
   GRANT CREATE TABLE TO test1;

3. Connect to OceanBase Database as the regular user:

   obclient -hxxx.xx.xxx.xxx -P2881 -utest1@oracle001 -A

4. Create a read-only table:

   CREATE TABLE tb_readonly1(id INT) READ ONLY;

5. Attempt to insert data into the read-only table (expected to fail):

   INSERT INTO tb_readonly1 VALUES (1);
   -- Expected error: ORA-00600: internal error code, arguments: -5235, The table 'TEST1.TB_READONLY1' is read only so it cannot execute this statement

6. Create a read/write table:

   CREATE TABLE tb_readwrite1(id INT) READ WRITE;

7. Insert data into the read/write table (expected to succeed):

   INSERT INTO tb_readwrite1 VALUES (99),(98);
   -- Expected result: Query OK, 2 rows affected (0.002 sec)
   -- Records: 2  Duplicates: 0  Warnings: 0

8. Convert the read/write table to a read-only table:

   ALTER TABLE tb_readwrite1 READ ONLY;

9. Attempt to insert data into the converted read-only table (expected to fail):

   INSERT INTO tb_readwrite1 VALUES (96),(97);
   -- Expected error: ORA-00600: internal error code, arguments: -5235, The table 'TEST1.TB_READWRITE1' is read only so it cannot execute this statement