OceanBase

A unified distributed database ready for your transactional, analytical, and AI workloads.

OceanBase Database

SQL - V4.6.0


    Create a table

    Last Updated: 2026-05-07 11:26:25
    What is on this page
    Create a non-partitioned table
    Create a replicated table
    Limitations of replicated tables
    Syntax for creating a replicated table
    Create a new table by replicating data from an existing table
    Create a rowstore table
    Create a columnstore table
    Update model of the table
    Related documents


    You can use the CREATE TABLE statement to create a table.

    This topic describes how to create a non-partitioned table. For information about how to create and use a partitioned table, see Create a partitioned table.

    Create a non-partitioned table

    Creating a non-partitioned table means creating a table with a single partition.

    Here are some example statements for creating a non-partitioned table:

    obclient> CREATE TABLE table_name1 (w_id INT,
        w_ytd DECIMAL(12, 2),
        w_tax DECIMAL(4, 4),
        w_name VARCHAR(10),
        w_street_1 VARCHAR(20),
        w_street_2 VARCHAR(20),
        w_city VARCHAR(20),
        w_state CHAR(2),
        w_zip CHAR(9),
        UNIQUE (w_name, w_city),
        PRIMARY KEY (w_id)
    );
    
    obclient> CREATE TABLE table_name2 (c_w_id INT NOT NULL,
        c_d_id INT NOT NULL,
        c_id INT NOT NULL,
        c_discount DECIMAL(4, 4),
        c_credit CHAR(2),
        c_last VARCHAR(16),
        c_first VARCHAR(16),
        c_middle CHAR(2),
        c_balance DECIMAL(12, 2),
        c_ytd_payment DECIMAL(12, 2),
        c_payment_cnt INT,
        c_credit_lim DECIMAL(12, 2),
        c_street_1 VARCHAR(20),
        c_street_2 VARCHAR(20),
        c_city VARCHAR(20),
        c_state CHAR(2),
        c_zip CHAR(9),
        c_phone CHAR(16),
        c_since DATE,
        c_delivery_cnt INT,
        c_data VARCHAR(500),
        FOREIGN KEY (c_w_id) REFERENCES table_name1(w_id),
        PRIMARY KEY (c_w_id, c_d_id, c_id)
    );
    
    

    The examples create two tables and define constraints on their columns, including a unique key, primary keys, and a foreign key. For more information about primary keys, foreign keys, and other constraints, see Define the constraint type for a column.

    When you create columns for a table, choose the correct data type. For more information about SQL data types, see Data types.

    Note

    To optimize performance and ease future maintenance, we recommend that you design a primary key or unique key for each table. If no existing column is suitable as the primary key, you can add a numeric column to the table and fill it with values from a sequence in the Oracle tenant. For more information about sequences, see Manage sequences.
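
    For example, a hedged sketch of this approach in an Oracle tenant (the sequence and table names are illustrative, not from this topic) might look as follows:

    CREATE SEQUENCE seq_tbl1_id START WITH 1 INCREMENT BY 1;
    
    CREATE TABLE tbl1 (id NUMBER PRIMARY KEY, val VARCHAR2(50));
    
    INSERT INTO tbl1 VALUES (seq_tbl1_id.NEXTVAL, 'a');
    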

    Create a replicated table

    A replicated table is a special type of table in OceanBase Database: the latest data modifications can be read from any "healthy" replica of the table. For workloads with a low write volume but strict requirements on read latency and read load balancing, a replicated table is a good choice.

    After you create a replicated table, a replica of the table is created on each OBServer node of the tenant. One replica is selected as the leader to accept write requests; the remaining replicas accept only read requests.

    All replicas report their status, primarily their replay progress, to the leader. The replay progress indicates how far a replica has synchronized the data. Generally, the replay progress of a follower lags slightly behind that of the leader. As long as the lag does not exceed a certain threshold, the leader considers the follower "healthy", that is, able to quickly replay the modifications made on the leader.

    Once the leader determines that a follower has stayed "healthy" for a certain period, it grants the follower a lease. In other words, the leader trusts the follower to remain "healthy" during this period and to provide strongly consistent read services. During this "trust" period, the leader verifies the replay progress of the follower before committing each transaction on the replicated table. Only after the follower has replayed the modifications of the transaction does the leader report to the user that the transaction has been committed. At this point, the user can read the modifications of the just-committed transaction from the follower.

    The replicated table feature was introduced in OceanBase Database V3.x. Because of significant architectural changes in OceanBase Database V4.x, the feature has been adapted to the new log stream architecture. It introduces a partition-level readable version verification mechanism and a log-stream-level lease granting mechanism to ensure the correctness of strongly consistent reads.

    Additionally, OceanBase Database V4.x improves the ability to switch leaders without killing transactions: when a leader switch is initiated by the user or the load balancer, uncommitted transactions on the replicated table can continue to execute after the switch, which was not possible in V3.x. Compared with V3.x, the replicated table feature in V4.x also offers better write transaction performance and stronger disaster recovery capabilities, and a replica failure has less impact on read operations.

    Limitations of replicated tables

    • Replicated tables:

      • Limitations on creation: The sys and meta tenants do not support creating replicated tables because they do not have broadcast log streams.

      • Write performance is affected by the number of nodes: Since write operations on a replicated table need to be synchronized to all replicas, the more nodes there are, the greater the impact on write performance.

        • Workaround: Avoid mixing writes and reads on a replicated table in the same transaction. Pure write or pure read operations on a replicated table are not affected.
      • Attribute conversion:

        • Replicated tables and table groups are mutually exclusive. Modifying the table group attribute of a replicated table will result in an error. When converting a regular table to a replicated table, if the regular table belongs to a table group, the attribute change command will also result in an error.
        • Converting a replicated table depends on load balancing and transfer, so ensure that the relevant configuration items are enabled.
      • Routing:

        • If a transaction contains write operations on a replicated table, subsequent queries on the replicated table may be randomly routed to a follower, where the replica may be unreadable. In that case, the OBServer node internally forwards the query to the leader of the replicated table, which may affect query performance.

          • In ODP V4.3.3, the routing strategy has been adjusted. If there are write operations on a replicated table, subsequent queries will be routed to the leader of the replicated table.
        • When a query joins a replicated table with a regular table, ODP routes the query based on the replicated table (for JOIN queries, ODP routes based on the first table), which may route the query to a node that is not the leader of the regular table, resulting in a remote execution plan.

        • When converting a regular table to a replicated table, ODP cannot detect the change and therefore cannot route queries across the replicas of the replicated table to balance the load.

      • In a transaction, if a replicated table has been modified, subsequent queries generate an execution plan that selects the local replica. However, because of the modification, the system reports an error indicating that the replica is unreadable. The SQL retry then selects the leader replica, the plan cache is not hit, and query performance is poor.

        Note

        In a transaction, once a replicated table has been modified, OceanBase Database prioritizes the leader replica of the replicated table over the local replica to ensure efficient queries and avoid selecting an unreadable replica.

    • Broadcast log streams:

      • Each user tenant can have at most one broadcast log stream.
      • Attribute conversion between broadcast log streams and regular log streams is not supported.
      • Manual deletion of broadcast log streams is not supported. They are deleted along with the tenant.

    Syntax for creating a replicated table

    To create a replicated table, add the DUPLICATE_SCOPE option to the CREATE TABLE statement. Only user tenants can create replicated tables; the sys and meta tenants cannot. The SQL syntax is as follows:

    CREATE TABLE table_name column_definition DUPLICATE_SCOPE='none | cluster';
    

    The DUPLICATE_SCOPE parameter specifies the attribute of the replicated table. Valid values are as follows:

    • none: Indicates that the table is a regular table.
    • cluster: Indicates that the table is a replicated table. The leader needs to replicate transactions to all full-featured (F) and read-only (R) replicas of the current tenant.

    If you do not specify DUPLICATE_SCOPE when creating a table, the default value is none.

    CREATE TABLE dup_t1 (c1 NUMBER, c2 NUMBER) DUPLICATE_SCOPE = 'cluster';
    

    When the first replicated table is created in a tenant, the system will also create a special log stream called a broadcast log stream. Subsequent replicated tables will be created on this broadcast log stream. The difference between a broadcast log stream and a regular log stream is that a broadcast log stream automatically deploys a replica on every OBServer node within the tenant. This ensures that, under ideal conditions, a replicated table can provide strongly consistent reads on any OBServer node. You can use the following SQL statement to view the broadcast log stream where the replicated tables of the tenant are located:

    SELECT * FROM SYS.DBA_OB_LS WHERE FLAG LIKE '%DUPLICATE%';
    

    The query result is as follows.

    +-------+--------+--------------+---------------+-------------+---------------------+----------+---------------------+---------------------+-----------+-----------+
    | LS_ID | STATUS | PRIMARY_ZONE | UNIT_GROUP_ID | LS_GROUP_ID | CREATE_SCN          | DROP_SCN | SYNC_SCN            | READABLE_SCN        | FLAG      | UNIT_LIST |
    +-------+--------+--------------+---------------+-------------+---------------------+----------+---------------------+---------------------+-----------+-----------+
    |  1003 | NORMAL | z1;z2        |             0 |           0 | 1684982852976428261 |     NULL | 1684983282912048623 | 1684983282912048623 | DUPLICATE |           |
    +-------+--------+--------------+---------------+-------------+---------------------+----------+---------------------+---------------------+-----------+-----------+
    1 row in set
    

    In the example, the log stream with LS_ID of 1003 is the broadcast log stream. All replicated tables of the current tenant are created on this log stream. For more information about broadcast log streams, see Replicas.

    After a replicated table is created, you can read and write it just like a regular table. The difference lies in read requests: if you connect to the database through a proxy, a read request may be routed to any OBServer node for execution; if you connect directly to an OBServer node, the read request is executed on that node as long as the local replica is readable. For more information about database connection methods, see Overview of connection methods.

    Create a new table by replicating data from an existing table

    You can use the CREATE TABLE AS SELECT statement to create a new table that copies the column definitions (basic data types) and the data of an existing table. Constraints, indexes, and NOT NULL attributes are not copied.

    Example statement:

    CREATE TABLE t2_copy AS SELECT * FROM t2;
    

    The CREATE TABLE LIKE statement, which replicates only the structure of a table, is not supported.
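
    Because CREATE TABLE LIKE is unavailable, one common workaround (a sketch, not from this topic) is a CREATE TABLE AS SELECT with an always-false predicate, which copies only the column definitions without any rows:

    CREATE TABLE t2_empty AS SELECT * FROM t2 WHERE 1 = 0;
    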

    Create a rowstore table

    OceanBase Database supports creating rowstore tables and converting rowstore tables to columnstore tables.

    When the configuration parameter default_table_store_format is set to 'row' (the default value), tables are created as rowstore tables by default. If default_table_store_format is set to a different value, you can still create a rowstore table by explicitly specifying the WITH COLUMN GROUP(all columns) option.

    For more information about converting rowstore tables to columnstore tables, see Change a table. For more information about creating columnstore indexes, see Create an index.

    You can create a rowstore table by specifying the WITH COLUMN GROUP(all columns) option.

    Here is an example:

    CREATE TABLE tbl1_cg (col1 INT PRIMARY KEY, col2 VARCHAR(50)) WITH COLUMN GROUP(all columns);
    

    Note

    If you create a table with the WITH COLUMN GROUP(all columns) option, the table remains in rowstore format even if you later execute the DROP COLUMN GROUP(all columns) statement to drop the column group.
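
    As a sketch of the drop operation mentioned in the note (using the tbl1_cg table from the example above; verify the ALTER TABLE syntax against your version):

    ALTER TABLE tbl1_cg DROP COLUMN GROUP(all columns);
    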

    Create a columnstore table

    OceanBase Database supports creating columnstore tables, converting rowstore tables to columnstore tables, and creating columnstore indexes. You can explicitly specify the storage format as columnstore or rowstore/columnstore redundant by using the WITH COLUMN GROUP option. You can also set the default_table_store_format parameter to specify columnstore or rowstore/columnstore redundant as the default storage format.

    For more information about converting rowstore tables to columnstore tables, see Change a table. For more information about creating columnstore indexes, see Create an index.

    You can create a rowstore/columnstore redundant table by specifying the WITH COLUMN GROUP(all columns, each column) option.

    Here is an example:

    CREATE TABLE tbl1_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(all columns, each column);
    

    You can create a columnstore table by specifying the WITH COLUMN GROUP(each column) option.

    Here is an example:

    CREATE TABLE tbl2_cg (col1 NUMBER PRIMARY KEY, col2 VARCHAR2(50)) WITH COLUMN GROUP(each column);
    

    After you import a large amount of data into a columnstore table, you need to perform a major compaction to improve read performance and collect statistics so that the optimizer can adjust its execution strategy.

    • Major compaction: After you import data in batches, we recommend that you perform a major compaction. This helps improve the read performance because the major compaction organizes fragmented data into more contiguous data in physical storage, thereby reducing disk I/O during read operations. After you import data, you can trigger a major compaction in the tenant to ensure that all data is compacted to the baseline layer. For more information, see MAJOR and MINOR.

    • Statistics collection: After a major compaction is performed, we recommend that you collect statistics. This is important for the optimizer to generate effective query plans and execution strategies. You can execute GATHER_SCHEMA_STATS to collect statistics for all tables and monitor the progress by querying the GV$OB_OPT_STAT_GATHER_MONITOR view.

    Note that the speed of a major compaction may decrease as the amount of data in the columnstore table increases.
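
    As a hedged illustration of the two steps above (the statements are standard OceanBase interfaces, but the schema name is illustrative; verify against your version):

    -- Trigger a major compaction for the current tenant.
    ALTER SYSTEM MAJOR FREEZE;
    
    -- After the compaction completes, collect statistics for a schema.
    CALL DBMS_STATS.GATHER_SCHEMA_STATS('MY_SCHEMA');
    
    -- Monitor the statistics collection progress.
    SELECT * FROM GV$OB_OPT_STAT_GATHER_MONITOR;
    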

    Update model of the table

    When you create a table, you can specify its update model by using the MERGE_ENGINE table option in the CREATE TABLE statement. The syntax is as follows:

    CREATE TABLE table_name column_definition
        MERGE_ENGINE = {delete_insert | partial_update | append_only};
    

    Note

    • After you specify the MERGE_ENGINE option when you create a table, its value cannot be modified.
    • In OceanBase Database V4.3.5, the MERGE_ENGINE option is supported starting from V4.3.5 BP3.

    Here, the MERGE_ENGINE option specifies the update model of the table. Valid values:

    • partial_update: uses the partial update mode. Each update records only the modified columns (delta data) to save storage space, but queries must merge data to obtain the latest values. This mode is suitable for OLTP scenarios with frequent updates and modest query demands.
    • delete_insert: uses the full-row update mode. Each update writes a complete row (the old row is deleted and a new row is inserted) to prioritize query performance. This mode supports skip indexes on incremental data (MemTable/Delta SSTable). During queries, filters can be pushed down to incremental data; if the filtered result does not involve updates to baseline data, baseline and incremental data can be batch-processed separately to reduce read amplification. This mode is suitable for OLAP scenarios with a high proportion of incremental data and frequent complex queries or batch analysis.
    • append_only: specifies that INSERT operations are the only allowed operations on the table. No other DML or DDL operations that modify existing data can be performed. For more information, see Limitations at the end of this topic.

    Note

    The core difference between delete_insert and partial_update lies in update granularity: delete_insert rewrites the entire row on each update, while partial_update records only the changed columns. Full-row updates enable delete_insert to support skip indexes on Delta SSTables, improving filter pushdown performance in analytical queries.

    If you do not specify the MERGE_ENGINE option, the value of this option is the same as the value of the default_table_merge_engine parameter.
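
    To check the current default (a sketch; the display format may vary by version):

    SHOW PARAMETERS LIKE 'default_table_merge_engine';
    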

    Scenarios

    • OLAP, analytical queries, batch processing: use delete_insert. When incremental data can be filtered, query performance is optimized; however, incremental data occupies more storage space.
    • OLTP, frequent updates, storage-sensitive: use partial_update. This mode saves storage space and is suitable for scenarios with frequent updates but low query requirements.

    For more information about the configuration scenarios, see Configuration best practices.

    Here are some examples:

    • Create a table named mer_tbl1 with the delete_insert update model (no column group is specified, so the table uses the default storage format).

      obclient> CREATE TABLE mer_tbl1 (col1 NUMBER, col2 NUMBER)
          MERGE_ENGINE = delete_insert;
      
    • Create a columnstore table named mer_tbl2 with the delete_insert update model.

      obclient> CREATE TABLE mer_tbl2 (col1 NUMBER, col2 NUMBER)
          MERGE_ENGINE = delete_insert
          WITH COLUMN GROUP(each column);
      
    • Create a rowstore/columnstore redundant table named mer_tbl3 with the delete_insert update model.

      obclient> CREATE TABLE mer_tbl3 (col1 NUMBER, col2 NUMBER)
          MERGE_ENGINE = delete_insert
          WITH COLUMN GROUP(all columns, each column);
      
      

    Related documents

    • CREATE TABLE

    • ALTER TABLE
