OceanBase Database

SQL - V4.0.0

Create a table

Last Updated: 2023-07-21 09:11:01
What is on this page
Prerequisites
Create a table
Syntax
Parameters
Create a non-partitioned table
Create a table by copying the data in an existing table
Copy table data
Copy the table structure
Create a replica table


You can execute the CREATE TABLE statement to create a table. This topic describes how to create a non-partitioned table, how to create a table by copying an existing table, and how to create a replica table.

Prerequisites

The current user has the user-level or database-level CREATE privilege.
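If the connected user lacks this privilege, a user with the grant privilege can add it. A minimal sketch in MySQL-compatible syntax (the user name test_user and database test_db are placeholders):

GRANT CREATE ON test_db.* TO test_user;

You can check the privileges of the current user with the SHOW GRANTS statement.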

Create a table

Syntax

CREATE TABLE [IF NOT EXISTS] table_name
      (table_definition_list) [table_option_list] [partition_option] [AS] select;

CREATE TABLE [IF NOT EXISTS] table_name
      LIKE table_name;

table_definition_list:
    table_definition [, table_definition ...]

table_definition:
      column_definition
    | [CONSTRAINT [constraint_name]] PRIMARY KEY index_desc
    | [CONSTRAINT [constraint_name]] UNIQUE {INDEX | KEY}
            [index_name] index_desc
    | [CONSTRAINT [constraint_name]] FOREIGN KEY
            [index_name] index_desc
            REFERENCES reference_definition
            [match_action][opt_reference_option_list]
    | {INDEX | KEY} [index_name] index_desc

column_definition_list:
    column_definition [, column_definition ...]

column_definition:
     column_name data_type
         [DEFAULT const_value] [AUTO_INCREMENT]
         [NULL | NOT NULL] [[PRIMARY] KEY] [UNIQUE [KEY]] comment
   | column_name data_type
         [GENERATED ALWAYS] AS (expr) [VIRTUAL | STORED]
         [opt_generated_column_attribute]

index_desc:
   (column_desc_list) [index_type] [index_option_list]

match_action:
   MATCH {SIMPLE | FULL | PARTIAL}

opt_reference_option_list:
   reference_option [,reference_option ...]

reference_option:
   ON {DELETE | UPDATE} {RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT}

column_desc_list:
    column_desc [, column_desc ...]

column_desc:
     column_name [(length)] [ASC | DESC]

index_type:
    USING BTREE

index_option_list:
    index_option [ index_option ...]

index_option:
      [GLOBAL | LOCAL]
    | block_size
    | compression
    | STORING(column_name_list)
    | comment

table_option_list:
    table_option [ table_option ...]

table_option:
      [DEFAULT] {CHARSET | CHARACTER SET} [=] charset_name
    | [DEFAULT] COLLATE [=] collation_name
    | primary_zone
    | replica_num
    | table_tablegroup
    | block_size
    | compression
    | AUTO_INCREMENT [=] INT_VALUE
    | comment
    | DUPLICATE_SCOPE [=] "none|zone|region|cluster"
    | LOCALITY [=] "locality description"
    | ROW_FORMAT [=] REDUNDANT|COMPACT|DYNAMIC|COMPRESSED|DEFAULT
    | PCTFREE [=] num
    | parallel_clause

parallel_clause:
    {NOPARALLEL | PARALLEL integer}

partition_option:
      PARTITION BY HASH(expression)
      [subpartition_option] PARTITIONS partition_count
    | PARTITION BY KEY([column_name_list])
      [subpartition_option] PARTITIONS partition_count
    | PARTITION BY RANGE {(expression) | COLUMNS (column_name_list)}
      [subpartition_option] (range_partition_list)
    | PARTITION BY LIST {(expression) | COLUMNS (column_name_list)}
      [subpartition_option] PARTITIONS partition_count

subpartition_option:
      SUBPARTITION BY HASH(expression)
      SUBPARTITIONS subpartition_count
    | SUBPARTITION BY KEY(column_name_list)
      SUBPARTITIONS subpartition_count
    | SUBPARTITION BY RANGE {(expression) | COLUMNS (column_name_list)}
      (range_subpartition_list)
    | SUBPARTITION BY LIST(expression)

range_partition_list:
    range_partition [, range_partition ...]

range_partition:
    PARTITION partition_name
    VALUES LESS THAN {(expression_list) | MAXVALUE}

range_subpartition_list:
    range_subpartition [, range_subpartition ...]

range_subpartition:
    SUBPARTITION subpartition_name
    VALUES LESS THAN {(expression_list) | MAXVALUE}

expression_list:
    expression [, expression ...]

column_name_list:
    column_name [, column_name ...]

partition_name_list:
    partition_name [, partition_name ...]

partition_count | subpartition_count:
    INT_VALUE
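As a compact illustration of the syntax above, the following statement (all names are illustrative) combines a column definition list, a secondary index, a table option, and a HASH partition_option:

CREATE TABLE IF NOT EXISTS orders(
  order_id int AUTO_INCREMENT,
  user_id int NOT NULL,
  amount decimal(10, 2) DEFAULT 0,
  PRIMARY KEY (order_id),
  KEY idx_user (user_id)
) COMMENT 'order records'
  PARTITION BY HASH(order_id) PARTITIONS 4;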

Parameters

Parameter Description
PRIMARY KEY The primary key of the created table. If this parameter is not specified, a hidden primary key is used. OceanBase Database does not allow you to modify the primary key of a table or use the ALTER TABLE statement to add a primary key to a table. Therefore, we recommend that you specify a primary key when creating a table.
FOREIGN KEY The foreign key of the created table. If you do not specify a name for the foreign key, it is named in the format of table name + _OBFK_ + creation time. For example, a foreign key created for table t1 at 00:00:00 on August 1, 2021 is named t1_OBFK_1627747200000000.
KEY | INDEX The key or index of the created table. If you do not specify a name for the index, the name of the first column referenced by the index is used as the index name. If that name conflicts with an existing index name, the index name is suffixed with an underscore (_) and a sequence number. For example, if the name of an index created on column c1 conflicts with an existing index name, the index is named c1_2. You can execute the SHOW INDEX statement to query the indexes of a table.
DUPLICATE_SCOPE The attribute of a replica table. Valid values:
  • none: The table is a normal table.
  • zone: The table is a replica table. The leader must copy transactions to all F and R replicas in this zone.
  • region: The table is a replica table. The leader must copy transactions to all F and R replicas in this region.
  • cluster: The table is a replica table. The leader must copy transactions to all F and R replicas in this cluster.
If the DUPLICATE_SCOPE parameter is not specified, the default value none is used. Currently, OceanBase Database supports only cluster-level replica tables.
ROW_FORMAT Specifies whether to enable the encoding storage format.
  • redundant: indicates that the encoding storage format is not enabled.
  • compact: indicates that the encoding storage format is not enabled.
  • dynamic: an encoding storage format.
  • compressed: an encoding storage format.
  • default: This value is equivalent to dynamic.
[GENERATED ALWAYS] AS (expr) [VIRTUAL | STORED] Creates a generated column. expr specifies the expression used to calculate the column value.
  • VIRTUAL: indicates that column values are not stored; they are calculated when a row is read, immediately after any BEFORE triggers. Virtual columns do not occupy storage space.
  • STORED: evaluates and stores column values when you insert or update a row. Stored columns occupy storage space and can be indexed.
BLOCK_SIZE The microblock size for the table.
COMPRESSION The compression algorithm for the table. Valid values:
  • none: indicates that no compression algorithm is used.
  • lz4_1.0: indicates that the lz4 compression algorithm is used.
  • zstd_1.0: indicates that the zstd compression algorithm is used.
  • snappy_1.0: indicates that the snappy compression algorithm is used.
CHARSET | CHARACTER SET The default character set for columns in the table. Valid values:
  • utf8
  • utf8mb4
  • gbk
  • utf16
  • gb18030
COLLATE The default collation for columns in the table. Valid values:
  • utf8_bin
  • utf8_general_ci
  • utf8_unicode_ci
  • gbk_bin
  • gbk_chinese_ci
  • utf8mb4_general_ci
  • utf8mb4_general_cs
  • utf8mb4_bin
  • utf8mb4_unicode_ci
  • utf16_general_ci
  • utf16_bin
  • utf16_unicode_ci
  • gb18030_chinese_ci
  • gb18030_bin
primary_zone The primary zone where the leader resides.
replica_num The number of replicas.
Note: The current version does not support this parameter.
table_tablegroup The table group to which the table belongs.
AUTO_INCREMENT The start value of an auto-increment column in the table. OceanBase Database allows you to use an auto-increment column as the partitioning key.
comment The comment on the table.
LOCALITY The distribution of replicas across zones. For example, F@z1, F@z2, F@z3, R@z4 indicates that z1, z2, and z3 host full-featured replicas, and z4 hosts a read-only replica.
PCTFREE The percentage of space reserved in macroblocks.
parallel_clause The degree of parallelism at the table level.
  • NOPARALLEL: sets the degree of parallelism to 1. This is the default value.
  • PARALLEL integer: sets the degree of parallelism to an integer greater than or equal to 1.
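To make several of these options concrete, the following sketch (table and column names are illustrative) sets a character set, a collation, a compression algorithm, a stored generated column, and a table-level degree of parallelism:

CREATE TABLE product(
  id int AUTO_INCREMENT PRIMARY KEY,
  name varchar(100),
  price decimal(10, 2),
  -- stored generated column: evaluated and persisted on INSERT/UPDATE
  price_cents bigint GENERATED ALWAYS AS (price * 100) STORED
) DEFAULT CHARSET = utf8mb4
  COLLATE = utf8mb4_general_ci
  COMPRESSION = 'zstd_1.0'
  PARALLEL 4;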

    Create a non-partitioned table

    A non-partitioned table is a table that has only one partition.

    Examples:

    Example 1: Create a table named student.

    CREATE TABLE student(
      id int,
      name varchar(18),
      sex char(1),
      age int,
      address varchar(200),
      email varchar(100),
      date date,
      PRIMARY KEY (id)
    );
    

    Example 2: Create a table named staff.

    CREATE TABLE staff(
      id int not null auto_increment,
      name varchar(18),
      sex char(1) default '0' check(
        sex = '0'
        or sex = '1'
      ),
      age int not null,
      address varchar(200),
      email varchar(100) unique,
      date date,
      wages number(7, 3),
      Entry_time TIMESTAMP(6),
      PRIMARY KEY (id),
      index idx (date)
    );
    

    In the preceding examples, two tables are created. Several constraints, including a primary key, a unique constraint, a check constraint, and a default value, are defined on different columns.

    Select the correct data types for columns when creating a table. For information about data types, see SQL Reference.

    Note

    • To ensure performance and facilitate maintenance, we recommend that you specify a primary key or a unique key when you create a table. If no column can be used as the primary key, the system generates an auto-increment column as the hidden primary key column after table creation.
    • You cannot add a primary key to a table by using the ALTER TABLE statement. Therefore, you must specify the primary key when you create a table.
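    Because a primary key cannot be added after table creation in this version, a statement like the following (the table name is illustrative) is expected to return an error:

    CREATE TABLE t_no_pk(c1 int, c2 varchar(10));
    -- Not supported in OceanBase Database V4.0.0: adding a primary key afterwards
    ALTER TABLE t_no_pk ADD PRIMARY KEY (c1);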

    Create a table by copying the data in an existing table

    Copy table data

    You can execute the CREATE TABLE AS SELECT statement to copy data from a table. The constraints, indexes, default values, and partitions are lost when you copy the data.

    Examples

    Copy the data in the staff table to the staff_copy table. Statement:

    CREATE TABLE staff_copy AS SELECT * FROM staff;
    
    • Comparison of table data

      • Data in the staff table:

        obclient> select * from staff;
        +----+-----------+------+-----+--------------------------+----------------------+------------+---------+----------------------------+
        | id | name      | sex  | age | address                  | email                | date       | wages   | Entry_time                 |
        +----+-----------+------+-----+--------------------------+----------------------+------------+---------+----------------------------+
        |  1 | Zhang San | 0    |  15 | XX Community in Hangzhou | 1********@QQ.com     | 2022-02-22 | 100.000 | 2022-02-25 23:59:59.999990 |
        |  2 | Li Si     | 1    |  15 | XX Community in Beijing  | 2********@163.com    | 2022-02-28 | 100.000 | 2022-02-27 23:59:59.999990 |
        |  3 | Wang Wu   | 1    |  15 | XX Community in Shanghai | 3********@shouhu.com | 2022-02-28 | 100.000 | 2022-04-28 23:59:59.999990 |
        +----+-----------+------+-----+--------------------------+----------------------+------------+---------+----------------------------+
        3 rows in set
        
      • Data in the staff_copy table:

        obclient> select * from staff_copy;
        +----+-----------+------+-----+--------------------------+----------------------+------------+---------+----------------------------+
        | id | name      | sex  | age | address                  | email                | date       | wages   | Entry_time                 |
        +----+-----------+------+-----+--------------------------+----------------------+------------+---------+----------------------------+
        |  1 | Zhang San | 0    |  15 | XX Community in Hangzhou | 1********@QQ.com     | 2022-02-22 | 100.000 | 2022-02-25 23:59:59.999990 |
        |  2 | Li Si     | 1    |  15 | XX Community in Beijing  | 2********@163.com    | 2022-02-28 | 100.000 | 2022-02-27 23:59:59.999990 |
        |  3 | Wang Wu   | 1    |  15 | XX Community in Shanghai | 3********@shouhu.com | 2022-02-28 | 100.000 | 2022-04-28 23:59:59.999990 |
        +----+-----------+------+-----+--------------------------+----------------------+------------+---------+----------------------------+
        3 rows in set
        
    • Comparison of table structures

      • Structure of the staff table:

        obclient> desc staff;
        +------------+--------------+------+-----+---------+----------------+
        | Field      | Type         | Null | Key | Default | Extra          |
        +------------+--------------+------+-----+---------+----------------+
        | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name       | varchar(18)  | YES  |     | NULL    |                |
        | sex        | char(1)      | YES  |     | 0       |                |
        | age        | int(11)      | NO   |     | NULL    |                |
        | address    | varchar(200) | YES  |     | NULL    |                |
        | email      | varchar(100) | YES  | UNI | NULL    |                |
        | date       | date         | YES  | MUL | NULL    |                |
        | wages      | decimal(7,3) | YES  |     | NULL    |                |
        | Entry_time | timestamp(6) | YES  |     | NULL    |                |
        +------------+--------------+------+-----+---------+----------------+
        9 rows in set
        
      • Structure of the staff_copy table:

        obclient> desc staff_copy;
        +------------+--------------+------+-----+---------+-------+
        | Field      | Type         | Null | Key | Default | Extra |
        +------------+--------------+------+-----+---------+-------+
        | id         | int(11)      | NO   |     | NULL    |       |
        | name       | varchar(18)  | YES  |     | NULL    |       |
        | sex        | char(1)      | YES  |     | NULL    |       |
        | age        | int(11)      | NO   |     | NULL    |       |
        | address    | varchar(200) | YES  |     | NULL    |       |
        | email      | varchar(100) | YES  |     | NULL    |       |
        | date       | date         | YES  |     | NULL    |       |
        | wages      | decimal(7,3) | YES  |     | NULL    |       |
        | Entry_time | timestamp(6) | YES  |     | NULL    |       |
        +------------+--------------+------+-----+---------+-------+
        9 rows in set
        

    Note

    The comparison results show that data in the destination staff_copy table is the same as that in the source staff table, but the constraints, indexes, and default values are missing in the structure of the staff_copy table.

    Copy the table structure

    You can also execute the CREATE TABLE LIKE statement to copy the structure but not the data of a table. Example:

    Copy the structure of the staff table to the staff_like table.

    Sample statement:

    CREATE TABLE staff_like LIKE staff;
    
    • Comparison of table structures

      • Structure of the staff table:

        obclient> desc staff;
        +------------+--------------+------+-----+---------+----------------+
        | Field      | Type         | Null | Key | Default | Extra          |
        +------------+--------------+------+-----+---------+----------------+
        | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name       | varchar(18)  | YES  |     | NULL    |                |
        | sex        | char(1)      | YES  |     | 0       |                |
        | age        | int(11)      | NO   |     | NULL    |                |
        | address    | varchar(200) | YES  |     | NULL    |                |
        | email      | varchar(100) | YES  | UNI | NULL    |                |
        | date       | date         | YES  | MUL | NULL    |                |
        | wages      | decimal(7,3) | YES  |     | NULL    |                |
        | Entry_time | timestamp(6) | YES  |     | NULL    |                |
        +------------+--------------+------+-----+---------+----------------+
        9 rows in set
        
      • Structure of the staff_like table:

        obclient> desc staff_like;
        +------------+--------------+------+-----+---------+----------------+
        | Field      | Type         | Null | Key | Default | Extra          |
        +------------+--------------+------+-----+---------+----------------+
        | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name       | varchar(18)  | YES  |     | NULL    |                |
        | sex        | char(1)      | YES  |     | 0       |                |
        | age        | int(11)      | NO   |     | NULL    |                |
        | address    | varchar(200) | YES  |     | NULL    |                |
        | email      | varchar(100) | YES  | UNI | NULL    |                |
        | date       | date         | YES  | MUL | NULL    |                |
        | wages      | decimal(7,3) | YES  |     | NULL    |                |
        | Entry_time | timestamp(6) | YES  |     | NULL    |                |
        +------------+--------------+------+-----+---------+----------------+
        9 rows in set
        
    • Comparison of the number of rows

      • Number of rows in the staff table:

        obclient> select count(*) from staff;
        +----------+
        | count(*) |
        +----------+
        |        3 |
        +----------+
        1 row in set
        
      • Number of rows in the staff_like table:

        obclient> select count(*) from staff_like;
        +----------+
        | count(*) |
        +----------+
        |        0 |
        +----------+
        1 row in set (0.01 sec)
        

    Create a replica table

    Table replication is an advanced tuning technique in OceanBase Database.

    In general, OceanBase Database uses a three-replica architecture: each partition of a table has three replicas, one of which is the leader and the others followers. By default, the leader provides both write and read services.

    A replica table, by contrast, has a replica on every server in the tenant: one leader and two or more followers. Transactions are fully synchronized from the leader to the followers to maintain strong consistency. This allows your application to execute SQL JOIN queries involving the replica table on a single server, for better performance.

    To create a replica table, add a DUPLICATE_SCOPE clause to the CREATE TABLE statement.

    Example:

    Create a table named staff_DC that uses the zstd compression algorithm. Enable the encoding storage format, and set the reserved space for macroblocks to 0, the microblock size to 16384, the attribute of the table to cluster-level replica table, and the locality to F@zone1.

    CREATE TABLE staff_DC(
      id int not null auto_increment,
      name varchar(18),
      sex char(1) default '0' check(
        sex = '0'
        or sex = '1'
      ),
      age int not null,
      address varchar(200),
      email varchar(100) unique,
      date date,
      wages number(7, 3),
      Entry_time TIMESTAMP(6),
      PRIMARY KEY (id)
    ) COMPRESSION = 'zstd_1.0' ROW_FORMAT = DYNAMIC PCTFREE = 0 BLOCK_SIZE = 16384 DUPLICATE_SCOPE = 'cluster' LOCALITY = 'F@zone1';
    
