OceanBase Database

SQL - V4.6.0


    Examples

    Last Updated: 2026-05-07 11:26:25

    Analyze online logs

    In this example, the IP address and port of the OBServer node are 10.10.10.1 and 2881. The logs of the root@mysql user from 2024-01-09 16:23:00 to the current time are analyzed, and the analysis results are written to the /data/logminer_output/ directory. The command is as follows:

    [admin@test001 ~]$ oblogminer -c "10.10.10.1:2881" -u "root@mysql" -p "***" -s "2024-01-09 16:23:00" -o "file:///data/logminer_output/"
    

    Analyze archived logs

    In this example, the archived logs in the /data/log_archive/ directory from 2024-01-09 16:23:00 to 2024-01-10 10:00:00 are analyzed, and the analysis results are written to the /data/logminer_output/ directory. The command is as follows:

    [admin@test001 ~]$ oblogminer -a "file:///data/log_archive/" -s "2024-01-09 16:23:00" -e "2024-01-10 10:00:00" -o "file:///data/logminer_output/"
    

    Sample analysis results

    After you run either of the preceding commands, the output directory contains the following files:

    ./logminer_output/
    ├── 0.csv
    ├── CHECKPOINT
    ├── COMMIT_INDEX
    ├── CONFIG
    └── META
        └── 0.meta
    

    Directory

    This section describes the files in the directory.

    • 0.csv: The output data file. The default format is CSV. You can add the -f or --record_format option to the command to specify a format. The file names are consecutive integers starting from zero. After a data file is generated, a corresponding meta file with the same name is written to the META directory.

    • CHECKPOINT: The analysis progress.

    • COMMIT_INDEX: The index file that records the timestamps of data files.

    • CONFIG: The configuration for this log analysis.

    • META: The directory that stores the meta files of output data files. A meta file has the same name as its data file, with .meta as the file extension. A meta file records the timestamp and data size of its data file. The sample content is as follows:

      MIN_COMMIT_TS=1704788579999999
      MAX_COMMIT_TS=1704788633200501
      DATA_LEN=1066
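    The key=value layout of a meta file is straightforward to consume programmatically. The following Python sketch parses the sample content above into a dictionary; it assumes only the three documented keys and treats the sample values as given:

    ```python
    # Parse the key=value content of an oblogminer meta file.
    # The text below is copied from the META sample above.
    meta_text = """MIN_COMMIT_TS=1704788579999999
    MAX_COMMIT_TS=1704788633200501
    DATA_LEN=1066"""

    meta = dict(line.strip().split("=", 1) for line in meta_text.splitlines())
    min_ts = int(meta["MIN_COMMIT_TS"])  # commit-timestamp lower bound of the data file
    max_ts = int(meta["MAX_COMMIT_TS"])  # commit-timestamp upper bound of the data file
    data_len = int(meta["DATA_LEN"])     # size of the corresponding data file

    assert min_ts <= max_ts
    print(data_len)
    ```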
      

    Data file

    You can add the -f or --record_format option to the command to specify one of the following formats for the data file: CSV, JSON, REDO_ONLY, or UNDO_ONLY. The formats are described as follows:

    • CSV

      The data file extension is .csv. The sample content is as follows. For more information about the fields, see Fields.

      TENANT_ID,TRANS_ID,PRIMARY_KEY,TENANT_NAME,DATABASE_NAME,TABLE_NAME,OPERATION,OPERATION_CODE,COMMIT_SCN,COMMIT_TIMESTAMP,SQL_REDO,SQL_UNDO,ORG_CLUSTER_ID
      1004,0,"","mysql","test","","DDL",4,1704788606147445567,"2024-01-09 16:23:26.147445","create table t(a int, b varchar(100))","",1
      1004,42590,"","","","","BEGIN",5,1704788614963917100,"2024-01-09 16:23:34.963917","","",1
      1004,42590,"","mysql","test","t","INSERT",1,1704788614963917100,"2024-01-09 16:23:34.963917","INSERT INTO `test`.`t` (`a`, `b`) VALUES (1, 'abc');","DELETE FROM `test`.`t` WHERE `a`=1 AND `b`='abc' LIMIT 1;",1
      1004,42590,"","","","","COMMIT",6,1704788614963917100,"2024-01-09 16:23:34.963917","","",1
      1004,42604,"","","","","BEGIN",5,1704788619155575003,"2024-01-09 16:23:39.155575","","",1
      1004,42604,"","mysql","test","t","DELETE",3,1704788619155575003,"2024-01-09 16:23:39.155575","DELETE FROM `test`.`t` WHERE `a`=1 AND `b`='abc' LIMIT 1;","INSERT INTO `test`.`t` (`a`, `b`) VALUES (1, 'abc');",1
      1004,42604,"","","","","COMMIT",6,1704788619155575003,"2024-01-09 16:23:39.155575","","",1
      
    • JSON

      The data file extension is .json. Each record is written as a single row, and rows are separated by line breaks (\n). The sample content is as follows. For more information about the fields, see Fields.

      {"TENANT_ID":1002,"TRANS_ID":4415602,"PRIMARY_KEY":"","TENANT_NAME":"","DATABASE_NAME":"","TABLE_NAME":"","OPERATION":"BEGIN","OPERATION_CODE":5,"COMMIT_SCN":1708391913195415000,"COMMIT_TIMESTAMP":"2024-02-20 01:18:33.195415","SQL_REDO":"","SQL_UNDO":"","ORG_CLUSTER_ID":1}
      {"TENANT_ID":1002,"TRANS_ID":4415602,"PRIMARY_KEY":"","TENANT_NAME":"mysql","DATABASE_NAME":"test","TABLE_NAME":"t1","OPERATION":"INSERT","OPERATION_CODE":1,"COMMIT_SCN":1708391913195415000,"COMMIT_TIMESTAMP":"2024-02-20 01:18:33.195415","SQL_REDO":"INSERT INTO `test`.`t1` (`id`, `name`) VALUES (1, 'aaaa');","SQL_UNDO":"DELETE FROM `test`.`t1` WHERE `id`=1 AND `name`='aaaa' LIMIT 1;","ORG_CLUSTER_ID":1}
      {"TENANT_ID":1002,"TRANS_ID":4415602,"PRIMARY_KEY":"","TENANT_NAME":"","DATABASE_NAME":"","TABLE_NAME":"","OPERATION":"COMMIT","OPERATION_CODE":6,"COMMIT_SCN":1708391913195415000,"COMMIT_TIMESTAMP":"2024-02-20 01:18:33.195415","SQL_REDO":"","SQL_UNDO":"","ORG_CLUSTER_ID":1}
      
    • REDO_ONLY

      The file extension is .sql. Only SQL_REDO results are generated.

    • UNDO_ONLY

      The file extension is .sql. Only SQL_UNDO results are generated.
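    Because each JSON record occupies one line, the output can be consumed with Python's standard json module. The following sketch parses two records abridged from the JSON sample above (only a subset of the documented fields is kept) and collects the redo statements of DML operations, i.e. operation codes 1 through 3:

    ```python
    import json

    # Two abridged records in the line-per-record JSON layout shown above.
    lines = [
        '{"TENANT_ID":1002,"TRANS_ID":4415602,"OPERATION":"INSERT","OPERATION_CODE":1,'
        '"SQL_REDO":"INSERT INTO `test`.`t1` (`id`, `name`) VALUES (1, \'aaaa\');",'
        '"SQL_UNDO":"DELETE FROM `test`.`t1` WHERE `id`=1 AND `name`=\'aaaa\' LIMIT 1;"}',
        '{"TENANT_ID":1002,"TRANS_ID":4415602,"OPERATION":"COMMIT","OPERATION_CODE":6,'
        '"SQL_REDO":"","SQL_UNDO":""}',
    ]

    records = [json.loads(line) for line in lines]
    # Keep the redo SQL of DML records only; BEGIN/COMMIT markers carry no SQL.
    redo_sql = [r["SQL_REDO"] for r in records if r["OPERATION_CODE"] in (1, 2, 3)]
    print(redo_sql)
    ```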

    The following table describes the fields.

    • TENANT_ID (Number): The ID of the tenant.
    • TRANS_ID (Number): The ID of the transaction to which the modification belongs. TRANS_ID uniquely identifies a transaction within a tenant.
    • PRIMARY_KEY (String): The primary key values of the modified row, separated by forward slashes (/) by default.
    • TENANT_NAME (String): The name of the tenant.
    • DATABASE_NAME (String): The name of the database.
    • TABLE_NAME (String): The name of the table.
    • OPERATION (String): The type of the SQL operation. Valid values:
      • INSERT: an INSERT operation.
      • UPDATE: an UPDATE operation.
      • DELETE: a DELETE operation.
      • DDL: a DDL operation.
      • BEGIN: the start of a transaction.
      • COMMIT: the end of a transaction.
    • OPERATION_CODE (Number): The code of the operation. Valid values:
      • 1: INSERT
      • 2: UPDATE
      • 3: DELETE
      • 4: DDL
      • 5: BEGIN
      • 6: COMMIT
    • COMMIT_SCN (Number): The system change number (SCN) of the transaction commit.
    • COMMIT_TIMESTAMP (Date): The time when the transaction was committed.
    • SQL_REDO (String): The redo SQL statement generated from commit logs (clogs), which is equivalent to the SQL statement executed by the user in the database.
    • SQL_UNDO (String): The undo SQL statement used for data flashback. SQL_UNDO is not generated for DDL transactions.
    • ORG_CLUSTER_ID (Number): The cluster ID specified by the SET ob_org_cluster_id = *** statement.
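    The field list maps one-to-one onto the columns of the CSV data file, so the output can be read with Python's csv module. The following sketch parses the header plus one row taken from the CSV sample above; note that all values come back as strings:

    ```python
    import csv
    import io

    # Header line and one INSERT row, copied from the CSV sample above.
    csv_text = (
        "TENANT_ID,TRANS_ID,PRIMARY_KEY,TENANT_NAME,DATABASE_NAME,TABLE_NAME,"
        "OPERATION,OPERATION_CODE,COMMIT_SCN,COMMIT_TIMESTAMP,SQL_REDO,SQL_UNDO,ORG_CLUSTER_ID\n"
        '1004,42590,"","mysql","test","t","INSERT",1,1704788614963917100,'
        '"2024-01-09 16:23:34.963917","INSERT INTO `test`.`t` (`a`, `b`) VALUES (1, \'abc\');",'
        '"DELETE FROM `test`.`t` WHERE `a`=1 AND `b`=\'abc\' LIMIT 1;",1\n'
    )

    # DictReader uses the header row as keys and honors quoted fields,
    # so commas inside the SQL statements are handled correctly.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    row = rows[0]
    print(row["OPERATION"], row["TABLE_NAME"])  # INSERT t
    ```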
