Analyze online logs
In the example in this section, the IP address and port of the OBServer node are 10.10.10.1 and 2881. The logs of the root@mysql user from 2024-01-09 16:23:00 to the current time are analyzed, and the analysis results are written to the /data/logminer_output/ directory. The command is as follows:
[admin@test001 ~]$ oblogminer -c "10.10.10.1:2881" -u "root@mysql" -p "***" -s "2024-01-09 16:23:00" -o "file:///data/logminer_output/"
Analyze archived logs
In the example in this section, the archived logs in the /data/log_archive/ directory from 2024-01-09 16:23:00 to 2024-01-10 10:00:00 are analyzed, and the analysis results are written to the /data/logminer_output/ directory. The command is as follows:
[admin@test001 ~]$ oblogminer -a "file:///data/log_archive/" -s "2024-01-09 16:23:00" -e "2024-01-10 10:00:00" -o "file:///data/logminer_output/"
Sample analysis results
After running the preceding commands, you can find the following files under the output directory:
./logminer_output/
├── 0.csv
├── CHECKPOINT
├── COMMIT_INDEX
├── CONFIG
└── META
└── 0.meta
Directory
This section describes the files in the directory.
- `0.csv`: The output data file. The default format is CSV. You can add the `-f` or `--record_format` option to the command to specify a format. The file names are consecutive integers starting from zero. After a data file is generated, a corresponding meta file with the same name is written to the META directory.
- `CHECKPOINT`: The file that records the analysis progress.
- `COMMIT_INDEX`: The index file that records the timestamps of data files.
- `CONFIG`: The configuration for this log analysis.
- `META`: The directory that stores the meta files of output data files. A meta file has the same name as its data file, with `.meta` as the file extension. A meta file records the commit timestamp range and data size of its data file. The sample content is as follows:

  MIN_COMMIT_TS=1704788579999999
  MAX_COMMIT_TS=1704788633200501
  DATA_LEN=1066
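For example, you can use the meta files to find which data file covers a given commit timestamp without scanning the data files themselves. The following Python sketch is illustrative only; it assumes the KEY=VALUE layout and the example output directory shown above, and the helper name is hypothetical, not part of oblogminer:

```python
# Illustrative sketch: locate the data file whose commit-timestamp range
# covers a target timestamp. Path and helper name are hypothetical.
from pathlib import Path

def read_meta(path: Path) -> dict:
    """Parse a META/<n>.meta file of KEY=VALUE lines into a dict of ints."""
    meta = {}
    for line in path.read_text().splitlines():
        key, _, value = line.partition("=")
        if value:
            meta[key.strip()] = int(value)
    return meta

target_ts = 1704788614963917  # microsecond commit timestamp to look up
for meta_path in sorted(Path("/data/logminer_output/META").glob("*.meta")):
    meta = read_meta(meta_path)
    if meta["MIN_COMMIT_TS"] <= target_ts <= meta["MAX_COMMIT_TS"]:
        print(f"data file {meta_path.stem} covers the timestamp "
              f"({meta['DATA_LEN']} bytes)")
```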
Data file
You can add the `-f` or `--record_format` option to the command to specify one of the following formats for the data file: CSV, JSON, REDO_ONLY, or UNDO_ONLY. The formats are described as follows:
CSV
The data file extension is `.csv`. The sample content is as follows. For more information about the fields, see Fields.

TENANT_ID,TRANS_ID,PRIMARY_KEY,TENANT_NAME,DATABASE_NAME,TABLE_NAME,OPERATION,OPERATION_CODE,COMMIT_SCN,COMMIT_TIMESTAMP,SQL_REDO,SQL_UNDO,ORG_CLUSTER_ID
1004,0,"","mysql","test","","DDL",4,1704788606147445567,"2024-01-09 16:23:26.147445","create table t(a int, b varchar(100))","",1
1004,42590,"","","","","BEGIN",5,1704788614963917100,"2024-01-09 16:23:34.963917","","",1
1004,42590,"","mysql","test","t","INSERT",1,1704788614963917100,"2024-01-09 16:23:34.963917","INSERT INTO `test`.`t` (`a`, `b`) VALUES (1, 'abc');","DELETE FROM `test`.`t` WHERE `a`=1 AND `b`='abc' LIMIT 1;",1
1004,42590,"","","","","COMMIT",6,1704788614963917100,"2024-01-09 16:23:34.963917","","",1
1004,42604,"","","","","BEGIN",5,1704788619155575003,"2024-01-09 16:23:39.155575","","",1
1004,42604,"","mysql","test","t","DELETE",3,1704788619155575003,"2024-01-09 16:23:39.155575","DELETE FROM `test`.`t` WHERE `a`=1 AND `b`='abc' LIMIT 1;","INSERT INTO `test`.`t` (`a`, `b`) VALUES (1, 'abc');",1
1004,42604,"","","","","COMMIT",6,1704788619155575003,"2024-01-09 16:23:39.155575","","",1
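Because the first line of the CSV file is a header row, the file can be consumed with a standard CSV reader. The following Python sketch is illustrative and assumes the example output path above; it prints the redo SQL of data-changing records while skipping BEGIN/COMMIT markers, which carry no SQL:

```python
# Illustrative sketch: print the redo SQL of every data-changing record
# in the CSV output. BEGIN/COMMIT rows have an empty SQL_REDO field.
import csv

with open("/data/logminer_output/0.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["SQL_REDO"]:
            print(row["COMMIT_TIMESTAMP"], row["OPERATION"], row["SQL_REDO"])
```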
JSON

The data file extension is `.json`. Records are generated as rows separated with line breaks (`\n`). The sample content is as follows. For more information about the fields, see Fields.

{"TENANT_ID":1002,"TRANS_ID":4415602,"PRIMARY_KEY":"","TENANT_NAME":"","DATABASE_NAME":"","TABLE_NAME":"","OPERATION":"BEGIN","OPERATION_CODE":5,"COMMIT_SCN":1708391913195415000,"COMMIT_TIMESTAMP":"2024-02-20 01:18:33.195415","SQL_REDO":"","SQL_UNDO":"","ORG_CLUSTER_ID":1}
{"TENANT_ID":1002,"TRANS_ID":4415602,"PRIMARY_KEY":"","TENANT_NAME":"mysql","DATABASE_NAME":"test","TABLE_NAME":"t1","OPERATION":"INSERT","OPERATION_CODE":1,"COMMIT_SCN":1708391913195415000,"COMMIT_TIMESTAMP":"2024-02-20 01:18:33.195415","SQL_REDO":"INSERT INTO `test`.`t1` (`id`, `name`) VALUES (1, 'aaaa');","SQL_UNDO":"DELETE FROM `test`.`t1` WHERE `id`=1 AND `name`='aaaa' LIMIT 1;","ORG_CLUSTER_ID":1}
{"TENANT_ID":1002,"TRANS_ID":4415602,"PRIMARY_KEY":"","TENANT_NAME":"","DATABASE_NAME":"","TABLE_NAME":"","OPERATION":"COMMIT","OPERATION_CODE":6,"COMMIT_SCN":1708391913195415000,"COMMIT_TIMESTAMP":"2024-02-20 01:18:33.195415","SQL_REDO":"","SQL_UNDO":"","ORG_CLUSTER_ID":1}
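Because each line is a self-contained JSON object, the file can be streamed line by line. The following Python sketch is illustrative; the file name 0.json and the table name t1 are assumptions based on the examples above:

```python
# Illustrative sketch: stream the JSON output line by line and keep only
# the records that modify table `t1`. File and table names are assumed.
import json

with open("/data/logminer_output/0.json") as f:
    for line in f:
        record = json.loads(line)
        if record["TABLE_NAME"] == "t1":
            print(record["COMMIT_SCN"], record["OPERATION"], record["SQL_REDO"])
```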
REDO_ONLY

The file extension is `.sql`. Only SQL_REDO results are generated.

UNDO_ONLY

The file extension is `.sql`. Only SQL_UNDO results are generated.
Fields

The following table describes the fields.
| Field | Type | Description |
|---|---|---|
| TENANT_ID | Number | The ID of the tenant. |
| TRANS_ID | Number | The ID of the transaction to which the modification belongs. TRANS_ID uniquely identifies a transaction in a tenant. |
| PRIMARY_KEY | String | The primary keys of the modified row, which are separated by forward slashes (/) by default. |
| TENANT_NAME | String | The name of the tenant. |
| DATABASE_NAME | String | The name of the database. |
| TABLE_NAME | String | The name of the table. |
| OPERATION | String | The type of the SQL operation. Valid values: `INSERT`, `UPDATE`, `DELETE`, `DDL`, `BEGIN`, and `COMMIT`. |
| OPERATION_CODE | Number | The code of the operation. Valid values: `1` (INSERT), `2` (UPDATE), `3` (DELETE), `4` (DDL), `5` (BEGIN), and `6` (COMMIT). |
| COMMIT_SCN | Number | The system change number (SCN) of the transaction commit. |
| COMMIT_TIMESTAMP | Date | The time when the transaction was committed. |
| SQL_REDO | String | The redo SQL statement generated based on commit logs (clogs), which is equivalent to the SQL statement executed by the user in the database. |
| SQL_UNDO | String | The undo SQL statement for data flashback, which reverses the corresponding modification. SQL_UNDO is not generated for DDL transactions. |
| ORG_CLUSTER_ID | Number | The cluster ID specified by executing the `SET ob_org_cluster_id = ***;` statement. |
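As an example of consuming SQL_UNDO, the following Python sketch reads the CSV output and replays the undo statements in reverse order to flash back the captured changes. It is a minimal sketch, not a production flashback procedure: the pymysql client, the connection parameters, and the database name are assumptions based on the examples in this topic:

```python
# Illustrative sketch: replay SQL_UNDO statements in reverse order to flash
# back the changes captured in the CSV output. Connection parameters follow
# the example in this topic; the pymysql client is an assumption.
import csv
import pymysql

with open("/data/logminer_output/0.csv", newline="") as f:
    undo_sqls = [row["SQL_UNDO"] for row in csv.DictReader(f) if row["SQL_UNDO"]]

conn = pymysql.connect(host="10.10.10.1", port=2881, user="root@mysql",
                       password="***", database="test")
try:
    with conn.cursor() as cur:
        # Records are written in commit order, so undo them back to front.
        for sql in reversed(undo_sqls):
            cur.execute(sql)
    conn.commit()
finally:
    conn.close()
```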