As a core component for ensuring the high availability of OceanBase Database, the backup and restore module protects data against misoperations and storage media failures. If data is lost due to a misoperation or damage to the storage media, you can restore it from backups.
Overview
The backup and restore module of OceanBase Database provides the backup, restore, and cleanup features.
OceanBase Database V4.2.5 supports the following backup media: Alibaba Cloud Object Storage Service (OSS), Tencent Cloud Object Storage (COS), Network File System (NFS), Amazon Simple Storage Service (S3), and object storage services that are compatible with the S3 protocol, such as Huawei Object Storage Service (OBS) and Google Cloud Storage (GCS). S3, OBS, and GCS are supported since OceanBase Database V4.2.1 BP7.
OceanBase Database supports tenant-level physical backup. Physical backup data consists of backup data and archive log data. Therefore, physical backup involves both log archiving and data backup. Unless otherwise specified, the tenants mentioned in this topic are user tenants. Physical backup is not supported for the sys tenant or meta tenants.
Data backup refers to the backup of tenant data and includes full backup and incremental backup:

- Full backup refers to the backup of all macroblocks.
- Incremental backup refers to the backup of macroblocks that are added or modified since the last backup.
Notice
To perform physical backup, you must first enable log archiving.
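The following minimal sketch shows this order of operations, run as an administrator user in the user tenant. The file:// paths are placeholder values for illustration; replace them with your own backup destination.

```sql
-- Log archiving must be configured and enabled before data backup.
ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'LOCATION=file:///backup/archive';
ALTER SYSTEM ARCHIVELOG;

-- Configure the data backup destination (placeholder path).
ALTER SYSTEM SET DATA_BACKUP_DEST = 'file:///backup/data';

-- Initiate a full backup; subsequent backups can be incremental.
ALTER SYSTEM BACKUP DATABASE;
ALTER SYSTEM BACKUP INCREMENTAL DATABASE;
```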
Data backup covers the following data:
- Tenant information, including the tenant name, cluster name, time zone, locality, and tenant mode (MySQL or Oracle)
- All user table data
Note
Data backup backs up system variables, but does not back up tenant parameters or private system table data.
Log archiving refers to the automatic backup of log data. OBServer nodes regularly archive log data to the specified backup path without manual triggering.
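To confirm that archiving is in progress, you can query the tenant's archive progress, for example through the V4.x view DBA_OB_ARCHIVELOG; a sketch for a MySQL-mode tenant:

```sql
-- Shows the status, start SCN, and checkpoint of the current archive round.
SELECT * FROM oceanbase.DBA_OB_ARCHIVELOG;
```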
OceanBase Database supports tenant-level and table-level restore. Tenant-level restore is the process of rebuilding a new tenant based on existing backup data. The entire restore process is completed by executing the ALTER SYSTEM RESTORE TENANT statement. The tenant-level restore process includes the restore and recovery of tenant system tables and user tables. Restore copies the required baseline data to the OBServer nodes of the destination tenant. Recovery replays the logs corresponding to the baseline data on those OBServer nodes.
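A minimal sketch of a tenant-level restore, run in the sys tenant; the tenant name, URIs, timestamp, and resource pool are placeholder values:

```sql
-- Restores a new tenant from the data backup path and log archive path
-- up to the specified point in time.
ALTER SYSTEM RESTORE restored_tenant
  FROM 'file:///backup/data,file:///backup/archive'
  UNTIL TIME = '2023-01-11 19:35:00'
  WITH 'pool_list=restore_pool';
```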
Table-level restore is the process of restoring a specified table from backup data to an existing tenant, which can be the source tenant, another tenant in the same cluster, or a tenant in a different cluster.
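Table-level restore is initiated with the RECOVER TABLE statement in V4.2.1 and later. A sketch, with the table, tenant, paths, timestamp, and resource pool all assumed for illustration:

```sql
-- Run in the sys tenant. Restores one table from backup into an existing tenant.
-- A temporary auxiliary tenant is created during the process, hence the pool_list option.
ALTER SYSTEM RECOVER TABLE test_db.orders
  TO TENANT dest_tenant
  FROM 'file:///backup/data,file:///backup/archive'
  UNTIL TIME = '2023-01-11 19:35:00'
  WITH 'pool_list=recover_pool';
```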
Backup media
OceanBase Database V4.2.5 allows you to use OSS, COS, NFS, S3, and object storage services compatible with the S3 protocol, such as OBS and GCS, as backup media. S3, OBS, and GCS are supported in OceanBase Database V4.2.1 BP7 and later.
Requirements for specific backup media are as follows:
OSS
OSS must support the following API operations:
| Operation | Description |
|---|---|
| PutObject | Uploads a single object. |
| DeleteObject | Deletes a single object. |
| DeleteObjects | Deletes multiple objects at a time. |
| GetObject | Gets the specified object. |
| ListObjects | Lists all objects in the bucket. This operation relies on strong data consistency. |
| HeadObject | Gets the metadata of the specified object. |
| AppendObject | Appends objects. |
| PutObjectTagging (optional) | Sets and updates the tags of objects. |
| GetObjectTagging (optional) | Gets the tags of objects. |
| InitiateMultipartUpload | Initializes a multipart upload task. |
| UploadPart | Uploads parts. |
| CompleteMultipartUpload | Combines uploaded parts into an object. |
| AbortMultipartUpload | Aborts a multipart upload task and deletes uploaded parts. |
| ListMultipartUploads | Lists multipart upload tasks that have been initialized but not completed or aborted. |
| ListParts | Lists uploaded parts in an upload task. |

Only the V1 signature algorithm is supported.
NFS: NFS 4.1 and later are supported.
COS
The list cache of the bucket must be disabled. Otherwise, a backup inconsistency error occurs. For guidance on how to disable the list cache of a bucket, contact the technical support of COS.
To use the AppendObject operation for a bucket, you must disable the multi-availability zone (AZ) feature. If the multi-AZ feature is enabled, an error is reported during backup.
Object storage services compatible with the S3 protocol, such as OBS and GCS
The object storage services must support the following S3 API operations:
| Operation | Description |
|---|---|
| PutObject | Uploads a single object. |
| DeleteObject | Deletes a single object. |
| DeleteObjects | Deletes multiple objects at a time. |
| GetObject | Downloads a single object. |
| ListObjects | Lists all objects in the path. |
| HeadObject | Gets the metadata of the specified object. |
| PutObjectTagging (optional) | Sets the tags of objects. |
| GetObjectTagging (optional) | Gets the tags of objects. |
| CreateMultipartUpload | Initializes a multipart upload task. |
| UploadPart | Uploads a single part. |
| CompleteMultipartUpload | Combines uploaded parts into an object. |
| AbortMultipartUpload | Aborts a multipart upload task and deletes uploaded parts. |
| ListMultipartUploads | Lists multipart upload tasks. |
| ListParts | Lists uploaded parts in an upload task. |

The object storage services support URLs of the virtual-hosted style by default. For more information about requests of the virtual-hosted style, visit the official website of AWS. For more information about how to configure another archive destination, see SET LOG_ARCHIVE_DEST.
Before you select a backup medium, you can run the test_io_device command in ob_admin to check whether the I/O operations and privileges supported by the backup medium meet the backup and restore requirements. You can also run the io_adapter_benchmark command in ob_admin to measure the read/write performance between an OBServer node and the backup medium and estimate the backup performance.
Directory structure
Data backup directories
The directories created by the backup feature in the backup destination and types of files in the directories are as follows:
data_backup_dest
├── format.obbak // The format information of the backup path.
├── check_file
│ └── 1002_connect_file_20230111T193020.obbak // The connectivity check file.
├── backup_sets // The summary directory that contains all data backup sets.
│ ├── backup_set_1_full_end_success_20230111T193420.obbak // The end placeholder for a full backup.
│ ├── backup_set_1_full_start.obbak // The start placeholder for a full backup.
│ ├── backup_set_2_inc_start.obbak // The start placeholder for an incremental backup.
│ └── backup_set_2_inc_end_success_20230111T194420.obbak // The end placeholder for an incremental backup.
└── backup_set_1_full // A full backup set. A directory whose name ends with "full" is a full backup set, and a directory whose name ends with "inc" is an incremental backup set.
├── backup_set_1_full_20230111T193330_20230111T193420.obbak // The placeholder that represents the start and end time of a full backup.
├── single_backup_set_info.obbak // The metadata of the current backup set.
├── tenant_backup_set_infos.obbak // The full backup set information of the current tenant.
├── infos
│ ├── table_list // The list of table name files.
│ │ ├── table_list.1702352553000000000.1.obbak // Table name file 1.
│ │ ├── table_list.1702352553000000000.2.obbak // Table name file 2.
│ │ └── table_list_meta_info.1702352553000000000.obbak // The table name metadata file.
│ ├── major_data_info_turn_1 // The tenant-level baseline data backup file when the turn ID is 1.
│ │ ├── tablet_log_stream_info.obbak // The file describing the mapping between tablets and log streams.
│ │ ├── tenant_major_data_macro_range_index.0.obbak // The macroblock index in the baseline data backup file.
│ │ ├── tenant_major_data_meta_index.0.obbak // The meta index in the baseline data backup file.
│ │ └── tenant_major_data_sec_meta_index.0.obbak // The mapping between the logic ID and the physical ID of the meta index in the baseline data backup file.
│ ├── minor_data_info_turn_1 // The tenant-level minor-compacted backup file when the turn ID is 1.
│ │ ├── tablet_log_stream_info.obbak // The file describing the mapping between tablets and log streams.
│ │ ├── tenant_minor_data_macro_range_index.0.obbak // The macroblock index in the minor-compacted backup file.
│ │ ├── tenant_minor_data_meta_index.0.obbak // The meta index in the minor-compacted backup file.
│ │ └── tenant_minor_data_sec_meta_index.0.obbak // The mapping between the logic ID and the physical ID of the meta index in the minor-compacted backup file.
│ ├── diagnose_info.obbak // The diagnosis information file of the backup set.
│ ├── tenant_parameter.obbak // The user-specified tenant-level parameters of the current tenant.
│ ├── locality_info.obbak // The tenant locality information of the current backup set, including the resource configuration and replica distribution information of the tenant.
│ └── meta_info // The tenant-level log stream metadata file, which contains the metadata of all log streams.
│ ├── ls_attr_info.1.obbak // The snapshot of log streams during backup.
│ └── ls_meta_infos.obbak // The collection of metadata of all log streams.
├── logstream_1 // Log stream 1.
│ ├── major_data_turn_1_retry_0 // The baseline data when the turn ID is 1 and retry ID is 0.
│ │ ├── macro_block_data.0.obbak // A data file sized between 512 MB and 4 GB.
│ │ ├── macro_range_index.obbak // The macro index.
│ │ ├── meta_index.obbak // The meta index.
│ │ └── sec_meta_index.obbak // The file describing the mapping between the logical ID and the physical ID.
│ ├── meta_info_turn_1_retry_0 // The metadata file of the log stream when the turn ID is 1 and retry ID is 0.
│ │ ├── ls_meta_info.obbak // The metadata of the log stream.
│ │ └── tablet_info.1.obbak // The metadata of log stream tablets.
│ ├── minor_data_turn_1_retry_0 // The minor-compacted data when the turn ID is 1 and retry ID is 0.
│ │ ├── macro_block_data.0.obbak
│ │ ├── macro_range_index.obbak
│ │ ├── meta_index.obbak
│ │ └── sec_meta_index.obbak
│ └── sys_data_turn_1_retry_0 // The system tablet data when the turn ID is 1 and retry ID is 0.
│ ├── macro_block_data.0.obbak
│ ├── macro_range_index.obbak
│ ├── meta_index.obbak
│ └── sec_meta_index.obbak
└── logstream_1001 // Log stream 1001.
├── major_data_turn_1_retry_0
│ ├── macro_block_data.0.obbak
│ ├── macro_range_index.obbak
│ ├── meta_index.obbak
│ └── sec_meta_index.obbak
├── meta_info_turn_1_retry_0
│ ├── ls_meta_info.obbak
│ └── tablet_info.1.obbak
├── minor_data_turn_1_retry_0
│ ├── macro_block_data.0.obbak
│ ├── macro_range_index.obbak
│ ├── meta_index.obbak
│ └── sec_meta_index.obbak
└── sys_data_turn_1_retry_0
├── macro_block_data.0.obbak
├── macro_range_index.obbak
├── meta_index.obbak
└── sec_meta_index.obbak
Top-level data backup directories contain the following types of data:
- format.obbak: This file records the metadata of the backup path.
- check_file: This directory records the checks on the connectivity to the user data backup directory.
- backup_sets: This directory contains all data backup sets.
- backup_set_1_full: This directory represents a data backup set. A directory whose name ends with `full` is a full backup set, and a directory whose name ends with `inc` is an incremental backup set. Each data backup generates a corresponding backup set, and the backup set is not modified after the data backup is completed.

A data backup set mainly includes the following data:

- backup_set_1_full_20230111T193330_20230111T193420.obbak: This file records the ID, start time, and end time of the current backup set. It is used only for display purposes.
- single_backup_set_info.obbak: This file records the metadata of the current backup set, including the backup timestamp, dependent logs, and other information.
- tenant_backup_set_infos.obbak: This file records the metadata of all existing backup sets of the current tenant.
- infos: This directory records the metadata of the data backup set.
- logstream_1: This directory records all the data of log stream 1, which is the system log stream of an OceanBase Database tenant.
- logstream_1001: This directory records all the data of log stream 1001. Log streams with IDs greater than 1000 are user log streams of an OceanBase Database tenant.
In addition, each log stream backup has four types of directories. Directories whose names contain `retry` record information about log stream-level retries, and directories whose names contain `turn` record information about tenant-level retries.

- meta_info_xx: This directory records log stream metadata and tablet metadata.
- sys_data_xx: This directory records the data of internal system tablets in log streams.
- minor_data_xx: This directory records the minor-compacted data of regular tablets.
- major_data_xx: This directory records the baseline data of regular tablets.
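These directories correspond to the backup sets that the tenant can see in its backup views; for example, a sketch that lists them through the V4.x view DBA_OB_BACKUP_SET_FILES in a MySQL-mode tenant:

```sql
-- One row per backup set: ID, backup type (FULL/INC), status, timestamps, and path.
SELECT * FROM oceanbase.DBA_OB_BACKUP_SET_FILES;
```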
Cluster-level parameter backup directories
Each time you initiate a backup for cluster-level parameters, the system generates a backup file for the cluster-level parameters in the specified directory. The directory structure is as follows:
cluster_parameters_backup_dest
├── cluster_parameter.20240710T103610.obbak # The user-specified cluster-level parameters. The file is named in the `cluster_parameter.[timestamp]` format.
└── cluster_parameter.20241018T140609.obbak
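As a sketch, a cluster parameter backup could be initiated as follows; the statement form and the path are assumptions based on the parameter backup feature described above, so check the reference documentation for your version:

```sql
-- Run in the sys tenant; the destination path is a placeholder.
ALTER SYSTEM BACKUP CLUSTER PARAMETERS TO 'file:///backup/cluster_parameters';
```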
Log archive directories
When you use OSS, NFS, or COS as the backup medium, the directories at the backup destination and the file types under each directory are as follows:
log_archive_dest
├── check_file
│ └── 1002_connect_file_20230111T193049.obbak // The connectivity check file.
├── format.obbak // The format information of the backup path.
├── rounds // The round placeholder directory.
│ └── round_d1002r1_start.obarc // The round start placeholder.
├── pieces // The piece placeholder directory.
│ ├── piece_d1002r1p1_start_20230111T193049.obarc // The piece start placeholder, in the format of piece_DESTID_ROUNDID_PIECEID_start_DATE.
│ └── piece_d1002r1p1_end_20230111T193249.obarc // The piece end placeholder, in the format of piece_DESTID_ROUNDID_PIECEID_end_DATE.
└── piece_d1002r1p1 // The piece directory, named in the format of piece_DESTID_ROUNDID_PIECEID.
├── piece_d1002r1p1_20230111T193049_20230111T193249.obarc // The contiguous interval of a piece.
├── checkpoint
│ ├── checkpoint.1673436649723677822.obarc // The checkpoint_scn of the piece. This file is named in the format of `checkpoint.checkpoint_scn`.
│ └── checkpoint_info.0.obarc // The metadata of the checkpoint.
├── single_piece_info.obarc // The metadata of the piece.
├── tenant_archive_piece_infos.obarc // The metadata of all frozen pieces before this piece.
├── file_info.obarc // The list of files in all log streams.
├── logstream_1 // Log stream 1.
│ ├── file_info.obarc // The list of files in log stream 1.
│ ├── log
│ │ └── 1.obarc // The archive file in log stream 1.
│ └── schema_meta // The metadata of data dictionaries. This directory is generated only in log stream 1.
│ └── 1677588501408765915.obarc
└── logstream_1001 // Log stream 1001.
├── file_info.obarc // The list of files in log stream 1001.
└── log
└── 1.obarc // The archive file in log stream 1001.
Top-level log archive directories contain the following types of data:
- format.obbak: This file records the metadata of the archive path, including the tenant that uses the path.
- check_file: This directory records the checks on the connectivity to the user log archive directory.
- rounds: This directory records all rounds of log archiving.
- pieces: This directory records all pieces of log archiving.
- piece_d1002r1p1: This directory records a specific piece of log archiving. It is named in the format of piece_DESTID_ROUNDID_PIECEID, where DESTID is the ID corresponding to log_archive_dest, ROUNDID is the ID of the log archiving round, which is a monotonically increasing integer, and PIECEID is the ID of the log archiving piece, which is also a monotonically increasing integer.

A log archiving piece directory includes the following data:

- piece_d1002r1p1_20230111T193049_20230111T193249.obarc: This file records the ID, start time, and end time of the current piece and is used only for display purposes.
- checkpoint: This directory records the archiving checkpoints of active pieces. The ObArchiveScheduler module periodically updates the checkpoint information in this directory. Files in this directory are described as follows:
  - checkpoint.1673436649723677822.obarc: The file name contains the checkpoint SCN of the piece, for example, 1673436649723677822.
  - checkpoint_info.0.obarc: This file records the checkpoint metadata of active pieces, including tenant_id, dest_id, round_id, and piece_id. The metadata in a piece remains unchanged.
- single_piece_info.obarc: This file records the metadata of the current piece.
- tenant_archive_piece_infos.obarc: This file records the metadata of all frozen pieces in the current tenant.
- file_info.obarc: This file records the list of log streams within the piece.
- logstream_1: This directory records the log files of log stream 1, which is the system log stream of an OceanBase Database tenant.
- logstream_1001: This directory records the log files of log stream 1001. Log streams with IDs greater than 1000 are user log streams of an OceanBase Database tenant.
Additionally, each log stream backup contains three types of data:
- file_info.obarc: This file records the list of files in the log stream.
- log: This directory contains all the archive files of the current log stream, with file names consistent with those in the source cluster.
- schema_meta: This directory records the metadata of data dictionaries. It is present only in the system log stream, not in user log streams.
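The pieces and files in these directories can also be inspected from the database side; a sketch using the V4.x view DBA_OB_ARCHIVELOG_PIECE_FILES in a MySQL-mode tenant:

```sql
-- One row per piece: dest_id, round_id, piece_id, status, and path.
SELECT * FROM oceanbase.DBA_OB_ARCHIVELOG_PIECE_FILES;
```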
For S3 or a backup medium compatible with the S3 protocol, the structure of log archive directories is different from that of OSS, NFS, or COS. A single archive file consists of multiple small files and corresponding metadata files. The directory structure is as follows.
Note
Despite the differences in the structures of log archive directories, the backup files migrated from S3 or backup media compatible with the S3 protocol to OSS, NFS, or COS can be used for data restore. For example, if you copy the archived data in native S3 to OSS, you can restore data by using the archive path on OSS.
log_archive_dest
├── ......
└── piece_d1002r1p1 // The piece directory, named in the format of piece_DESTID_ROUNDID_PIECEID.
├── ...... // The list of files in all log streams.
├── logstream_1 // Log stream 1.
│ ├── file_info.obarc // The list of files in log stream 1.
│ ├── log
│ │ └── 1.obarc // The archive file in log stream 1, which is identified by a prefix.
│ │     ├── @APD_PART@0-32472973.obarc // The actual data in the archive file, including the data from byte 0 to byte 32472973 in the log file.
│ │     ├── ......
│ │     ├── @APD_PART@FORMAT_META.obarc // The format of the archive file.
│ │     └── @APD_PART@SEAL_META.obarc // The metadata of the archive file.
│ └── schema_meta // The metadata of the data dictionary. This directory is generated only in log stream 1.
│ └── 1677588501408765915.obarc
└── logstream_1001 // Log stream 1001.
├── file_info.obarc // The list of files in log stream 1001.
└── log
└── 1.obarc // The archive file in log stream 1001.
In the preceding directory structure, 1.obarc indicates a single archive file that is identified by the prefix 1. The prefix and the name of the archive file are the same. An archive file contains the following three types of data:

- @APD_PART@FORMAT_META.obarc: When data is written to the archive file for the first time, the format_meta file is generated in this directory to record the format of the archive file.
- @APD_PART@0-32472973.obarc: The actual data in the archive file is written to files named with this prefix, and the start offset and end offset of each write are recorded in the file name.
- @APD_PART@SEAL_META.obarc: After data is written to the archive file for the last time, the seal_meta file is generated in this directory to record the metadata of the archive file.
Feature differences between OceanBase Database V4.x and V3.x/V2.2x
Log archiving
| Item | V3.x/V2.2x | V4.x |
|---|---|---|
| Log archiving level | Cluster level | Tenant level |
| Log archiving granularity | Partition | Log stream |
| Required privileges | Operations such as setting the archive path, enabling archiving, and viewing the archive progress can be performed only in the sys tenant. | These operations can be performed either in the sys tenant or by an administrator user in a user tenant. |
| Usage |  | You can use the ALTER SYSTEM SET LOG_ARCHIVE_DEST statement to set the archive path and the piece switching interval for a tenant. The default piece switching interval is 1d, which indicates one day. |
| Log splitting | Log splitting can be disabled and is disabled by default. | Log splitting must be enabled, and the default piece switching interval is one day. |
| Setting the lag of log archiving | Use the ALTER SYSTEM SET LOG_ARCHIVE_CHECKPOINT_INTERVAL statement. | Use the ALTER SYSTEM SET ARCHIVE_LAG_TARGET statement. |
| Result returned by the ALTER SYSTEM ARCHIVELOG statement in the sys tenant | If archiving is enabled for all tenants in the current cluster, archiving is automatically enabled for new tenants created later. | If archiving is enabled for all tenants in the current cluster, archiving is not automatically enabled for new tenants created later. |
| Compression of archive logs | Use the ALTER SYSTEM SET BACKUP_LOG_ARCHIVE_OPTION statement to enable this feature. | Not supported |
| Archive-related views | Three archive-related views are available. | Eight archive-related views are available. |
| Archive media | SSD only | HDD or SSD |
| Number of archive files | The number of archive files is proportional to the number of partitions. In a scenario with millions of partitions, a large number of small files are generated. | The number of files is small and is irrelevant to the number of partitions. |
| Standby tenant archiving | Not supported | Supported |
| Forced termination of archiving | Use the ALTER SYSTEM CANCEL ALL BACKUP FORCE statement to trigger this operation. | When you stop archiving, if the archive medium is continuously unavailable, for example, because the archive medium is full or the archive path is inaccessible, the system automatically triggers this operation 10 minutes later. |
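For example, the V4.x piece switching interval and archive lag listed above are both tenant-level settings; a sketch with placeholder values:

```sql
-- Archive path plus a 1-day piece switching interval.
ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'LOCATION=file:///backup/archive PIECE_SWITCH_INTERVAL=1d';
-- Keep the archive lag within 120 seconds.
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = '120s';
```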
Data backup
| Item | V3.x/V2.2x | V4.x |
|---|---|---|
| Backup level | Cluster level | Tenant level |
| Required privileges | Operations such as setting a backup path, starting backup, and viewing the backup progress can be performed only in the sys tenant. | These operations can be performed either in the sys tenant or by an administrator user in a user tenant. |
| Setting a backup path | You can use the ALTER SYSTEM SET BACKUP_DEST statement to set a backup path for a cluster. | You can use the ALTER SYSTEM SET DATA_BACKUP_DEST statement to set a backup path for a tenant. |
| Backing up data to a specified path | You can use the ALTER SYSTEM BACKUP TENANT tenant_name_list TO backup_destination; statement to initiate the backup from the sys tenant. | Not supported |
| BACKUP PLUS ARCHIVELOG | Not supported | Supported |
| Storage space expansion | Snapshot points are retained during backup, which causes storage space expansion during backup. | Snapshot points are not retained, thereby avoiding the storage space expansion issue. |
| Standby tenant backup | Not supported | Supported |
| Views | Five backup-related views are available. | Ten backup-related views are available. |
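For example, the BACKUP PLUS ARCHIVELOG capability listed above backs up data together with the logs needed to make the backup set directly restorable; a sketch, run in the user tenant after the backup and archive destinations are configured:

```sql
-- Full backup that also includes the archived logs up to the backup end point.
ALTER SYSTEM BACKUP DATABASE PLUS ARCHIVELOG;
```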
Physical restore
| Item | V3.x/V2.2x | V4.x |
|---|---|---|
| Data path | The cluster-level backup path must be specified in the restore command. | Both the data backup path and the log archive path must be specified in the restore command. |
| Restore concurrency settings | Execute the ALTER SYSTEM SET RESTORE_CONCURRENCY statement to set the restore concurrency before you run the restore command. | Specify the concurrency in the restore command. |
| Key management |  |  |
| Roles of restored tenants | Primary tenants, namely primary databases | Standby tenants, namely standby databases |
| Upgrade | Tenants are automatically upgraded during the restore process. | You must manually upgrade the tenants after they are restored. |
| Allowlist-based restore (restore of specified tables in the tenant) | Supported | Not supported |
| Table-level restore | Not supported | Supported in V4.2.1 and later |
| Restore by using the ADD RESTORE SOURCE statement | Supported | Not supported |
References
For more information about physical backup and restore, see Backup and restore.