We recommend that you use Alibaba Cloud Object Storage Service (OSS) as the backup destination. OSS is stateless and more stable than the stateful Network File System Version 4 (NFS4).
To use NFS as the backup destination, deploy NFS first by referring to this topic. NFS is available as a software implementation or as a dedicated hardware appliance. We recommend that you use a dedicated NFS hardware appliance.
Considerations
If you use NFS, make sure that NFS is mounted before you enable the backup service. If NFS fails during a backup, stop data backup and log archiving first, and then resolve the NFS issue.
Concurrent backup control in OceanBase Database depends on the file lock feature of NFS version 4. Therefore, we recommend that you mount NFS version 4.1 or later.
When you use NFS to back up data and logs, make sure that all OBServer nodes mount NFS from the same server and use the parameter settings recommended in this topic. For more information about how to mount NFS, see Deploy the NFS client.
When you restart an OBServer node, start NFS first.
After you add a new server, make sure that NFS is mounted, or that data can be backed up to another medium, before you start the new OBServer node.
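The mount precondition above can be checked from a script before an OBServer node is started. The following is a minimal sketch; the mount point `/data/nfs` is an assumed example, and the function reads a mounts table (normally `/proc/mounts`) passed as a parameter:

```shell
#!/bin/bash
# Sketch: confirm that the backup directory is an NFS v4 mount before
# starting an OBServer node. /data/nfs is an assumed example path.
is_nfs_mounted() {
  local mounts_file="$1" mount_point="$2"
  # Match lines like: 10.10.10.1:/data/1 /data/nfs nfs4 rw,sync,hard 0 0
  awk -v mp="$mount_point" '$2 == mp && $3 ~ /^nfs4?$/ { found = 1 } END { exit !found }' "$mounts_file"
}

if is_nfs_mounted /proc/mounts /data/nfs; then
  echo "NFS is mounted at /data/nfs; safe to start the OBServer node"
else
  echo "NFS is not mounted at /data/nfs; mount it before starting the node" >&2
fi
```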
Deploy the NFS server
Notice
If you use an NFS hardware device, skip this operation and directly deploy the NFS client.
Log on to the NFS server.
Run the following command to install NFS by using the YUM package manager:
```shell
sudo yum install nfs-utils
```

Configure the exports file.

Select a directory as the shared directory. Before you select the directory, make sure that the directory meets the backup requirements for space and performance. The shared directory in the following sample code is `/data/nfs_server/`.

Run the `sudo vim /etc/exports` command to open the configuration file, and then add the following line:

```
/data/nfs_server/ xx.xx.xx.xx/16(rw,sync,all_squash)
```

Here, `xx.xx.xx.xx` specifies the accessible CIDR block.

Run the following command to grant privileges to `nfsnobody` so that `nfsnobody` can access the directory specified in the exports file:

```shell
sudo chown nfsnobody:nfsnobody -R /data/nfs_server
```
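Before reloading the export list, a script can sanity-check the format of an exports entry. The following is a minimal sketch; the path and CIDR are example values in the style of this topic, and `exportfs -ra` / `exportfs -v` are the standard commands to apply and inspect exports without a full restart:

```shell
# Sketch: validate the shape of an /etc/exports entry before applying it.
# Expected form: <absolute-path> <cidr>(<options>)
validate_exports_line() {
  echo "$1" | grep -Eq '^/[^ ]+ +[0-9]+(\.[0-9]+){3}/[0-9]+\([a-z_,]+\)$'
}

line='/data/nfs_server/ 10.10.0.0/16(rw,sync,all_squash)'
if validate_exports_line "$line"; then
  echo "exports line looks well-formed"
fi

# After editing /etc/exports, apply it without restarting the service:
#   sudo exportfs -ra      # re-export all directories
#   sudo exportfs -v       # verify the active export list
```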
Set the NFS parameters.
Run the `sudo vim /etc/sysconfig/nfs` command to open the configuration file, and then set the following parameters:

```
RPCNFSDCOUNT=8
RPCNFSDARGS="-N 2 -N 3 -U"
NFSD_V4_GRACE=90
NFSD_V4_LEASE=90
```

Run the following commands to restart NFS:

```shell
sudo systemctl restart nfs-config
sudo systemctl restart nfs-server
```
Set the slot table.
Run the `sudo vim /etc/sysctl.conf` command to open the sysctl.conf file, and then add the following line:

```
sunrpc.tcp_max_slot_table_entries=128
```

Run the following command to change the maximum number of concurrent NFS requests to 128:

```shell
sudo sysctl -w sunrpc.tcp_max_slot_table_entries=128
```

After the command is executed, run the `cat /proc/sys/sunrpc/tcp_max_slot_table_entries` command to check whether the setting has taken effect. If the return value is 128, the modification is successful.

(Optional) Restart the server.
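Because the runtime value is reset on reboot, it is worth confirming that the slot-table setting is also persisted in a sysctl configuration file. The following is a minimal sketch; the file path is a parameter so the check can be pointed at `/etc/sysctl.conf` in practice:

```shell
# Sketch: check that the sunrpc slot-table setting is persisted in a sysctl
# configuration file (normally /etc/sysctl.conf) so it survives a reboot.
slot_setting_persisted() {
  grep -Eq '^sunrpc\.tcp_max_slot_table_entries *= *128 *$' "$1"
}

conf=/etc/sysctl.conf   # adjust if the setting lives in /etc/sysctl.d/
if [ -r "$conf" ] && slot_setting_persisted "$conf"; then
  echo "slot table setting is persisted in $conf"
else
  echo "add sunrpc.tcp_max_slot_table_entries=128 to $conf" >&2
fi
```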
Deploy the NFS client
The NFS client must be deployed on all OBServer nodes.
This section describes how to deploy the NFS client on an OBServer node.
Log on to an OBServer node.
Run the following command to install NFS by using the YUM package manager:
```shell
sudo yum install nfs-utils
```

Set the slot table.

Run the `sudo vim /etc/sysctl.conf` command to open the sysctl.conf file, and then add the following line:

```
sunrpc.tcp_max_slot_table_entries=128
```

Run the following command to change the maximum number of concurrent NFS requests to 128:

```shell
sudo sysctl -w sunrpc.tcp_max_slot_table_entries=128
```

After the command is executed, run the `cat /proc/sys/sunrpc/tcp_max_slot_table_entries` command to check whether the setting has taken effect. If the return value is 128, the modification is successful.

(Optional) Restart the server.
Select a directory as the mount point and run the following command to mount NFS.
In the following sample code, NFS is mounted to the `/data/nfs` directory. You can specify a custom directory if no appropriate mount directory is available.

```shell
sudo mount -t nfs4 -o rw,nfsvers=4.1,sync,lookupcache=positive,hard,timeo=600,wsize=1048576,rsize=1048576,namlen=255 10.10.10.1:/data/1 /data/nfs
```

where:

- `nfsvers=4.1` specifies the NFS version. We recommend that you use NFS version 4.1 or later, because backup depends on the native file lock of NFS version 4. NFS version 4.0 has a bug: a file that has been renamed may still be read under its old name.
- `sync` enables synchronous writes, which ensures that data is promptly written to the server, thereby ensuring data consistency.
- `lookupcache=positive` prevents the system from incorrectly reporting that an accessed directory or file does not exist during concurrent access, thereby ensuring data consistency.
- `hard` makes the system block read and write requests from applications when NFS is unavailable, thereby ensuring data consistency. Do not specify the `soft` option, because it may cause data errors.
- `timeo` specifies the time to wait before a retry. Unit: 0.1 seconds. We recommend that you do not set it to a large value. The recommended value is `600`.
- `wsize` specifies the size of write blocks. The recommended value is `1048576`.
- `rsize` specifies the size of read blocks. The recommended value is `1048576`.
- `namlen` specifies the maximum file name length. The recommended value is `255`.
- `10.10.10.1` specifies the IP address of the NFS server.
Notice
- When you mount NFS, make sure that the following parameters are set for the backup mount environment: `nfsvers=4.1`, `sync`, `lookupcache=positive`, and `hard`.
- In a Docker container, NFS must be mounted on the destination host and then mapped into the Docker container. If NFS is mounted directly inside the Docker container, the NFS client may hang.
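The required options in the notice above can also be verified programmatically. The following sketch checks a mount option string, such as the output of `findmnt -no OPTIONS /data/nfs` (the mount point is an assumed example):

```shell
# Sketch: verify that a mount's option string contains the options required
# for backup (nfsvers 4.1 or later, sync, lookupcache=positive, hard).
check_mount_opts() {
  local opts="$1" opt
  for opt in sync lookupcache=positive hard; do
    case ",$opts," in
      *",$opt,"*) ;;
      *) echo "missing mount option: $opt" >&2; return 1 ;;
    esac
  done
  case ",$opts," in
    *",nfsvers=4,"*|*",nfsvers=4.0,"*)
      echo "nfsvers must be 4.1 or later" >&2; return 1 ;;
    *",nfsvers=4."*) ;;
    *) echo "missing nfsvers=4.1 or later" >&2; return 1 ;;
  esac
}

check_mount_opts "rw,nfsvers=4.1,sync,lookupcache=positive,hard,timeo=600" \
  && echo "mount options OK"
```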
Run the following command to verify the performance of NFS:
```shell
fio -filename=/data/nfs/fio_test -direct=1 -rw=randwrite -bs=2048K -size=100G -runtime=300 -group_reporting -name=mytest -ioengine=libaio -numjobs=1 -iodepth=64 -iodepth_batch=8 -iodepth_low=8 -iodepth_batch_complete=8
```

A sample result is as follows:

```
Run status group 0 (all jobs):
  WRITE: io=322240MB, aggrb=1074.2MB/s, minb=1074.2MB/s, maxb=1074.2MB/s, mint=300006msec, maxt=300006msec
```
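If this performance check is automated, the aggregate write bandwidth can be extracted from the fio summary line and compared against a site-specific threshold. A minimal sketch, using the sample output shown above as input:

```shell
# Sketch: extract the aggregate write bandwidth (aggrb, in MB/s) from a fio
# summary line so a deployment script can compare it against a threshold.
extract_aggrb() {
  echo "$1" | sed -n 's/.*aggrb=\([0-9.]*\)MB\/s.*/\1/p'
}

summary='WRITE: io=322240MB, aggrb=1074.2MB/s, minb=1074.2MB/s, maxb=1074.2MB/s, mint=300006msec, maxt=300006msec'
aggrb=$(extract_aggrb "$summary")
echo "aggregate write bandwidth: ${aggrb} MB/s"
```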