This topic describes how to deploy a three-replica OceanBase cluster using the CLI.
Prerequisites
Before installing OceanBase Database, make sure that the following conditions are met:
The OBServer node has been configured. For more information, see Server configuration, (Optional) Configure a clock source, and Initialize the OBServer node using oat-cli.
You have obtained the RPM package of OceanBase Database. For more information, see Prepare installation packages.
Procedure
Step 1: Install the RPM package
Install the OceanBase Database RPM package.
In this step, `$rpm_dir` indicates the directory where the RPM package is stored, and `$rpm_name` indicates the name of the RPM package.

```shell
[root@xxx /]# cd $rpm_dir
[root@xxx $rpm_dir]# rpm -ivh $rpm_name
```

Note

By default, OceanBase Database is installed in the `/home/admin/oceanbase` directory.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:oceanbase-4.2.0.0-100000052023073################################# [100%]
```

(Optional) Install OceanBase Client.
OceanBase Client (OBClient) is a dedicated command-line client for OceanBase Database. With OBClient, you can connect to both MySQL and Oracle tenants in OceanBase Database. If you only need to connect to a MySQL tenant, you can also use the mysql client to connect to OceanBase Database.
Notice

OBClient V2.2.1 and earlier depends on `libobclient`, so you must install `libobclient` first. Contact OceanBase Technical Support to obtain the OBClient and libobclient RPM packages.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh obclient-2.2.1-20221122151945.el7.alios7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:obclient-2.2.1-20221122151945.el7################################# [100%]
## Check whether OceanBase Client is installed ##
[root@xxx /home/admin/rpm]# which obclient
/usr/bin/obclient
```
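A quick way to confirm both installations is to check that the expected binaries resolve on PATH. The following sketch is a small helper, not part of the official procedure; it assumes the default install prefix from this guide, where the `observer` binary lands under `/home/admin/oceanbase/bin`:

```shell
#!/bin/bash
# Report where each expected binary resolves, or flag it as missing.
# The PATH extension assumes the default install prefix from this guide.
export PATH="/home/admin/oceanbase/bin:$PATH"

check_bin() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: $(command -v "$1")"   # found: print its resolved path
  else
    echo "$1: NOT FOUND"            # missing: flag it, keep checking others
  fi
}

check_bin observer
check_bin obclient
```

If either binary reports `NOT FOUND`, re-check the RPM installation steps above before continuing.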
Step 2: Configure directories
Clear the OceanBase directory (not needed for the first deployment).

If you want to remove a previous OceanBase environment, or files left over from an earlier installation would interfere with the next one, you can clear the old OceanBase directory. In the following commands, `$cluster_name` is the name of the cluster.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/$cluster_name
-bash-4.2$ rm -rf /data/log1/$cluster_name
-bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/obdemo
-bash-4.2$ rm -rf /data/log1/obdemo
-bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Initialize the OceanBase directory.
We recommend that you place the data directory of OceanBase Database on an independent disk and then create soft links from the software home directory to that disk. In the following commands, `$cluster_name` is the name of the cluster.

Note

Starting from OceanBase Database V4.3.0, you can place the `slog` disk separately from the data disk; that is, `slog` and data files no longer need to be on the same disk. You can also configure `slog` and `clog` to share the same SSD. For more information about the installation directories of OceanBase Database, see OBServer node installation directory structure.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
```

Note

The `obdemo` directory is named after the cluster and can be customized. It is used by the startup process.

The check result is as follows:

```shell
-bash-4.2$ cd /home/admin/oceanbase
-bash-4.2$ tree store/
store/
`-- obdemo
    |-- clog -> /data/log1/obdemo/clog
    |-- etc2 -> /data/log1/obdemo/etc2
    |-- etc3 -> /data/1/obdemo/etc3
    |-- slog -> /data/1/obdemo/slog
    `-- sstable -> /data/1/obdemo/sstable

6 directories, 0 files
```
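The directory-and-symlink steps above can be rehearsed safely before touching the real disks. The following sketch builds the same layout under a temporary directory standing in for `/` (the real deployment uses `/data/1`, `/data/log1`, and `/home/admin/oceanbase` directly) and verifies every symlink, so you can confirm the loop logic without root privileges:

```shell
#!/bin/bash
# Rehearse the OceanBase directory layout in a scratch root.
# Assumption: paths mirror the real ones under a mktemp dir instead of /.
set -e
CLUSTER=obdemo
BASE=$(mktemp -d)
DATA=$BASE/data/1/$CLUSTER
LOG=$BASE/data/log1/$CLUSTER
STORE=$BASE/home/admin/oceanbase/store/$CLUSTER

mkdir -p "$DATA"/{etc3,sstable,slog} "$LOG"/{clog,etc2} "$STORE"

# Same two loops as in the real procedure: link each subdirectory
# into the store directory that the observer uses at startup.
for t in etc3 sstable slog; do ln -s "$DATA/$t" "$STORE/$t"; done
for t in clog etc2;         do ln -s "$LOG/$t"  "$STORE/$t"; done

# Verify: every entry must be a symlink that resolves to a directory.
for t in etc3 sstable slog clog etc2; do
  [ -L "$STORE/$t" ] && [ -d "$STORE/$t" ] || { echo "bad link: $t"; exit 1; }
done
echo "layout OK"
rm -rf "$BASE"
```

If the rehearsal prints `layout OK`, the same loops will produce a consistent layout on the real disks.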
Step 3: Initialize the OceanBase cluster
Note
The IP addresses in the examples are placeholders and do not reflect actual installation requirements. You must specify the real IP addresses of your servers when you deploy OceanBase Database.
Start the observer process on each node.

Start the observer process on each node as the `admin` user.

Notice

In a three-replica setup, the startup parameters differ for each node. When you start the observer process, specify the IP addresses of the three (or more) servers where RootService resides. You do not need to specify all servers when you create the cluster; after the cluster is created, you can add more servers.

```shell
cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
```

The parameters are described as follows:
| Parameter | Description |
| --- | --- |
| `-I` \| `-i` | `-I`: the IP address of the node to start. In a multi-server deployment, `127.0.0.1` cannot be used as the target IP address. We recommend that you start the node by IP address (`-I 10.10.10.1`). `-i`: the name of the network interface card (NIC), which you can look up with the `ifconfig` command. Note: you can specify both the IP address and the NIC name (`-I 10.10.10.1 -i eth0`), but we recommend that you do not. |
| `-p` | The SQL service port, generally set to `2881`. |
| `-P` | The RPC port, generally set to `2882`. |
| `-n` | The cluster name. Custom cluster names must be unique. |
| `-z` | The zone to which the observer process belongs. |
| `-d` | The home directory of the cluster, created during initialization. Except for the cluster name `$cluster_name`, the other parts of the path must not be changed. |
| `-c` | The cluster ID, a group of numbers. Custom cluster IDs must be unique. |
| `-l` | The log level. |
| `-r` | The RootService list, in the format `$ip:2882:2881`; entries are separated by semicolons. |
| `-o` | The startup options of the cluster. This parameter is optional. You can specify multiple options, separated by commas. Choose appropriate options and values based on your business needs to optimize the performance and resource efficiency of the cluster. |

Commonly used startup options:

- `cpu_count`: the total number of CPU cores.
- `system_memory`: the memory reserved for tenant ID `500`, that is, the memory OceanBase Database reserves internally. If system memory is limited, you can decrease this value; note that the system may then run out of memory during performance tests.
- `memory_limit`: the total size of available memory.
- `datafile_size`: the disk space occupied by data files, that is, the size of the `sstable` file in OceanBase Database. Assess this value based on the available space in `/data/1/`. We recommend a value of at least `100G`.
- `datafile_disk_percentage`: the percentage of total disk space occupied by data files.
- `datafile_next`: the step size for automatic expansion of data files on disk.
- `datafile_maxsize`: the maximum size to which data files on disk can automatically expand.
- `config_additional_dir`: additional directories that store redundant copies of local configuration files.
- `log_disk_size`: the size of the redo log disk.
- `log_disk_percentage`: the percentage of total disk space occupied by redo logs.
- `syslog_level`: the system log level.
- `syslog_io_bandwidth_limit`: the upper limit on disk I/O bandwidth available for system logs. System logs exceeding this limit are discarded.
- `max_syslog_file_count`: the maximum number of log files kept before old log files are recycled.
- `enable_syslog_recycle`: the switch for log file recycling. It works together with `max_syslog_file_count` to determine whether old log files are recycled.

The `datafile_size`, `datafile_disk_percentage`, `datafile_next`, and `datafile_maxsize` parameters work together to enable automatic expansion of data files on disk. For more information, see Enable dynamic expansion for disk data files. For more information about cluster configuration, see Overview of system configuration items.

Here is an example:
zone1:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

zone2:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.2 -P 2882 -p 2881 -z zone2 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

zone3:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.3 -P 2882 -p 2881 -z zone3 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

Run the following command to check whether the observer process has started successfully:
- Use the `netstat -ntlp` command. If the process listens on ports `2881` and `2882`, the process has started successfully.
- Use the `ps -ef|grep observer` command to view the observer process information.

Here is an example:

```shell
-bash-4.2$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      11114/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      11114/observer
...
-bash-4.2$ ps -ef|grep observer
admin     11114      0 40 16:18 ?        00:00:17 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
```

Perform the bootstrap operation to initialize the cluster.
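Before you bootstrap, you may want to confirm programmatically that a node is listening on both ports rather than eyeballing `netstat` output. The sketch below extracts that check into a function; a sample of `netstat -ntlp`-style output is inlined for illustration so the logic can be exercised anywhere, and the commented line shows how you could feed it live output on a real node:

```shell
#!/bin/bash
# Check that an observer is listening on both 2881 (SQL) and 2882 (RPC),
# given `netstat -ntlp`-style text. The sample output is inlined.
check_observer_ports() {
  local out=$1 port
  for port in 2881 2882; do
    # Each port must appear as a local address in LISTEN state.
    grep -q ":$port .*LISTEN" <<<"$out" || return 1
  done
}

SAMPLE='tcp 0 0 0.0.0.0:2881 0.0.0.0:* LISTEN 11114/observer
tcp 0 0 0.0.0.0:2882 0.0.0.0:* LISTEN 11114/observer'

# On a real node you would use: check_observer_ports "$(netstat -ntlp 2>/dev/null)"
if check_observer_ports "$SAMPLE"; then
  echo "observer listening on 2881 and 2882"
else
  echo "observer not ready"
fi
```

Running this on each node before the bootstrap can catch a node that failed to start.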
Use OBClient to connect to any server. The password is empty by default, so no `-p` option is needed.

```shell
[root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881
obclient> SET SESSION ob_query_timeout=1000000000;
Query OK, 0 rows affected
obclient> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882',ZONE 'zone2' SERVER '10.10.10.2:2882',ZONE 'zone3' SERVER '10.10.10.3:2882';
Query OK, 0 rows affected
```

Notice

If this step fails, check the following: whether the observer startup parameters are correct; whether the permissions on the observer-related directories are correct; whether the log directory has sufficient space (if the data directory and the log directory are under the same large directory, the data files can consume the log directory's space); whether the node clocks are synchronized; and whether the node has sufficient memory. If any of these checks fail, clear the OceanBase directory and deploy OceanBase Database again.
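Some of the failure causes listed above can be checked up front. The following sketch reports total memory and the free space under a given log directory; the 32G floor and the default `/tmp` path are illustrative assumptions for the sketch, not official requirements, so substitute your own threshold and directory:

```shell
#!/bin/bash
# Pre-flight sketch for two of the checks above (Linux-specific: it reads
# /proc/meminfo). The 32G threshold and /tmp default are assumptions.
LOG_DIR=${1:-/tmp}

# Total memory in GB (MemTotal is reported in kB).
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
# Free space in GB under the log directory (df -Pk column 4 is available kB).
free_gb=$(df -Pk "$LOG_DIR" | awk 'NR==2 {printf "%d", $4/1024/1024}')

echo "total memory: ${mem_gb}G"
echo "free space under $LOG_DIR: ${free_gb}G"
[ "$mem_gb" -ge 32 ] || echo "warning: below the assumed 32G memory floor"
```

Clock synchronization and directory permissions still need separate checks, for example with `chronyc tracking` and `ls -ld` on the store directories.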
Verify that the cluster has been initialized.

After the cluster is initialized, execute the `SHOW DATABASES;` command to verify it. If the `oceanbase` database appears in the database list, the cluster has been initialized successfully.

Change the password.

The password of the `root` user in the `sys` tenant is empty by default. Change the password after initialization is complete.

```sql
ALTER USER root IDENTIFIED BY '******';
```
Next steps
After the cluster is created, you can create user tenants as needed.
For more information about how to create a user tenant using the CLI, see Create a tenant.