This topic describes how to deploy a three-replica OceanBase cluster using the CLI.
Prerequisites
Before installing OceanBase Database, make sure that the following conditions are met:
The OBServer node has been configured. For more information, see Server configuration, (Optional) Configure a clock source, and Initialize the OBServer node using oat-cli.
You have obtained the RPM package of OceanBase Database. For more information, see Prepare installation packages.
Procedure
Step 1: Install the RPM package
Install the OceanBase Database RPM package.
In this example, `$rpm_dir` indicates the directory where the RPM package is stored, and `$rpm_name` indicates the name of the RPM package.

```shell
[root@xxx /]# cd $rpm_dir
[root@xxx $rpm_dir]# rpm -ivh $rpm_name
```

Note

By default, OceanBase Database is installed in the `/home/admin/oceanbase` directory.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:oceanbase-4.2.0.0-100000052023073################################# [100%]
```

(Optional) Install OceanBase Client.
OceanBase Client (OBClient) is a dedicated command-line client tool for OceanBase Database. You can use OBClient to connect to both MySQL and Oracle tenants of OceanBase Database. If you connect only to MySQL tenants, you can also use the MySQL client (mysql) to connect to OceanBase Database.
Notice
OBClient V2.2.1 and earlier depends on `libobclient`, so you must install `libobclient` first. Contact the technical support team to obtain the RPM packages of OBClient and libobclient.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh obclient-2.2.1-20221122151945.el7.alios7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:obclient-2.2.1-20221122151945.el7################################# [100%]
## Check whether OceanBase Client is installed ##
[root@xxx /home/admin/rpm]# which obclient
/usr/bin/obclient
```
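As mentioned above, MySQL tenants can also be reached with the stock MySQL client. A minimal sketch, assuming a MySQL-mode tenant named `mysql_tenant` already exists (the host, port, and tenant name are illustrative placeholders):

```shell
# Connect to a MySQL tenant with the mysql client; the user is written
# as user@tenant. Replace the host and tenant name with your own values.
mysql -h10.10.10.1 -P2881 -uroot@mysql_tenant -p
```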
Step 2: Configure the directories
Clear the existing OceanBase directory, if any. This step is not required for the first deployment.
If you want to clear a previous OceanBase environment, or if a failed installation or deployment has left behind stale files that would affect the next installation, clear the old OceanBase directory. In the following commands, the `$cluster_name` variable is the name of the cluster.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/$cluster_name
-bash-4.2$ rm -rf /data/log1/$cluster_name
-bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/obdemo
-bash-4.2$ rm -rf /data/log1/obdemo
-bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```
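Because `kill -9` returns immediately even if the process lingers briefly, it can help to confirm that the observer process has actually exited before wiping the directories. A cautious variant of the cleanup above (a sketch, assuming the `obdemo` cluster and the default paths):

```shell
# Stop the observer and verify it is gone before removing any files.
if pidof observer >/dev/null; then
  kill -9 $(pidof observer)
  sleep 3
fi
if pidof observer >/dev/null; then
  echo "observer still running; aborting cleanup" >&2
else
  rm -rf /data/1/obdemo /data/log1/obdemo /home/admin/oceanbase/store/obdemo
fi
```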
Initialize the OceanBase directory.

We recommend that you use an independent disk to store OceanBase data and create soft links from the software home directory to that disk. In the following commands, the `$cluster_name` variable is the name of the cluster.

Note

Starting from OceanBase Database V4.3.0, you can separate the `slog` disk from the data disk. In other words, `slog` and data files do not need to be on the same disk. You can configure the `slog` and `clog` files to share an SSD. For more information about the installation directories of OceanBase Database, see OBServer node installation directory structure.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
```

Note

The `obdemo` directory is named after the cluster and stores parameters used when the process starts.

The check result is as follows:

```shell
-bash-4.2$ cd /home/admin/oceanbase
-bash-4.2$ tree store/
store/
`-- obdemo
    |-- clog -> /data/log1/obdemo/clog
    |-- etc2 -> /data/log1/obdemo/etc2
    |-- etc3 -> /data/1/obdemo/etc3
    |-- slog -> /data/1/obdemo/slog
    `-- sstable -> /data/1/obdemo/sstable

6 directories, 0 files
```
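Beyond `tree`, a quick loop can confirm that every soft link resolves to a real directory (a sketch, assuming the `obdemo` layout created above):

```shell
# Each link under the store directory should resolve to an existing path.
for t in etc3 sstable slog clog etc2; do
  target=$(readlink -f /home/admin/oceanbase/store/obdemo/$t)
  [ -d "$target" ] && echo "$t -> $target OK" || echo "$t BROKEN" >&2
done
```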
Step 3: Initialize the OceanBase cluster
Note
- The example IP addresses are anonymized placeholders, not real installation addresses. You must specify your servers' real IP addresses when you deploy OceanBase Database.
- OceanBase Database allows you to start the observer process on a server that is listening on a specified IPv4 or IPv6 address. The following example shows how to start the observer process on an IPv4 address. If you want to start the observer process on an IPv6 address, see Deploy an OceanBase cluster with one replica using the CLI.
Start the observer process on each node.
Start the observer process on each node as the `admin` user.

Note

In a three-replica deployment, the startup parameters differ from node to node. When you start the observer process, you must specify the IP addresses of the three (or more) servers that host RootService. You do not need to specify all servers when you create the cluster; you can add more servers after the cluster is created.

```shell
cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
```

The parameters are described as follows:
- `-I` / `-i`: `-I` specifies the IP address on which the node listens. In a multi-server deployment, 127.0.0.1 cannot be specified as the target IP address. We recommend that you start the node with an explicit IP address (for example, `-I 10.10.10.1`). `-i` specifies the name of the network interface card (NIC); you can view the available NIC names by using the `ifconfig` command. Note: You can start the node by specifying both the IP address and the NIC name (`-I 10.10.10.1 -i eth0`), but we recommend that you do not use this method.
- `-p`: the SQL service port, usually set to 2881.
- `-P`: the RPC port, usually set to 2882.
- `-n`: the cluster name. Custom cluster names must be unique.
- `-z`: the zone to which the observer process belongs.
- `-d`: the home directory of the cluster, created when you initialize the cluster. Except for the cluster name `$cluster_name`, the other path components cannot be changed.
- `-c`: the cluster ID, a group of digits that you can customize, as long as it is unique among clusters.
- `-l`: the log level.
- `-r`: the RootService information, in the format `$ip:2882:2881`, with semicolons (;) separating the RootService entries of multiple servers.
- `-o`: the startup parameters (configuration items) of the cluster. The available configuration items are listed as follows; you do not need to specify all of them:
  - `cpu_count`: the total number of CPU cores in the system.
  - `system_memory`: the memory reserved for tenant ID 500, that is, the memory internally reserved by OceanBase Database. If system memory is limited, you can set this parameter to a smaller value; note that the system may then run out of memory during performance tests.
  - `memory_limit`: the total size of available memory.
  - `datafile_size`: the size of the disk space occupied by data files, that is, the size of the `sstable` file in OceanBase Database. We recommend that you assess the available space in `/data/1/` and set this parameter to at least 100G.
  - `datafile_disk_percentage`: the percentage of total disk space occupied by data files.
  - `datafile_next`: the step size by which disk data files are automatically expanded.
  - `datafile_maxsize`: the maximum size to which disk data files can be automatically expanded.
  - `config_additional_dir`: the directories that store redundant copies of the configuration files.
  - `log_disk_size`: the size of the redo log disk.
  - `log_disk_percentage`: the percentage of total disk space occupied by the redo log.
  - `syslog_level`: the system log level.
  - `syslog_io_bandwidth_limit`: the upper limit on the disk I/O bandwidth available to system logs. System logs that exceed this limit are discarded.
  - `max_syslog_file_count`: the maximum number of log files that can be retained before old log files are recycled.
  - `enable_syslog_recycle`: specifies whether to recycle log files generated before the cluster starts. It works together with the `max_syslog_file_count` parameter to determine whether old log files are recycled.

The `datafile_size`, `datafile_disk_percentage`, `datafile_next`, and `datafile_maxsize` parameters work together to enable automatic expansion of disk data files. For more information, see Enable dynamic expansion for disk data files. For more information about cluster configuration, see Overview of configuration items.
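Because the three nodes differ only in their listening IP address and zone name, it can be convenient to build the startup command from variables. A minimal sketch (not official tooling; the values mirror the example below and must be adjusted per node):

```shell
#!/bin/bash
# Build and run the observer startup command for one node.
ip=10.10.10.1          # this node's IP address; change per node
zone=zone1             # this node's zone; change per node
cluster_name=obdemo
cluster_id=10001
rs_list='10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881'
opts="system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/${cluster_name}/etc3;/data/log1/${cluster_name}/etc2"

cd /home/admin/oceanbase && bin/observer -I "$ip" -P 2882 -p 2881 -z "$zone" \
  -d "/home/admin/oceanbase/store/${cluster_name}" \
  -r "$rs_list" -c "$cluster_id" -n "$cluster_name" -o "$opts"
```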
Here is an example:

zone1:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

zone2:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.2 -P 2882 -p 2881 -z zone2 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

zone3:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.3 -P 2882 -p 2881 -z zone3 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

Run the following commands to check whether the observer process has started successfully:
- Use the `netstat -ntlp` command. If the process listens on ports 2881 and 2882, it has started successfully.
- Use the `ps -ef|grep observer` command to view the observer process information.
Here is an example:
```shell
-bash-4.2$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:2881     0.0.0.0:*          LISTEN   11114/observer
tcp        0      0 0.0.0.0:2882     0.0.0.0:*          LISTEN   11114/observer
...

-bash-4.2$ ps -ef|grep observer
admin    11114      0 40 16:18 ?     00:00:17 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
```
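Before bootstrapping, you may also want to confirm that all three nodes are reachable on both ports from a single host. A quick sketch using bash's built-in `/dev/tcp` pseudo-device (the IP addresses are the example placeholders above):

```shell
for host in 10.10.10.1 10.10.10.2 10.10.10.3; do
  for port in 2881 2882; do
    # bash opens a TCP connection when redirecting to /dev/tcp/host/port
    (echo > /dev/tcp/$host/$port) 2>/dev/null \
      && echo "$host:$port open" || echo "$host:$port CLOSED"
  done
done
```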
Perform the bootstrap operation of the cluster.

Use OBClient to connect to any server in the cluster. The password is empty.

```shell
[root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881 -p******
obclient> SET SESSION ob_query_timeout=1000000000;
Query OK, 0 rows affected

obclient> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882',ZONE 'zone2' SERVER '10.10.10.2:2882',ZONE 'zone3' SERVER '10.10.10.3:2882';
Query OK, 0 rows affected
```

Note
If an error occurs during this step, check the following items: whether the observer startup parameters are correct; whether the permissions on the observer-related directories are correct; whether the log directory has sufficient space (if it shares a directory with the data directory, the data files occupy most of that space); whether the node clocks are synchronized; and whether the node has sufficient memory. If the problem persists, clear the OceanBase directory and deploy OceanBase Database again.
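A few quick commands can cover most of these checks (a sketch; whether `chronyc` or `ntpstat` is available depends on which clock-source tool you configured during the prerequisites):

```shell
df -h /data/1 /data/log1 /home/admin/oceanbase   # disk space on the data and log disks
free -g                                          # available memory
ls -ld /home/admin/oceanbase/store/obdemo        # directory ownership and permissions
chronyc tracking 2>/dev/null || ntpstat          # clock synchronization status
```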
Verify that the cluster has been initialized.
After the bootstrap initialization of the cluster is complete, execute the `SHOW DATABASES;` statement to verify that the initialization was successful. If the `oceanbase` database appears in the database list, the initialization is successful.
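For reference, the output looks roughly like the following (a sketch; the exact database list varies by version):

```shell
obclient> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| oceanbase          |
| test               |
+--------------------+
```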
Change the password.

The password of the `root` user in the `sys` tenant is empty by default. Change the password after the initialization is successful.

```sql
ALTER USER root IDENTIFIED BY '******';
```
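You can then verify the new password by reconnecting (a sketch, assuming OBClient accepts the MySQL-style `-e` flag; substitute the password you actually set for `******`):

```shell
[root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881 -p'******' -e "SELECT version();"
```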
Next steps
After the cluster is created, you can create user tenants as needed.
For more information about how to create a user tenant, see Create a tenant.
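As a preview of that procedure, creating a user tenant generally involves a resource unit, a resource pool, and the tenant itself. A minimal sketch (names and sizes are illustrative; see Create a tenant for the authoritative steps):

```sql
-- Define the per-node resource specification.
CREATE RESOURCE UNIT unit1 MAX_CPU 2, MEMORY_SIZE '4G';
-- Group units into a pool that spans all three zones.
CREATE RESOURCE POOL pool1 UNIT 'unit1', UNIT_NUM 1, ZONE_LIST ('zone1','zone2','zone3');
-- Create a MySQL-mode tenant on the pool and allow client connections.
CREATE TENANT mysql_tenant RESOURCE_POOL_LIST = ('pool1')
    SET ob_tcp_invited_nodes = '%', ob_compatibility_mode = 'mysql';
```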