This topic describes how to deploy a three-replica OceanBase cluster using the CLI.
Prerequisites
Before installing OceanBase Database, make sure that the following conditions are met:
The OBServer node has been configured. For more information, see Server configuration, (Optional) Configure a clock source, and Initialize the OBServer node using oat-cli.
You have obtained the RPM package of OceanBase Database. For more information, see Prepare installation packages.
Procedure
Step 1: Install the RPM package
Install the OceanBase Database RPM package.
In this step, `$rpm_dir` indicates the directory where the RPM package is stored, and `$rpm_name` indicates the name of the RPM package.

```shell
[root@xxx /]# cd $rpm_dir
[root@xxx $rpm_dir]# rpm -ivh $rpm_name
```

Note

By default, OceanBase Database is installed in the `/home/admin/oceanbase` directory.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:oceanbase-4.2.0.0-100000052023073################################# [100%]
```

(Optional) Install OceanBase Client.
OceanBase Client (OBClient) is a dedicated command-line client for OceanBase Database. With OBClient, you can connect to both MySQL and Oracle tenants of OceanBase Database. If you only need to connect to MySQL tenants, you can also use the MySQL client (mysql) to connect to OceanBase Database; a connection sketch follows the installation example below.
Notice

obclient V2.2.1 and earlier depend on libobclient, so you need to install libobclient first. Contact the technical support team to obtain the OBClient RPM package and the libobclient RPM package.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh obclient-2.2.1-20221122151945.el7.alios7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:obclient-2.2.1-20221122151945.el7################################# [100%]

## Check whether OceanBase Client is installed ##
[root@xxx /home/admin/rpm]# which obclient
/usr/bin/obclient
```
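If you take the MySQL client route mentioned above for MySQL tenants, the connection would look roughly like the sketch below. This is a hedged illustration: the tenant name `root@sys` and port 2881 match the deployment described in this topic, but your values may differ, and the connection only works after the cluster is bootstrapped in Step 3.

```shell
# Connect to a MySQL tenant of OceanBase with the stock mysql client
# (tenant name and port are illustrative; adjust to your deployment)
mysql -h127.0.0.1 -P2881 -uroot@sys -p
```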
Step 2: Configure directories
Clear the OceanBase directory (required only if this is not the first deployment).
If you want to clear the previous OceanBase environment, or if an issue during OceanBase installation and deployment left the environment messy or containing files that would affect the next installation, you can clear the old OceanBase directory. In this case, you must specify the cluster name `$cluster_name`.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/$cluster_name
-bash-4.2$ rm -rf /data/log1/$cluster_name
-bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/obdemo
-bash-4.2$ rm -rf /data/log1/obdemo
-bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Initialize the OceanBase directory.
We recommend that you place OceanBase Database's data directory on an independent disk and link it to the software home directory through soft links. In this case, you must specify the cluster name `$cluster_name`.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
```

Note
The `obdemo` directory is named after the cluster and can be customized. It is used by the startup process.

Check the results:

```shell
-bash-4.2$ cd /home/admin/oceanbase
-bash-4.2$ tree store/
store/
`-- obdemo
    |-- clog -> /data/log1/obdemo/clog
    |-- etc2 -> /data/log1/obdemo/etc2
    |-- etc3 -> /data/1/obdemo/etc3
    |-- slog -> /data/1/obdemo/slog
    `-- sstable -> /data/1/obdemo/sstable

6 directories, 0 files
```
Step 3: Initialize the OceanBase cluster
Note
The IP addresses in the examples are desensitized placeholders, not installation requirements. Specify your own server IP addresses during deployment.
Start the observer process on each node.
Start the observer process as the `admin` user on each node.

Notice

In a three-replica setup, the startup parameters differ from node to node. When you start the observer process, specify the IP addresses of the three (or more) servers where RootService resides. You do not need to specify all servers when you create the cluster; after the cluster is created, you can add more servers.

```shell
cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
```

The parameters are described as follows:
| Parameter | Description |
|-----------|-------------|
| -I / -i | `-I` specifies the IP address of the node to be started. In a multi-server deployment, 127.0.0.1 cannot be specified as the target IP address; we recommend that you start the node with an explicit IP address, for example `-I 10.10.10.1`. `-i` specifies the name of the network interface card; you can view available NIC names with the `ifconfig` command. Note: you can specify both the IP address and the NIC name (`-I 10.10.10.1 -i eth0`), but we recommend that you do not specify both parameters. |
| -p | The SQL service port, generally set to 2881. |
| -P | The RPC port, generally set to 2882. |
| -n | The cluster name. Custom cluster names must be unique. |
| -z | The zone to which the observer process belongs. |
| -d | The directory where the cluster data resides, created during cluster initialization. Except for the cluster name `$cluster_name`, the other parts must not be changed. |
| -c | The cluster ID, a group of numbers. Custom cluster IDs must be unique. |
| -l | The log level. |
| -r | The RootService list, in the format `$ip:2882:2881`; entries are separated by semicolons. |
| -o | The startup parameters (configuration items) of the cluster. Multiple configuration items are separated by commas. You can specify appropriate startup parameters and values based on your business requirements to optimize the performance and resource utilization of the cluster. For more information, see the common cluster startup configuration items. |

Here is an example:
zone1:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

zone2:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.2 -P 2882 -p 2881 -z zone2 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

zone3:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.3 -P 2882 -p 2881 -z zone3 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

You can run the following commands to check whether the observer process has been started:
- Use the `netstat -ntlp` command. If ports 2881 and 2882 are being listened on, the process has started.
- Use the `ps -ef | grep observer` command to view the observer process information.
Here is an example:
```shell
-bash-4.2$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:2881       0.0.0.0:*          LISTEN   11114/observer
tcp        0      0 0.0.0.0:2882       0.0.0.0:*          LISTEN   11114/observer
...
-bash-4.2$ ps -ef|grep observer
admin    11114     0 40 16:18 ?        00:00:17 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
```

Perform the bootstrap operation to initialize the cluster.
Use OBClient to connect to any server. At this point the password is empty.

```shell
[root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881
obclient> SET SESSION ob_query_timeout=1000000000;
Query OK, 0 rows affected

obclient> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882',ZONE 'zone2' SERVER '10.10.10.2:2882',ZONE 'zone3' SERVER '10.10.10.3:2882';
Query OK, 0 rows affected
```

Notice
If an error occurs during this step, check the following (see the sketch after this notice): whether the startup parameters of the observer process are correctly specified; whether the directory where the observer process resides has the correct permissions; whether the log directory has sufficient space (if it shares a directory with the data directory, check that the data directory has sufficient space); whether the node clocks are synchronized; and whether the node has sufficient memory. If the problem persists, clear the OceanBase directory and deploy the software again.
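A minimal sketch of such checks is shown below. The paths follow the `obdemo` example in this topic, and the clock-sync command assumes chrony (use `ntpstat` if your nodes run ntpd); adjust both to your environment.

```shell
# Check free space on the data and log disks
df -h /data/1 /data/log1

# Check available memory (the example sets system_memory=30G)
free -g

# Check clock synchronization status (assumes chrony)
chronyc tracking

# Check ownership and permissions of the OceanBase directories
ls -ld /home/admin/oceanbase/store/obdemo /data/1/obdemo /data/log1/obdemo

# Inspect recent observer log entries for errors
tail -n 50 /home/admin/oceanbase/log/observer.log
```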
Verify whether the cluster has been initialized.
After the bootstrap initialization is completed, execute the `SHOW DATABASES;` command to verify whether the initialization was successful. If the query result shows a database named `oceanbase` in the database list, the initialization was successful; a hedged example of the output follows.
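The output might look something like the abridged sketch below; the exact set of databases varies by OceanBase Database version, and what matters is that `oceanbase` appears in the list.

```shell
obclient> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| oceanbase          |
| test               |
+--------------------+
4 rows in set
```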
Change the password.

By default, the `root` user of the `sys` tenant has an empty password. We recommend that you change the password after the initialization succeeds.

```sql
ALTER USER root IDENTIFIED BY '******';
```
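Once the password is set, subsequent connections must supply it. A minimal sketch of reconnecting to verify the change, assuming the same local node and port as above (replace `******` with your actual password):

```shell
# Reconnect as root of the sys tenant with the new password
[root@xxx admin]# obclient -h127.0.0.1 -uroot@sys -P2881 -p'******'
obclient> SELECT version();
```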
Next steps
After the cluster is created, you can create user tenants as needed.
For more information about how to create a user tenant using the CLI, see Create a tenant.
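As a rough illustration only (the unit names, sizes, and zone list below are assumptions; see Create a tenant for the authoritative steps), creating a user tenant generally involves defining a resource unit, a resource pool, and then the tenant itself:

```sql
-- Hypothetical sizing: adjust CPU and memory to your servers
CREATE RESOURCE UNIT unit1 MAX_CPU 4, MEMORY_SIZE '8G';

-- One unit per zone across the three zones deployed in this topic
CREATE RESOURCE POOL pool1 UNIT 'unit1', UNIT_NUM 1, ZONE_LIST ('zone1','zone2','zone3');

-- Create the tenant and allow connections from any host (tighten as needed)
CREATE TENANT tenant1 RESOURCE_POOL_LIST=('pool1') SET ob_tcp_invited_nodes='%';
```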