This topic describes how to deploy a three-replica OceanBase cluster using the CLI.
Prerequisites
Before installing OceanBase Database, make sure that the following conditions are met:
- The OBServer node has been configured. For more information, see Server configuration, (Optional) Configure a clock source, and Initialize the OBServer node using oat-cli.
- You have obtained the RPM package of OceanBase Database. For more information, see Prepare installation packages.
Procedure
Step 1: Install the RPM package
Install the OceanBase Database RPM package.
In this example, `$rpm_dir` indicates the directory where the RPM package is stored, and `$rpm_name` indicates the name of the RPM package.

```shell
[root@xxx /]# cd $rpm_dir
[root@xxx $rpm_dir]# rpm -ivh $rpm_name
```

Note: By default, OceanBase Database is installed in the `/home/admin/oceanbase` directory.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:oceanbase-4.2.0.0-100000052023073################################# [100%]
```

(Optional) Install OceanBase Client.
OceanBase Client (OBClient) is a command-line client tool dedicated to OceanBase Database. With OBClient, you can connect to both MySQL and Oracle tenants in OceanBase Database. If you only need to connect to MySQL tenants, you can also use the MySQL client (mysql) to connect to OceanBase Database.
Notice: `obclient` V2.2.1 and earlier depends on `libobclient`, so you must install `libobclient` first. Contact technical support to obtain the OBClient RPM package and the libobclient RPM package.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh obclient-2.2.1-20221122151945.el7.alios7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:obclient-2.2.1-20221122151945.el7################################# [100%]
## Check whether OceanBase Client is installed ##
[root@xxx /home/admin/rpm]# which obclient
/usr/bin/obclient
```
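The RPM file names used above encode the component name, version, build number, distribution tag, and architecture. As a throwaway sketch (the `rpm_fields` helper is hypothetical, not part of any OceanBase tooling), the trailing fields can be split with plain POSIX parameter expansion, assuming the usual `name-version-release.dist.arch.rpm` convention:

```shell
#!/bin/sh
# Hypothetical helper: extract the trailing fields of an RPM file name.
# Assumes the name-version-release.dist.arch.rpm naming convention.
rpm_fields() {
  base=${1%.rpm}       # drop the .rpm suffix
  arch=${base##*.}     # architecture, e.g. x86_64
  rest=${base%.*}
  dist=${rest##*.}     # distribution tag, e.g. el7
  echo "arch=$arch dist=$dist"
}
rpm_fields oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
# prints: arch=x86_64 dist=el7
```

This can be handy for confirming that the package you obtained matches your OS and CPU architecture before installing it.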
Step 2: Configure directories
Clear the OceanBase directory (if it has been deployed).
If a previous OceanBase environment exists, or an installation issue left behind files that would interfere with the next installation, you can clear the old OceanBase directory. In this case, you must specify the cluster name, `$cluster_name`. Make sure that `$cluster_name` is set to a non-empty value before you run the `rm -rf` commands.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/$cluster_name
-bash-4.2$ rm -rf /data/log1/$cluster_name
-bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/obdemo
-bash-4.2$ rm -rf /data/log1/obdemo
-bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Initialize the OceanBase directory.
We recommend that you store the data of OceanBase Database on a separate disk and soft-link the data directories into the software home directory. In this case, you must specify the cluster name, `$cluster_name`.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
```

Note: The `obdemo` directory is named after the cluster and can be customized. It is used by the startup process.

The check result is as follows:

```shell
-bash-4.2$ cd /home/admin/oceanbase
-bash-4.2$ tree store/
store/
`-- obdemo
    |-- clog -> /data/log1/obdemo/clog
    |-- etc2 -> /data/log1/obdemo/etc2
    |-- etc3 -> /data/1/obdemo/etc3
    |-- slog -> /data/1/obdemo/slog
    `-- sstable -> /data/1/obdemo/sstable

6 directories, 0 files
```
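The directory layout can also be verified programmatically instead of eyeballing the `tree` output. A minimal sketch, assuming the layout created in this step; the `check_links` helper is illustrative and not part of OceanBase:

```shell
#!/bin/sh
# Sketch: confirm each expected symlink under the store directory
# resolves to the intended data or log directory.
check_links() {  # usage: check_links <store_dir> <data_dir> <log_dir>
  ok=0
  for t in etc3 sstable slog; do
    [ "$(readlink "$1/$t")" = "$2/$t" ] || { echo "BAD link: $t"; ok=1; }
  done
  for t in clog etc2; do
    [ "$(readlink "$1/$t")" = "$3/$t" ] || { echo "BAD link: $t"; ok=1; }
  done
  [ "$ok" -eq 0 ] && echo "layout OK"
  return "$ok"
}
# Example: check_links /home/admin/oceanbase/store/obdemo /data/1/obdemo /data/log1/obdemo
```

Running the helper against the cluster's store directory prints `layout OK` when every link resolves correctly, which is useful to script before starting the observer process.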
Step 3: Initialize the OceanBase cluster
Note: The example IP addresses have been desensitized and are not required values. Specify your own server IP addresses when you deploy OceanBase Database.
Start the observer process on each node.
Start the observer process on each node as the `admin` user.

Note: In a three-replica deployment, the startup parameters differ from node to node. When you start the observer process, you only need to specify the servers (three or more) that host RootService; you do not need to specify all servers when you create the cluster. After the cluster is created, you can add more servers.

```shell
cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
```

The parameters are described as follows:
| Parameter | Description |
|---|---|
| `-I`/`-i` | `-I`: the IP address of the server to start. In a multi-server deployment, `127.0.0.1` cannot be specified as the target IP address. We recommend that you start the server by IP address (`-I 10.10.10.1`). `-i`: the name of the network interface card (NIC). You can view the available NIC names with the `ifconfig` command. Note: You can specify both the IP address and the NIC name (`-I 10.10.10.1 -i eth0`), but we recommend that you do not. |
| `-p` | The SQL service port, generally set to `2881`. |
| `-P` | The RPC port, generally set to `2882`. |
| `-n` | The cluster name. Custom cluster names must be unique. |
| `-z` | The zone to which the observer process belongs. |
| `-d` | The home directory of the cluster, created when the cluster is initialized. Apart from the cluster name `$cluster_name`, do not change the other parts of the path. |
| `-c` | The cluster ID, a group of numbers. Custom cluster IDs must be unique. |
| `-l` | The log level. |
| `-r` | The RootService list, in the format `$ip:2882:2881`; entries are separated by semicolons. |
| `-o` | Optional. Specifies cluster startup parameters (configuration items); multiple items are separated by commas. Select appropriate items and values based on actual needs to optimize cluster performance and resource utilization. Commonly used items are listed below the table. |

Commonly used cluster startup configuration items:

- `cpu_count`: the total number of CPUs available to the system.
- `system_memory`: the memory reserved for tenant ID `500`, that is, the internal reserved memory of OceanBase Database. When machine memory is limited, consider reducing this value; note that memory shortages may then occur during performance testing.
- `memory_limit`: the total available memory size.
- `datafile_size`: the disk space occupied by the OceanBase Database data file `sstable` (initialized once). Based on the available space in `/data/1/`, we recommend a value of no less than `100G`.
- `datafile_disk_percentage`: the percentage of total disk space occupied by disk data files.
- `datafile_next`: the step size for automatic expansion of disk data files.
- `datafile_maxsize`: the maximum space for automatic expansion of disk data files.
- `config_additional_dir`: additional directories in which redundant copies of the local configuration file are stored.
- `log_disk_size`: the size of the redo log disk.
- `log_disk_percentage`: the percentage of total disk space occupied by redo logs.
- `syslog_level`: the system log level.
- `syslog_io_bandwidth_limit`: the maximum disk I/O bandwidth that system logs can occupy; system logs exceeding this limit are discarded.
- `max_syslog_file_count`: the maximum number of log files kept before log file recycling starts.
- `enable_syslog_recycle`: whether log files generated before startup are considered by the recycling logic; used together with `max_syslog_file_count`.

`datafile_size`, `datafile_disk_percentage`, `datafile_next`, and `datafile_maxsize` work together to implement automatic expansion of disk data files. For more details, see Configuring Dynamic Expansion of Disk Data Files. For more cluster configuration information, see Configuration Items Overview - Cluster-Level Configuration Items.

Here is an example:
zone1:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

zone2:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.2 -P 2882 -p 2881 -z zone2 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

zone3:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.3 -P 2882 -p 2881 -z zone3 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

You can run the following commands to check whether the observer process has started successfully:
- The `netstat -ntlp` command. If ports `2881` and `2882` are in the LISTEN state, the observer process has started successfully.
- The `ps -ef|grep observer` command. This command returns information about the observer process.
Here is an example:
```shell
-bash-4.2$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      11114/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      11114/observer
...
-bash-4.2$ ps -ef|grep observer
admin    11114     0 40 16:18 ?        00:00:17 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
```

Perform the bootstrap operation to initialize the cluster.
Use OBClient to connect to any server. The password is empty.
```shell
[root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881
obclient> SET SESSION ob_query_timeout=1000000000;
Query OK, 0 rows affected
obclient> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882',ZONE 'zone2' SERVER '10.10.10.2:2882',ZONE 'zone3' SERVER '10.10.10.3:2882';
Query OK, 0 rows affected
```

Note: If this step fails, check the following:

- Whether the observer startup parameters are correctly specified.
- Whether the permissions on the observer-related directories are correct.
- Whether the log directory has sufficient space. If logs share one large directory with data, they consume space in the data directory.
- Whether the clocks of the nodes are synchronized.
- Whether the nodes have sufficient memory.

If any of these checks fail, clear the OceanBase directory and deploy OceanBase Database again.
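Some of the checks in this note can be scripted before attempting the bootstrap. A minimal sketch, assuming a Linux host and the directory layout used in this topic; the `check_owner` and `check_free_gb` helpers and the thresholds are illustrative, not official OceanBase tooling or values:

```shell
#!/bin/sh
# Illustrative pre-bootstrap checks; helper names and thresholds are
# assumptions for this sketch, not official OceanBase values.
check_owner() {  # usage: check_owner <dir> <expected_user>
  if [ "$(stat -c %U "$1" 2>/dev/null)" = "$2" ]; then
    echo "owner OK: $1"
  else
    echo "owner BAD: $1"
  fi
}
check_free_gb() {  # usage: check_free_gb <dir> <min_gb>
  free_kb=$(df -Pk "$1" 2>/dev/null | awk 'NR==2 {print $4}')
  if [ -n "$free_kb" ] && [ "$free_kb" -ge $(( $2 * 1024 * 1024 )) ]; then
    echo "space OK: $1"
  else
    echo "space LOW or missing: $1"
  fi
}
# Example invocations (paths from this topic, thresholds illustrative):
check_owner /home/admin/oceanbase admin
check_free_gb /data/1 100     # data files
check_free_gb /data/log1 30   # redo logs
```

Clock synchronization and available memory can be checked in a similar way, for example with `chronyc tracking` (or `ntpstat`) and by reading `MemAvailable` from `/proc/meminfo`.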
Verify that the cluster has been initialized.
After the cluster is initialized, execute the `SHOW DATABASES;` statement to verify it. If the `oceanbase` database appears in the list of databases, the cluster has been initialized.

Change the password.
The default password of the `root` user in the `sys` tenant is empty. You must change this password after the initialization is completed.

```sql
ALTER USER root IDENTIFIED BY '******';
```
Next steps
After the cluster is created, you can create user tenants as needed.
For more information about how to create a user tenant, see Create a tenant.