OceanBase Database can be deployed with a single replica. A single-replica OceanBase cluster can still be scaled out by adding nodes; such a cluster is also referred to as a standalone cluster.
This topic describes how to deploy a single-replica OceanBase cluster by using the CLI.
Prerequisites
Before installing OceanBase Database, make sure that the following conditions are met:
The OBServer node has been configured. For more information, see Server configuration and Initialize the OBServer node using oat-cli.
You have obtained the RPM package of OceanBase Database. For more information, see Prepare the installation packages.
Procedure
Step 1: Install the RPM package
Install the OceanBase Database RPM package.
In this example, `$rpm_dir` indicates the directory where the RPM package is stored, and `$rpm_name` indicates the name of the RPM package.

```shell
[root@xxx /]# cd $rpm_dir
[root@xxx $rpm_dir]# rpm -ivh $rpm_name
```

Note

OceanBase Database is installed in the `/home/admin/oceanbase` directory.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:oceanbase-4.2.0.0-100000052023073################################# [100%]
```

Install OceanBase Client.
An OceanBase instance is compatible with Oracle or MySQL. If you create an Oracle tenant, a Java program that connects to the tenant must use the Java driver file (oceanbase-client-*.jar) provided by OceanBase. To access an Oracle tenant from the command line, you must also install OBClient.
OBClient is the OceanBase command-line client. It can access both MySQL and Oracle tenants of OceanBase.
Here is an example:
```shell
[root@xxx $rpm_dir]# rpm -ivh obclient-1.2.6-20210510164331.el7.alios7.x86_64.rpm
## Check whether OBClient is installed. ##
[root@xxx $rpm_dir]# which obclient
/usr/bin/obclient
```
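Before running `rpm -ivh`, it can help to confirm that the package matches your OS and architecture. The snippet below is a small illustrative sketch (not part of the OceanBase tooling) that extracts the distribution and architecture fields from an RPM file name with plain shell parameter expansion:

```shell
# Illustrative only: split an RPM file name into its trailing fields.
rpm_name="oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm"

base=${rpm_name%.rpm}   # drop the .rpm suffix
arch=${base##*.}        # last dot-separated field: the architecture
rest=${base%.*}         # remove the architecture field
dist=${rest##*.}        # next field: the target distribution

echo "arch=$arch dist=$dist"
# Compare against the running system; `uname -m` should report the same architecture.
```

For the example package above, this prints `arch=x86_64 dist=el7`, which should match an el7 x86_64 host.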
Step 2: Configure directories
Clear the OceanBase directory (do not perform this step on the first deployment).
Perform this cleanup when you are removing a previous OceanBase environment, or when a failed installation or deployment has left the environment in disorder or produced files that would interfere with the next installation. Here, `$cluster_name` refers to the cluster name.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/$cluster_name
-bash-4.2$ rm -rf /data/log1/$cluster_name
-bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/obdemo
-bash-4.2$ rm -rf /data/log1/obdemo
-bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef|grep observer
```

Initialize the OceanBase directory.
We recommend that you install the OceanBase Database data files on an independent disk and link the disk to the software home directory through soft links. Here, `$cluster_name` stands for the name of the cluster.

Note

Starting from V4.3.0, OceanBase Database allows you to deploy the `slog` directory independently of the data disk. In other words, `slog` and the data files no longer need to be on the same disk; you can configure `slog` and `clog` to use the same SSD. For more information about the installation directories of OceanBase Database, see OBServer node installation directory structure.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
-bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
-bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
```

Note

The `obdemo` directory is named after the cluster and can be customized. It is used by the startup process.

The check result is as follows:

```shell
-bash-4.2$ cd /home/admin/oceanbase
-bash-4.2$ tree store/
store/
`-- obdemo
    |-- clog -> /data/log1/obdemo/clog
    |-- etc2 -> /data/log1/obdemo/etc2
    |-- etc3 -> /data/1/obdemo/etc3
    |-- slog -> /data/1/obdemo/slog
    `-- sstable -> /data/1/obdemo/sstable

6 directories, 0 files
```
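If you want to rehearse this layout before touching the real disks, the sketch below recreates the same directory tree and symlinks under a throwaway prefix. It is purely illustrative: the `$base` temporary prefix stands in for the real `/data` and `/home/admin` paths.

```shell
# Illustrative rehearsal of the store layout under a temporary prefix.
base=$(mktemp -d)
cluster_name=obdemo

# Data-disk and log-disk directories (normally /data/1 and /data/log1).
for t in etc3 sstable slog; do
  mkdir -p "$base/data/1/$cluster_name/$t"
done
for t in clog etc2; do
  mkdir -p "$base/data/log1/$cluster_name/$t"
done
mkdir -p "$base/home/admin/oceanbase/store/$cluster_name"

# Link both disks into the store directory, mirroring the soft links above.
for t in etc3 sstable slog; do
  ln -s "$base/data/1/$cluster_name/$t" "$base/home/admin/oceanbase/store/$cluster_name/$t"
done
for t in clog etc2; do
  ln -s "$base/data/log1/$cluster_name/$t" "$base/home/admin/oceanbase/store/$cluster_name/$t"
done

# Spot-check that a symlink points at the log disk, as in the tree output above.
readlink "$base/home/admin/oceanbase/store/$cluster_name/clog"
```

Running the same loop structure against the real paths as the `admin` user reproduces the layout shown in the `tree` output.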
Step 3: Initialize the OceanBase cluster
Note
The example IP addresses are desensitized placeholders, not real installation addresses. Use your own server IP addresses during deployment.
Start the observer process.
Notice

You must start the observer process as the `admin` user.

OceanBase Database allows you to start the observer process with an IPv4 or IPv6 address.

To start the observer process with an IPv4 address:

```shell
cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
```

To start the observer process with an IPv6 address:

```shell
cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -6 {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '[$ip]:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
```

The parameters are described as follows:
- `-6`: Required if you start the observer process with an IPv6 address.
- `-I` / `-i`: `-I` specifies the IP address of the node to be started. In a multi-server deployment, you cannot specify 127.0.0.1 as the target IP address; we recommend that you specify the IP address explicitly (for example, `-I 10.10.10.1`). `-i` specifies the name of the network interface card; you can view NIC names with the `ifconfig` command. You can also specify both the IP address and the NIC name (`-I 10.10.10.1 -i eth0`), but we do not recommend that you do so.
- `-p`: the SQL service port number, generally set to 2881.
- `-P`: the RPC port number, generally set to 2882.
- `-n`: the cluster name. Custom cluster names must be unique.
- `-z`: the zone to which the started observer process belongs.
- `-d`: the directory where the cluster data is located, created when you initialize the cluster. Except for the cluster name `$cluster_name`, the other path components are fixed.
- `-c`: the cluster ID, a group of digits. Custom cluster IDs must be unique.
- `-l`: the log level.
- `-r`: the Root Service (RS) list, in the format `$ip:2882:2881`, with entries separated by semicolons. It indicates the information of the Root Service. Notice: when you start the observer process with an IPv6 address, you must wrap the IP address in `[]`.
- `-o`: an optional comma-separated list of cluster startup parameters (configuration items). Select the parameters and values that fit your environment to balance cluster performance and resource utilization. Commonly used configuration items include:
  - `cpu_count`: the total number of CPUs available to the system.
  - `system_memory`: the memory reserved for the tenant whose tenant ID is `500`, that is, the internal memory reserved by OceanBase Database. On machines with limited memory, you can reduce this value, but note that insufficient memory may then surface during performance testing.
  - `memory_limit`: the total available memory size.
  - `datafile_size`: the disk space occupied by data files, that is, the size of the OceanBase Database data file `sstable` (initialized once). Based on the available space in `/data/1/`, we recommend no less than `100G`.
  - `datafile_disk_percentage`: the percentage of total disk space occupied by disk data files.
  - `datafile_next`: the step size for automatic expansion of disk data files.
  - `datafile_maxsize`: the maximum space to which disk data files can automatically expand.
  - `config_additional_dir`: additional directories in which local configuration files are stored redundantly.
  - `log_disk_size`: the size of the redo log disk.
  - `log_disk_percentage`: the percentage of total disk space occupied by redo logs.
  - `syslog_level`: the system log level.
  - `syslog_io_bandwidth_limit`: the upper limit of disk I/O bandwidth that system logs can occupy; system logs exceeding this bandwidth are discarded.
  - `max_syslog_file_count`: the maximum number of log files to retain before log files are recycled.
  - `enable_syslog_recycle`: whether old log files from before startup are included in the recycling logic; used together with `max_syslog_file_count`.

`datafile_size`, `datafile_disk_percentage`, `datafile_next`, and `datafile_maxsize` work together to enable automatic expansion of disk data files. For more details, see Configuring Dynamic Expansion of Disk Data Files. For more cluster configuration information, see Configuration Items Overview - Cluster-Level Configuration Items.

Here is an example of how to start the observer process with an IPv4 address:
```shell
[root@xxx /home/admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

Run the following command to check whether the observer process has started successfully:
- Use the `netstat -ntlp` command. If the process listens on ports `2881` and `2882`, it has started successfully.
- Use the `ps -ef|grep observer` command to view the observer process information.

Here is an example:

```shell
-bash-4.2$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      11111/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      11111/observer
...
-bash-4.2$ ps -ef|grep observer
admin     11111      0 43 17:58 ?        00:00:14 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
```

Here is an example of how to start the observer process with an IPv6 address:
```shell
[root@xxx /home/admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -6 -I xxxx:xxxx:xxxx:xxxx:xxx:xxxx:xxxx:ebd8 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '[xxxx:xxxx:xxxx:xxxx:xxx:xxxx:xxxx:ebd8]:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

Perform the bootstrap operation to initialize the cluster.
Use OBClient to connect to the started observer process. The password is empty.

```shell
[root@xxx /home/admin]# obclient -h127.0.0.1 -uroot -P2881 -p******
```

Set the maximum execution duration of SQL queries.

```sql
obclient [(none)]> SET SESSION ob_query_timeout=1000000000;
```

Specify the list of Root Service machines and start the cluster.

Use an IPv4 address to specify the list of Root Service machines and start the cluster:

```sql
obclient [(none)]> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882';
```

Use an IPv6 address to specify the list of Root Service machines and start the cluster:

```sql
obclient [(none)]> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '[xxxx:xxxx:xxxx:xxxx:xxx:xxxx:xxxx:ebd8]:2882';
```
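The only difference between the two BOOTSTRAP statements is that an IPv6 address must be wrapped in `[]`. If you generate these statements in a script, a small helper can apply the brackets automatically. The function below is a hypothetical sketch, not an OceanBase tool; it treats any address containing a colon as IPv6:

```shell
# Hypothetical helper: format a server address for the RS list or BOOTSTRAP,
# wrapping IPv6 addresses in [] as OceanBase requires.
format_rs_addr() {
  addr=$1
  port=$2
  case $addr in
    *:*) echo "[$addr]:$port" ;;   # contains ':' -> IPv6, needs brackets
    *)   echo "$addr:$port" ;;     # IPv4, used as-is
  esac
}

format_rs_addr 10.10.10.1 2882
format_rs_addr 2001:db8::1 2882
```

For the two calls above this prints `10.10.10.1:2882` and `[2001:db8::1]:2882`, matching the IPv4 and IPv6 forms shown in the statements.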
Notice
If this step fails, check the following: whether the observer startup parameters are correctly specified; whether the directories on the OBServer node have the correct permissions; whether the log directory has sufficient space (if the data directory and the log directory share the same parent directory, the data directory can consume the log directory's space); and whether the node has sufficient memory. Then clear the OceanBase directory and start over.
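One of the checks above, free space in the data and log directories, is easy to script before you bootstrap. The helper below is an illustrative sketch (`check_free_gb` is not an OceanBase tool; the directories and thresholds are examples to adjust to your `datafile_size` and `log_disk_size` settings):

```shell
# Illustrative precheck: warn when a directory's filesystem is short on space.
check_free_gb() {
  dir=$1
  need_gb=$2
  # df -Pk prints POSIX-format output in KiB; field 4 of row 2 is available space.
  free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  free_gb=$((free_kb / 1024 / 1024))
  if [ "$free_gb" -ge "$need_gb" ]; then
    echo "OK: ${free_gb}G free in $dir (need ${need_gb}G)"
  else
    echo "WARN: only ${free_gb}G free in $dir (need ${need_gb}G)"
  fi
}

# Example thresholds only; on a real node you might check /data/1 and /data/log1.
check_free_gb /tmp 0
```

Run it against each deployment directory before starting the observer process, so a bootstrap failure caused by disk space never gets that far.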
Verify whether the cluster has been initialized.
After the bootstrap initialization is completed, execute the `SHOW DATABASES;` command to verify whether the initialization was successful. If the query result lists the `oceanbase` database, the cluster has been initialized successfully.
Change the password.
The password of the `root` user in the `sys` tenant is empty by default. We recommend that you change the password after initialization succeeds.

```sql
ALTER USER root IDENTIFIED BY '******';
```
Next steps
After the cluster is created, you can create user tenants as needed.
For more information about how to create a user tenant, see Create a tenant.