OceanBase Database can be deployed with a single replica. A single-replica OceanBase cluster can still be scaled out by adding nodes, and it is also referred to as a standalone cluster.
This topic guides you through deploying a single-replica OceanBase cluster by using the CLI.
Prerequisites
Before installing OceanBase Database, make sure that the following conditions are met:
The OBServer node has been configured. For more information, see Configure servers and Initialize the OBServer node using oat-cli.
You have obtained the RPM package of OceanBase Database. For more information, see Prepare installation packages.
Procedure
Step 1: Install the RPM package
Install the OceanBase Database RPM package.
In this step, `$rpm_dir` indicates the directory where the RPM package is stored, and `$rpm_name` indicates the name of the RPM package.

```shell
[root@xxx /]# cd $rpm_dir
[root@xxx $rpm_dir]# rpm -ivh $rpm_name
```

Note

OceanBase Database is installed in the `/home/admin/oceanbase` directory.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:oceanbase-4.2.0.0-100000052023073################################# [100%]
```

Install OceanBase Client.
If an OceanBase instance is compatible with Oracle, you need to install the Java driver file (oceanbase-client-*.jar) provided by OceanBase Database to connect Java applications. If you want to access an Oracle tenant from the command line, you also need to install OBClient.
OBClient is an OceanBase command-line client that can access both MySQL and Oracle tenants of OceanBase Database.
Here is an example:

```shell
[root@xxx $rpm_dir]# rpm -ivh obclient-1.2.6-20210510164331.el7.alios7.x86_64.rpm
## Check whether OBClient has been installed. ##
[root@xxx $rpm_dir]# which obclient
/usr/bin/obclient
```
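The package filenames above follow the pattern `<name>-<version>-<build>.<dist>.<arch>.rpm`. If you script your installs, a small helper can pull the version out of a filename for logging or sanity checks. The following is an illustrative sketch, not part of the official procedure; it assumes the package name itself contains no hyphen (as with `oceanbase` and `obclient`):

```shell
# Sketch: extract the version field from an RPM filename such as
# oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm.
# Assumes the package name contains no hyphen.
rpm_version() {
  local base=${1##*/}    # strip any leading directory path
  base=${base%.rpm}      # drop the .rpm suffix
  local rest=${base#*-}  # drop the package name
  echo "${rest%%-*}"     # the version is the next hyphen-delimited field
}

rpm_version oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm   # prints 4.2.0.0
```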
Step 2: Configure directories
Clear the OceanBase directory (skip this step on the first deployment).

If you want to remove a previous OceanBase environment, or the environment is corrupted or contains residual files that would affect the next installation, you can directly clear the old OceanBase directory. In this case, you must specify the cluster name.
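The cleanup commands that follow interpolate `$cluster_name` directly into `rm -rf` paths, so an unset or empty variable would make them target the parent directories instead. A defensive wrapper such as the one below may help; it is an illustrative sketch, not part of the official procedure, and it only prints the paths until you replace `echo` with `rm -rf`:

```shell
# Sketch: refuse to build rm -rf paths from an unset or empty cluster name.
clear_ob_dirs() {
  # ${1:?...} aborts with an error message if no argument is given.
  local cluster_name=${1:?usage: clear_ob_dirs <cluster_name>}
  for d in "/data/1/$cluster_name" \
           "/data/log1/$cluster_name" \
           "/home/admin/oceanbase/store/$cluster_name"; do
    echo "would remove: $d"   # swap echo for rm -rf after verifying the paths
  done
}

clear_ob_dirs obdemo
```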
`$cluster_name` indicates the cluster name.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/$cluster_name
-bash-4.2$ rm -rf /data/log1/$cluster_name
-bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef | grep observer
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/obdemo
-bash-4.2$ rm -rf /data/log1/obdemo
-bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef | grep observer
```

Initialize the OceanBase directory.
We recommend that you install the data directory of OceanBase Database on a separate disk and then use soft links to link the data directory to the home directory of the software. This keeps the data on dedicated disks while the software home directory stays small. In this step, `$cluster_name` indicates the cluster name.

Note

Starting from V4.3.0, OceanBase Database allows you to separate the `slog` from the data disk. In other words, the `slog` and data files no longer need to be on the same disk, and you can configure `slog` and `clog` to share an SSD. For more information about the installation directories of OceanBase Database, see OBServer node installation directory structure.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
-bash-4.2$ for t in {etc3,sstable,slog}; do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
-bash-4.2$ for t in {clog,etc2}; do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
-bash-4.2$ for t in {etc3,sstable,slog}; do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
-bash-4.2$ for t in {clog,etc2}; do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
```

Note
The `obdemo` directory is named after the cluster and can be customized. It is used by the startup process.

The check result is as follows:

```shell
-bash-4.2$ cd /home/admin/oceanbase
-bash-4.2$ tree store/
store/
`-- obdemo
    |-- clog -> /data/log1/obdemo/clog
    |-- etc2 -> /data/log1/obdemo/etc2
    |-- etc3 -> /data/1/obdemo/etc3
    |-- slog -> /data/1/obdemo/slog
    `-- sstable -> /data/1/obdemo/sstable

6 directories, 0 files
```
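If you script the directory setup, a quick structural check of the symlinks can catch typos before the observer process is started. The following is an illustrative sketch, not part of the official procedure; `check_store_links` is a hypothetical helper, demonstrated here against a throwaway mirror of the layout so it can run anywhere:

```shell
# Sketch: check that each expected store/ symlink resolves to a real directory.
# check_store_links takes the store directory of one cluster,
# for example /home/admin/oceanbase/store/obdemo.
check_store_links() {
  local store=$1 rc=0 t
  for t in etc3 sstable slog clog etc2; do
    if [ -d "$store/$t" ]; then          # [ -d ] follows the link, so a
      echo "ok: $t -> $(readlink -f "$store/$t")"   # dangling link fails
    else
      echo "missing or dangling: $t"
      rc=1
    fi
  done
  return $rc
}

# Demonstration against a temporary mirror of the layout (hypothetical paths):
root=$(mktemp -d)
mkdir -p "$root"/data/1/obdemo/{etc3,sstable,slog} "$root"/data/log1/obdemo/{clog,etc2}
mkdir -p "$root"/store/obdemo
for t in etc3 sstable slog; do ln -s "$root/data/1/obdemo/$t" "$root/store/obdemo/$t"; done
for t in clog etc2; do ln -s "$root/data/log1/obdemo/$t" "$root/store/obdemo/$t"; done
check_store_links "$root/store/obdemo"
```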
Step 3: Initialize the OceanBase cluster
Note
The example IP address is anonymized and for reference only. You must specify your server's actual IP address during deployment.
Start the observer process.
You must start the observer process as the `admin` user. The sample statement is as follows:

```shell
cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
```

The parameters are described as follows:
- `-I | -i`: `-I` specifies the IP address of the node to start. In a multi-server deployment, `127.0.0.1` cannot be used as the target IP address; we recommend that you start the node with an explicit IP address, for example, `-I 10.10.10.1`. `-i` specifies the name of the network interface card (NIC); you can run the `ifconfig` command to view NIC names.

  Note: You can specify both the IP address and the NIC name (for example, `-I 10.10.10.1 -i eth0`), but we recommend that you do not specify both parameters.

- `-p`: the SQL service port, generally set to 2881.
- `-P`: the RPC port, generally set to 2882.
- `-n`: the cluster name. You can customize it, as long as it is unique among your clusters.
- `-z`: the zone to which the observer process belongs.
- `-d`: the directory where the cluster data is located, which is created when the cluster is initialized. Apart from the cluster name `$cluster_name`, do not change the other parts of the path.
- `-c`: the cluster ID, a group of digits. You can customize it, as long as it is unique among your clusters.
- `-l`: the log level.
- `-r`: the Root Service information, in the format `$ip:2882:2881`. Multiple entries are separated with semicolons.
- `-o`: an optional, comma-separated list of cluster startup parameters. Choose appropriate startup parameters and specify proper values to optimize the cluster's performance and resource utilization. Common cluster startup parameters include:
  - `cpu_count`: the total number of CPU cores.
  - `system_memory`: the memory reserved for tenant ID `500`, that is, the internal memory of OceanBase Database. If system memory is limited, you can set `system_memory` to a smaller value; note that the system may then run out of memory during performance tests.
  - `memory_limit`: the total size of available memory.
  - `datafile_size`: the disk space occupied by data files, that is, the size of the `sstable` file of OceanBase Database. Set it based on the available space of `/data/1/`; we recommend at least `100G`.
  - `datafile_disk_percentage`: the percentage of total disk space occupied by data files.
  - `datafile_next`: the step size for automatic expansion of disk data files.
  - `datafile_maxsize`: the maximum size to which disk data files can automatically expand.
  - `config_additional_dir`: additional directories that store redundant copies of the local configuration file.
  - `log_disk_size`: the size of the redo log disk.
  - `log_disk_percentage`: the percentage of total disk space occupied by redo logs.
  - `syslog_level`: the system log level.
  - `syslog_io_bandwidth_limit`: the maximum disk I/O bandwidth available for system logs. System logs exceeding this limit are discarded.
  - `max_syslog_file_count`: the maximum number of log files that can be kept before recycling starts.
  - `enable_syslog_recycle`: the switch for log recycling, used together with `max_syslog_file_count` to determine whether to recycle old log files.
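When picking values for `memory_limit` and `datafile_size`, it can help to read the node's actual resources first. The sketch below derives illustrative values from `/proc/meminfo` and `df`; the 80% headroom ratio is an assumption for illustration, not official sizing guidance:

```shell
# Sketch: derive example values for memory_limit and datafile_size from
# the node's physical memory and the data disk's free space.
# The 80% ratios are illustrative assumptions, not official recommendations.
suggest_opts() {
  local data_dir=${1:-/data/1}
  local mem_gb avail_gb
  mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
  avail_gb=$(df -P "$data_dir" | awk 'NR == 2 {printf "%d", $4 / 1024 / 1024}')
  echo "memory_limit=$((mem_gb * 8 / 10))G,datafile_size=$((avail_gb * 8 / 10))G"
}

# Example (using /tmp as a stand-in data directory):
suggest_opts /tmp
```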
The `datafile_size`, `datafile_disk_percentage`, `datafile_next`, and `datafile_maxsize` parameters are used together to enable automatic expansion of disk data files. For more information, see Enable dynamic expansion for disk data files. For more information about cluster configurations, see Overview of cluster-level configurations.

Here is an example:
```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

Run the following commands to check whether the observer process has started successfully:
- Run the `netstat -ntlp` command. If the process is listening on ports `2881` and `2882`, it has started successfully.
- Run the `ps -ef | grep observer` command to view the observer process information.
Here is an example:

```shell
-bash-4.2$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      11113/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      11113/observer
...
-bash-4.2$ ps -ef | grep observer
admin    11113     0 43 17:58 ?        00:00:14 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
```

Perform the bootstrap operation.
Use OBClient to connect to the started observer process. The password is empty at this point, so you can omit the `-p` option.

```shell
[root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881
obclient> SET SESSION ob_query_timeout=1000000000;
Query OK, 0 rows affected

obclient> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882';
Query OK, 0 rows affected
```

Note
If an error occurs during this operation, check whether the startup parameters of the observer process and the directory permissions are correct, and whether the log directory has sufficient space. If the node lacks memory resources, the initialization will fail. In that case, resolve the issues first, then clear the OceanBase directory as described in Step 2 and start the process again.
Verify that the cluster has been initialized.
After the cluster is initialized, execute the `SHOW DATABASES;` statement and check whether the `oceanbase` database is listed. If it is, the cluster has been initialized successfully.

Change the password.
The password of the `root` user in the `sys` tenant is empty by default. Change it after the initialization is completed.

```sql
ALTER USER root IDENTIFIED BY '******';
```
Next steps
After the cluster is created, you can create user tenants as needed.
For more information about how to create a user tenant, see Create a tenant.