OceanBase Database supports single-replica deployment. Because a single-replica OceanBase cluster can still be scaled out by adding nodes, it is also referred to as a cluster.
This topic describes how to deploy a single-replica OceanBase cluster by using the CLI.
Prerequisites
Before installing OceanBase Database, make sure that the following conditions are met:
The OBServer node has been configured. For more information, see Server configuration and Initialize the OBServer node using oat-cli.
You have obtained the RPM package of OceanBase Database. For more information, see Prepare the installation packages.
Procedure
Step 1: Install the RPM package
Install the OceanBase Database RPM package.

In this example, `$rpm_dir` indicates the directory where the RPM package is stored, and `$rpm_name` indicates the name of the RPM package.

```shell
[root@xxx /]# cd $rpm_dir
[root@xxx $rpm_dir]# rpm -ivh $rpm_name
```

Note: OceanBase Database will be installed in the `/home/admin/oceanbase` directory.

Here is an example:

```shell
[root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:oceanbase-4.2.0.0-100000052023073################################# [100%]
```

Install OBClient.

An OceanBase instance is compatible with Oracle or MySQL. If an Oracle tenant is created, a Java application needs to connect to the tenant by using the Java driver file (`oceanbase-client-*.jar`) provided by OceanBase. If you want to access an Oracle tenant from the command line, you also need to install OBClient. OBClient is an OceanBase command-line client that can access both MySQL and Oracle tenants of OceanBase Database.

Here is an example:

```shell
[root@xxx $rpm_dir]# rpm -ivh obclient-1.2.6-20210510164331.el7.alios7.x86_64.rpm
## Check whether OBClient is installed. ##
[root@xxx $rpm_dir]# which obclient
/usr/bin/obclient
```
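The post-install checks above can also be scripted. The following is a minimal sketch of a hypothetical helper (it is not shipped with OceanBase) that confirms a binary resolves on `PATH` and that an install prefix directory exists:

```shell
# Hypothetical post-install sanity check (illustrative only, not an
# OceanBase tool): verify that a binary is resolvable on PATH and that
# the expected install prefix directory exists.
check_installed() {
  bin="$1"      # e.g. obclient
  prefix="$2"   # e.g. /home/admin/oceanbase
  if ! command -v "$bin" >/dev/null 2>&1; then
    echo "missing binary: $bin"
    return 1
  fi
  if [ ! -d "$prefix" ]; then
    echo "missing install prefix: $prefix"
    return 1
  fi
  echo "ok"
}
```

For example, `check_installed obclient /home/admin/oceanbase` should print `ok` after both RPM packages are installed.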
Step 2: Configure directories
Clear the OceanBase directory. (Skip this step on the first deployment.)

If you want to clear a previous OceanBase environment, or files are left over from an earlier installation and deployment, you can clear the old OceanBase directory. In the following commands, `$cluster_name` indicates the name of the cluster.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/$cluster_name
-bash-4.2$ rm -rf /data/log1/$cluster_name
-bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef | grep observer
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ kill -9 `pidof observer`
-bash-4.2$ rm -rf /data/1/obdemo
-bash-4.2$ rm -rf /data/log1/obdemo
-bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
-bash-4.2$ ps -ef | grep observer
```

Initialize the OceanBase directory.

We recommend that you place the OceanBase data directory on a separate disk and link it to the software home directory through soft links. In the following commands, `$cluster_name` indicates the name of the cluster.

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
-bash-4.2$ for t in {etc3,sstable,slog}; do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
-bash-4.2$ for t in {clog,etc2}; do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
```

Here is an example:

```shell
[root@xxx admin]# su - admin
-bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog}
-bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2}
-bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
-bash-4.2$ for t in {etc3,sstable,slog}; do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
-bash-4.2$ for t in {clog,etc2}; do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
```

Note: The `obdemo` directory is named after the cluster and can be customized. It is used by the startup process.

The check result is as follows:

```shell
-bash-4.2$ cd /home/admin/oceanbase
-bash-4.2$ tree store/
store/
`-- obdemo
    |-- clog -> /data/log1/obdemo/clog
    |-- etc2 -> /data/log1/obdemo/etc2
    |-- etc3 -> /data/1/obdemo/etc3
    |-- slog -> /data/1/obdemo/slog
    `-- sstable -> /data/1/obdemo/sstable

6 directories, 0 files
```
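As an additional sanity check of this layout, the following sketch (a hypothetical helper, not an OceanBase tool) verifies that each expected entry under a cluster's store directory is a symlink that resolves to a real directory:

```shell
# Hypothetical layout check (illustrative only): every expected
# subdirectory of the store directory must be a symlink whose target
# directory actually exists.
check_store_layout() {
  store_dir="$1"   # e.g. /home/admin/oceanbase/store/obdemo
  for t in etc3 sstable slog clog etc2; do
    link="$store_dir/$t"
    # -L: it is a symlink; -d: the link resolves to a directory.
    if [ ! -L "$link" ] || [ ! -d "$link" ]; then
      echo "BROKEN: $link"
      return 1
    fi
  done
  echo "OK: $store_dir"
}
```

Running `check_store_layout /home/admin/oceanbase/store/obdemo` before starting the observer process catches a missing `mkdir` or a dangling soft link early.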
Step 3: Initialize the OceanBase cluster
Note
The example IP address has been obfuscated and is for demonstration purposes only. You need to use your own server IP address when you deploy OceanBase Database.
Start the observer process.

You must start the observer process as the `admin` user. A sample statement is as follows:

```shell
cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
```

The parameters are described as follows:
| Parameter | Description |
| --- | --- |
| -I / -i | `-I`: the IP address of the node to be started. In a multi-server deployment, 127.0.0.1 cannot be used as the target IP address. We recommend that you start a node with an explicit IP address, for example, `-I 10.10.10.1`. <br>`-i`: the name of the network interface card (NIC). You can run the `ifconfig` command to view NIC names. <br>Note: You can specify both the IP address and the NIC name (for example, `-I 10.10.10.1 -i eth0`), but we recommend that you do not use this method. |
| -p | The SQL service port, which is generally set to 2881. |
| -P | The RPC port, which is generally set to 2882. |
| -n | The cluster name. It can be customized as long as it is unique among clusters. |
| -z | The zone to which the observer process belongs. |
| -d | The directory of the cluster, which is created during cluster initialization. Apart from the cluster name `$cluster_name`, do not change the other parts of the path. |
| -c | The cluster ID, which is a group of digits. It can be customized as long as it is unique among clusters. |
| -l | The log level. |
| -r | The RootService (RS) list, which indicates the location of RootService. Separate multiple RSs with semicolons (`;`); each RS is in the format `$ip:2882:2881`. |
| -o | The list of cluster startup parameters (configurations). This parameter is optional. You can specify multiple configurations and separate them with commas (`,`). Choose appropriate startup parameters and values based on your business needs to optimize the performance and resource utilization of your cluster. |

Some commonly used startup configurations specified through `-o` are described as follows:

- `cpu_count`: the total number of CPU cores.
- `system_memory`: the memory reserved for tenant ID 500, namely the memory internally reserved for OceanBase Database. If system memory is limited, you can decrease this value, but note that the system may then run out of memory during performance tests.
- `memory_limit`: the total size of available memory.
- `datafile_size`: the disk space occupied by data files, that is, the size of the `sstable` file of OceanBase Database. Assess the available space of the `/data/1/` directory and set this parameter to at least `100G`.
- `datafile_disk_percentage`: the percentage of total disk space occupied by data files.
- `datafile_next`: the step size by which data files are automatically expanded when they fill up.
- `datafile_maxsize`: the maximum size to which data files can be automatically expanded.
- `config_additional_dir`: additional directories that store redundant copies of local configuration files.
- `log_disk_size`: the size of the redo log disk.
- `log_disk_percentage`: the percentage of total disk space occupied by the redo log disk.
- `syslog_level`: the system log level.
- `syslog_io_bandwidth_limit`: the upper limit on the disk I/O bandwidth available to system logs. System logs exceeding this limit are discarded.
- `max_syslog_file_count`: the maximum number of log files that can be retained before old log files are recycled.
- `enable_syslog_recycle`: whether to recycle old log files before they are overwritten upon OBServer node startup. It works with the `max_syslog_file_count` parameter to determine whether old log files are recycled.

The `datafile_size`, `datafile_disk_percentage`, `datafile_next`, and `datafile_maxsize` parameters work together to enable automatic expansion of disk data files. For more information, see Configure automatic expansion of disk data files. For more information about cluster configurations, see Overview of system configurations.

Here is an example:
```shell
[root@xxx admin]# su - admin
-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
```

Check whether the observer process has started successfully:
- Run the `netstat -ntlp` command. If the process is listening on ports `2881` and `2882`, it has started successfully.
- Run the `ps -ef | grep observer` command to view the information about the observer process.

Here is an example:

```shell
-bash-4.2$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:2881       0.0.0.0:*          LISTEN   11113/observer
tcp        0      0 0.0.0.0:2882       0.0.0.0:*          LISTEN   11113/observer
...
-bash-4.2$ ps -ef | grep observer
admin    11113     0 43 17:58 ?        00:00:14 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
```

Perform the bootstrap operation for the cluster.
Use OBClient to connect to the started observer process. The password is empty by default.

```shell
[root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881 -p******
obclient> SET SESSION ob_query_timeout=1000000000;
Query OK, 0 rows affected
obclient> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882';
Query OK, 0 rows affected
```

Note: If an error occurs during this operation, check whether the startup parameters of the observer process are correctly specified, whether the permissions on the observer-related directories are correct, whether the log directory has sufficient free space (if data and logs share the same large directory, the data directory may have used up the space), and whether the node has sufficient memory. If the problem persists, clear the OceanBase directory and start over. For more information, see "Clear the OceanBase directory" in Step 2.
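The directory and free-space checks suggested in this note can be scripted before retrying the bootstrap. The following is a minimal sketch (a hypothetical helper, illustrative only) that reports whether a directory exists, is writable, and has at least a given number of gigabytes free on its filesystem:

```shell
# Hypothetical pre-bootstrap diagnostic (illustrative only): check that a
# directory exists, is writable, and that its filesystem has at least
# min_gb gigabytes of free space.
check_dir_space() {
  dir="$1"; min_gb="$2"
  [ -d "$dir" ] || { echo "no such directory: $dir"; return 1; }
  [ -w "$dir" ] || { echo "not writable: $dir"; return 1; }
  # df -Pk guarantees one POSIX-format line per filesystem; column 4 is
  # available space in KB, converted here to whole GB.
  free_gb=$(df -Pk "$dir" | awk 'NR==2 { printf "%d", $4 / 1048576 }')
  if [ "$free_gb" -lt "$min_gb" ]; then
    echo "only ${free_gb}G free under $dir (need ${min_gb}G)"
    return 1
  fi
  echo "ok: ${free_gb}G free under $dir"
}
```

For example, `check_dir_space /home/admin/oceanbase/log 10` flags a log directory with less than 10 GB free before the observer fills it.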
Verify whether the cluster has been initialized.

After the bootstrap initialization is completed, execute the `SHOW DATABASES;` statement. If the `oceanbase` database appears in the query result, the cluster is initialized successfully.

Change the password.

The password of the `root` user in the `sys` tenant is empty by default. Change the password after the initialization is completed.

```sql
ALTER USER root IDENTIFIED BY '******';
```
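Subsequent connections must supply this new password. As an illustrative sketch (a hypothetical wrapper, not part of OBClient), the following builds the connection command line used throughout this topic and refuses an empty password now that one has been set:

```shell
# Hypothetical wrapper (illustrative only): assemble the obclient command
# line from this topic, rejecting an empty root password after bootstrap.
obclient_cmd() {
  host="$1"; port="$2"; pass="$3"
  if [ -z "$pass" ]; then
    echo "error: the root password must not be empty after bootstrap" >&2
    return 1
  fi
  echo "obclient -h$host -uroot -P$port -p$pass"
}
```

For example, `obclient_cmd 127.0.0.1 2881 '******'` prints the command to reconnect with the password you set (replace `******` with your actual password).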
Next steps
After the cluster is created, you can create user tenants as needed.
For more information about how to create a user tenant, see Create a tenant.