Deploy a three-replica OceanBase cluster by using the CLI

2025-01-02 01:58:40  Updated

This topic describes how to deploy a three-replica OceanBase cluster by using the CLI.

Considerations

If you want to implement resource isolation for your OceanBase database, configure cgroups before the deployment.

For more information about resource isolation and cgroups, see Resource isolation and Configure cgroups.

Prerequisites

Before you install OceanBase Database, make sure that your environment meets the deployment prerequisites.

Procedure

Step 1: Install the RPM package

  1. Install the RPM package of OceanBase Database.

    Here, $rpm_dir specifies the directory in which the RPM package is stored, and $rpm_name specifies the name of the RPM package.

    [root@xxx /]# cd $rpm_dir
    [root@xxx $rpm_dir]# rpm -ivh $rpm_name
    

    Note

    By default, OceanBase Database is installed in the /home/admin/oceanbase directory.

    Here is an example:

    [root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
    Preparing...                          ################################# [100%]
    Updating / installing...
       1:oceanbase-4.2.0.0-100000052023073################################# [100%]
    
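    Before you run rpm -ivh, you may want to confirm which release a package file carries. The helper below is a hypothetical sketch (the function name and the filename pattern it assumes are ours, not part of OceanBase tooling):

```shell
# Hypothetical helper: extract the version from an OceanBase RPM file name,
# so you can confirm you are installing the release you intend to.
rpm_version() {   # usage: rpm_version <rpm_file_name>
  echo "$1" | sed -n 's/^oceanbase-\([0-9.]*\)-.*$/\1/p'
}

rpm_version oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm   # prints 4.2.0.0
```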
  2. (Optional) Install OceanBase Client (OBClient).

    OBClient is a CLI tool dedicated to OceanBase Database. You can use it to connect to MySQL tenants and Oracle tenants of OceanBase Database. If you only need to connect to a MySQL tenant, you can also use a MySQL client to access OceanBase Database.

    Notice

    OBClient of a version earlier than V2.2.1 depends on libobclient. Therefore, to use such a version, install libobclient first. To obtain the RPM packages of OBClient and libobclient, contact OceanBase Technical Support.

    Here is an example:

    [root@xxx /home/admin/rpm]# rpm -ivh obclient-2.2.1-20221122151945.el7.alios7.x86_64.rpm
    Preparing...                          ################################# [100%]
    Updating / installing...
       1:obclient-2.2.1-20221122151945.el7################################# [100%]
    
    ## Verify whether the installation is successful. ##
    [root@xxx /home/admin/rpm]# which obclient
    /usr/bin/obclient
    

Step 2: Configure directories

  1. Clear the OceanBase Database directory (not required for the first deployment).

    If you want to clear an old OceanBase Database environment, or if problems during installation and deployment have left the environment in disorder or produced files that would affect the next installation, you can directly clear the old OceanBase Database directory. In the following commands, $cluster_name specifies the cluster name.

    [root@xxx admin]# su - admin
    -bash-4.2$ kill -9 `pidof observer` 
    -bash-4.2$ rm -rf /data/1/$cluster_name 
    -bash-4.2$ rm -rf /data/log1/$cluster_name 
    -bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
    -bash-4.2$ ps -ef|grep observer
    

    Here is an example:

    [root@xxx admin]# su - admin
    -bash-4.2$ kill -9 `pidof observer` 
    -bash-4.2$ rm -rf /data/1/obdemo 
    -bash-4.2$ rm -rf /data/log1/obdemo 
    -bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
    -bash-4.2$ ps -ef|grep observer
    
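    Because the cleanup relies on rm -rf with a variable in the path, an unset $cluster_name would silently target the parent directories themselves. The following defensive sketch is hypothetical (the echo stands in for the real rm -rf until you have reviewed the paths):

```shell
# Defensive variant of the cleanup (a sketch, not official tooling):
# `set -u` makes the shell abort if cluster_name is unset, so
# `rm -rf /data/1/$cluster_name` can never collapse into `rm -rf /data/1/`.
set -u
cluster_name=obdemo

for d in "/data/1/$cluster_name" \
         "/data/log1/$cluster_name" \
         "/home/admin/oceanbase/store/$cluster_name"; do
  echo "would remove: $d"   # replace echo with `rm -rf` once reviewed
done
```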
  2. Initialize the OceanBase Database directory.

    We recommend that you place the data directory of OceanBase Database on an independent disk and link it to the home directory of OceanBase Database by using a symbolic link. Here, $cluster_name specifies the cluster name.

    Note

    OceanBase Database V4.2.4 and later support an independent slog disk so that slog files do not need to share a disk with data files. slog files and clog files can share an SSD. For more information about the installation directory of OceanBase Database, see Structure of the OBServer node installation directory.

    [root@xxx admin]# su - admin
    -bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog} 
    -bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2} 
    -bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
    -bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
    -bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
    

    Here is an example:

    [root@xxx admin]# su - admin
    -bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog} 
    -bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2} 
    -bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
    -bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
    -bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
    

    Note

    The obdemo directory is named after the cluster, and you can change the name. This directory is required when the observer process starts.

    The result is as follows:

    -bash-4.2$ cd /home/admin/oceanbase
    -bash-4.2$ tree store/
    store/
    `-- obdemo
     |-- clog -> /data/log1/obdemo/clog
     |-- etc2 -> /data/log1/obdemo/etc2
     |-- etc3 -> /data/1/obdemo/etc3
     |-- slog -> /data/1/obdemo/slog
     `-- sstable -> /data/1/obdemo/sstable
    
    6 directories, 0 files
    
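    If you want to rehearse this layout before touching real disks, the same mkdir and ln -s sequence can be run against a throwaway base directory (the sandbox path below is ours; a real deployment writes to / directly):

```shell
# Rehearse the directory layout in a temporary sandbox (hypothetical paths).
set -e
base=$(mktemp -d)          # stand-in for the real filesystem root
cluster_name=obdemo
store="$base/home/admin/oceanbase/store/$cluster_name"
mkdir -p "$store"

for t in etc3 sstable slog; do                 # data-disk directories
  mkdir -p "$base/data/1/$cluster_name/$t"
  ln -s "$base/data/1/$cluster_name/$t" "$store/$t"
done
for t in clog etc2; do                         # log-disk directories
  mkdir -p "$base/data/log1/$cluster_name/$t"
  ln -s "$base/data/log1/$cluster_name/$t" "$store/$t"
done

ls "$store"   # clog etc2 etc3 slog sstable
```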

Step 3: Initialize the OceanBase cluster

Note

  • The IP addresses in the sample code are for reference only. You must enter the actual server IP addresses during deployment.
  • OceanBase Database allows you to start the observer process on a server with a specified IPv4 or IPv6 address. This topic describes how to start the observer process on a server with an IPv4 address. For information about how to start the observer process on a server with an IPv6 address, see Deploy a standalone centralized OceanBase database by using the CLI.

  1. Start the observer process on the nodes.

    Start the observer process as the admin user on each node.

    Notice

    In a three-replica OceanBase cluster, the startup parameters of the OBServer nodes are not exactly the same. When you start the observer process, you only need to specify three or more servers that run RootService. You can add more servers after the cluster is created.

    cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
    

    The following list describes the parameters in the command.

    -I | -i
    • -I: the IP address of the OBServer node to be started. In multi-node deployment, you cannot use 127.0.0.1 as the target IP address. We recommend that you use a real IP address, for example, -I 10.10.10.1, to start an OBServer node.
    • -i: the NIC name. You can run the ifconfig command to view the NIC name.

    Note

    OceanBase Database allows you to start an OBServer node by specifying both the IP address and the NIC, for example, -I 10.10.10.1 -i eth0. However, we recommend that you do not use this method.

    -p: the SQL service port number, which is usually set to 2881.
    -P: the RPC port number, which is usually set to 2882.
    -n: the name of the cluster. You can change it, but cluster names must be unique.
    -z: the zone to which the started observer process belongs.
    -d: the primary directory of the cluster, which is created during initialization. Do not modify the fields other than $cluster_name.
    -c: the ID of the cluster. It is a group of digits. You can change it, but cluster IDs must be unique.
    -l: the log level.
    -r: the RootService list, in the format of $ip:2882:2881. Servers in the list are separated with semicolons (;).
    -o: optional. The cluster startup parameters. You can specify values for multiple parameters and separate the settings with commas (,). We recommend that you set appropriate values for cluster startup parameters to optimize cluster performance and resource utilization. Here are some commonly used cluster startup parameters:
    • cpu_count: the total number of system CPU cores.
    • system_memory: the memory reserved for the tenant whose ID is 500, which is the internal reserved memory of OceanBase Database. If the server has a small memory size, you can set this parameter to a smaller value. However, a value that is too small may cause insufficient memory during performance testing.
    • memory_limit: the total memory size available.
    • datafile_size: the size of disk space available for data files. It is the size of the SSTable (for one-time initialization) in OceanBase Database. You can evaluate the value of this parameter based on the available space on /data/1/. We recommend that the value be no less than 100G.
    • datafile_disk_percentage: the percentage of disk space that can be occupied by data files.
    • datafile_next: the auto scaling step of disk space for data files.
    • datafile_maxsize: the maximum size that the disk space for data files can be scaled out to.
    • config_additional_dir: the local directories for storing multiple copies of configuration files for redundancy.
    • log_disk_size: the size of the disk space for storing redo logs.
    • log_disk_percentage: the percentage of the total disk space for storing redo logs.
    • syslog_level: the level of syslogs.
    • syslog_io_bandwidth_limit: the maximum I/O bandwidth available for syslogs. If this value is reached, the remaining syslogs are discarded.
    • max_syslog_file_count: the maximum number of log files that can be retained.
    • enable_syslog_recycle: specifies whether to record the logs generated before the OBServer node is started. You can use this parameter with max_syslog_file_count to specify whether to include earlier log files in the recycling logic.
    You can configure datafile_size, datafile_disk_percentage, datafile_next, and datafile_maxsize together to achieve automatic scale-out of disk space for data files. For more information, see Configure automatic scale-out of disk space for data files. For more information about cluster parameters, see the Cluster-level parameters section in Overview.
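    As a rough aid for picking datafile_size, you can derive a suggestion from the free space on the data disk. The helper below is a hypothetical sketch (the 90% ratio is our assumption, not an OceanBase rule); it takes the available size in KB as an argument so the arithmetic is visible:

```shell
# Hypothetical sizing aid: suggest a datafile_size value as 90% of the
# available space on the data disk. Takes available KB as an argument so
# it can be exercised without a real /data/1 mount.
suggest_datafile_size() {   # usage: suggest_datafile_size <avail_kb>
  echo "$(( $1 * 90 / 100 / 1024 / 1024 ))G"
}

# On a real server you might feed it df output, for example:
#   suggest_datafile_size "$(df --output=avail -k /data/1 | tail -1)"
suggest_datafile_size 1048576000   # 1000 GB free -> prints 900G
```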

    Here is an example:

    zone1:

    [root@xxx admin]# su - admin
    -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
    

    zone2:

    [root@xxx admin]# su - admin
    -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.2 -P 2882 -p 2881 -z zone2 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
    

    zone3:

    [root@xxx admin]# su - admin
    -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.3 -P 2882 -p 2881 -z zone3 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
    
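    The three zone commands above differ only in the IP address and zone name, so they can be generated from one template. The wrapper below is a hypothetical sketch: it echoes each command instead of launching observer, so you can review the output and then run it as the admin user.

```shell
# Hypothetical template for the per-zone startup command (echo only).
cluster_name=obdemo
cluster_id=10001
rs_list='10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881'
opts="system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"

observer_cmd() {   # usage: observer_cmd <ip> <zone>
  echo "cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer" \
       "-I $1 -P 2882 -p 2881 -z $2" \
       "-d /home/admin/oceanbase/store/$cluster_name" \
       "-r '$rs_list' -c $cluster_id -n $cluster_name -o \"$opts\""
}

observer_cmd 10.10.10.1 zone1
observer_cmd 10.10.10.2 zone2
observer_cmd 10.10.10.3 zone3
```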

    You can use the following commands to check whether the observer process has started successfully:

    • Run the netstat -ntlp command. If ports 2881 and 2882 are being listened on, the observer process has started.
    • Run the ps -ef|grep observer command. If information about the observer process is returned, the process is running.

    Here is an example:

    -bash-4.2$ netstat -ntlp
    (Not all processes could be identified, non-owned process info
    will not be shown, you would have to be root to see it all.)
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      11114/observer
    tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      11114/observer
    ...        ...    ...                       ...                     ...         ...
    
    -bash-4.2$ ps -ef|grep observer
    admin     11114      0 40 16:18 ?        00:00:17 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
    
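    The port check can also be scripted. The helper below is a hypothetical sketch that takes the listener listing as an argument, so it can be tested without a running observer; on a real node you would pass it the output of netstat -ntl:

```shell
# Hypothetical check: succeed only if every given port appears as a local
# listening address in the supplied netstat output.
ports_listening() {   # usage: ports_listening "<netstat -ntl output>" <port>...
  out=$1; shift
  for p in "$@"; do
    printf '%s\n' "$out" | grep -q ":$p " || return 1
  done
}

# Real usage (assumes netstat is installed):
#   ports_listening "$(netstat -ntl)" 2881 2882 && echo "observer is listening"
```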
  2. Perform the bootstrap operation on the cluster.

    Use OBClient to connect to any node as the root user. The initial password is empty.

    [root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881 -p******
    
    obclient> SET SESSION ob_query_timeout=1000000000;
    Query OK, 0 rows affected
    
    obclient> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882',ZONE 'zone2' SERVER '10.10.10.2:2882',ZONE 'zone3' SERVER '10.10.10.3:2882';
    Query OK, 0 rows affected
    

    Notice

    If an error is reported in this step, possible causes include:

    • A startup parameter of the observer process is incorrect.
    • The privileges on the directories related to the OBServer nodes are incorrect.
    • The disk space of the log directory does not meet the required proportion. For example, when the log directory shares the same upper-level directory with the data directory, the data directory may occupy so much space that the log directory is left with too little.
    • The clocks of the OBServer nodes are not synchronized.
    • The memory resources of the nodes are insufficient.

    Troubleshoot these issues, clear the OceanBase Database directory, and then redeploy.

  3. Verify whether the cluster is initialized.

    After you perform the bootstrap operation, execute the SHOW DATABASES; statement. If oceanbase appears in the database list, the cluster has been initialized.

    obclient [(none)]> SHOW DATABASES;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | LBACSYS            |
    | mysql              |
    | oceanbase          |
    | ORAAUDITOR         |
    | SYS                |
    | sys_external_tbs   |
    | test               |
    +--------------------+
    8 rows in set
    
  4. Change the password.

    By default, the password of the root user of the sys tenant is empty. After the initialization succeeds, you must change the password.

    obclient> ALTER USER root IDENTIFIED BY '******';
    Query OK, 0 rows affected
    

What to do next

After the cluster is created, you can create user tenants based on your business needs.

For more information about how to create a user tenant by using the CLI, see Create a tenant.
