Deploy a three-replica OceanBase cluster by using the CLI


This topic describes how to deploy a three-replica OceanBase cluster by using the CLI.

Considerations

If the database to be deployed requires resource isolation, you must configure cgroups before you deploy the database.

For more information about how to configure resource isolation and cgroups, see Resource isolation and Configure cgroups.
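
For example, before deployment you can check whether cgroups are mounted on the server. This is a minimal check that assumes a cgroup v1 hierarchy mounted under /sys/fs/cgroup:

    [root@xxx /]# mount | grep cgroup
    [root@xxx /]# ls /sys/fs/cgroup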

Prerequisites

Before you install OceanBase Database, make sure that your servers and operating system environment are configured as required for deployment.

Procedure

Step 1: Install the RPM package

  1. Install the RPM package for OceanBase Database.

    Here, $rpm_dir specifies the directory in which the RPM package is stored, and $rpm_name specifies the name of the RPM package.

    [root@xxx /]# cd $rpm_dir
    [root@xxx $rpm_dir]# rpm -ivh $rpm_name
    

    Note

    By default, OceanBase Database is installed in the /home/admin/oceanbase directory.

    Here is an example:

    [root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
    Preparing...                          ################################# [100%]
    Updating / installing...
       1:oceanbase-4.2.0.0-100000052023073################################# [100%]
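
    To confirm the installation, you can query the RPM database and list the default installation directory. These checks are optional; the package name comes from the example above:

    [root@xxx /home/admin/rpm]# rpm -qa | grep oceanbase
    [root@xxx /home/admin/rpm]# ls /home/admin/oceanbase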
    
  2. (Optional) Install OBClient.

    OceanBase Client (OBClient) is a CLI tool dedicated to OceanBase Database. You can use it to connect to MySQL tenants and Oracle tenants of OceanBase Database. If you only need to connect to a MySQL tenant, you can also use a MySQL client to access OceanBase Database.

    Notice

    OBClient of a version earlier than V2.2.1 depends on OceanBase Connector/C. Therefore, you must first install OceanBase Connector/C.
    Contact OceanBase Technical Support to obtain the RPM packages of OBClient and OceanBase Connector/C.

    Here is an example:

    [root@xxx /home/admin/rpm]# rpm -ivh obclient-2.2.1-20221122151945.el7.alios7.x86_64.rpm
    Preparing...                          ################################# [100%]
    Updating / installing...
       1:obclient-2.2.1-20221122151945.el7################################# [100%]
    
    ## Verify that the installation is successful. ##
    [root@xxx /home/admin/rpm]# which obclient
    /usr/bin/obclient
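
    You can also check the client version; this assumes obclient supports the standard MySQL-style --version option:

    [root@xxx /home/admin/rpm]# obclient --version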
    

Step 2: Configure directories

  1. Clear the OceanBase Database directory (not required for the first deployment).

    If you want to remove an old OceanBase Database environment, or if problems during installation and deployment have left the environment in a disordered state or produced files that would affect the next installation, you can directly clear the old OceanBase Database directory. In the following commands, $cluster_name specifies the cluster name.

    [root@xxx admin]# su - admin
    -bash-4.2$ kill -9 `pidof observer`
    -bash-4.2$ rm -rf /data/1/$cluster_name
    -bash-4.2$ rm -rf /data/log1/$cluster_name
    -bash-4.2$ rm -rf /home/admin/oceanbase/store/$cluster_name /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
    -bash-4.2$ ps -ef|grep observer
    

    Here is an example:

    [root@xxx admin]# su - admin
    -bash-4.2$ kill -9 `pidof observer`
    -bash-4.2$ rm -rf /data/1/obdemo
    -bash-4.2$ rm -rf /data/log1/obdemo
    -bash-4.2$ rm -rf /home/admin/oceanbase/store/obdemo /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
    -bash-4.2$ ps -ef|grep observer
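
    Note that ps -ef|grep observer also matches the grep command itself. If that is the only line returned, no observer process remains. Alternatively, pgrep prints only the IDs of matching processes:

    -bash-4.2$ pgrep observer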
    
  2. Initialize the OceanBase Database directory.

    We recommend that you place the data directories of OceanBase Database on an independent disk and link them to the home directory of OceanBase Database by using soft links. In the following commands, $cluster_name specifies the cluster name.

    [root@xxx admin]# su - admin
    -bash-4.2$ mkdir -p /data/1/$cluster_name/{etc3,sstable,slog} 
    -bash-4.2$ mkdir -p /data/log1/$cluster_name/{clog,etc2} 
    -bash-4.2$ mkdir -p /home/admin/oceanbase/store/$cluster_name
    -bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
    -bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
    

    Here is an example:

    [root@xxx admin]# su - admin
    -bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog} 
    -bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2} 
    -bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
    -bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
    -bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
    

    Note

    The obdemo directory is named after the cluster, and you can use a different name. The cluster name is required when the observer process starts.

    The result is as follows:

    -bash-4.2$ cd /home/admin/oceanbase
    -bash-4.2$ tree store/
    store/
    `-- obdemo
     |-- clog -> /data/log1/obdemo/clog
     |-- etc2 -> /data/log1/obdemo/etc2
     |-- etc3 -> /data/1/obdemo/etc3
     |-- slog -> /data/1/obdemo/slog
     `-- sstable -> /data/1/obdemo/sstable
    
    6 directories, 0 files
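
    All of these directories must be readable and writable by the admin user that starts the observer process. If any of them were created as the root user, you can transfer ownership. This is a sketch that assumes the admin user and group exist and uses the example cluster name obdemo:

    [root@xxx admin]# chown -R admin:admin /data/1/obdemo /data/log1/obdemo /home/admin/oceanbase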
    

Step 3: Initialize the OceanBase cluster

Note

The IP addresses in the sample code are for reference only. Replace them with the actual server IP addresses during deployment.

  1. Start the observer process on the nodes.

    Start the observer process as the admin user on each node.

    Notice

    In a three-replica deployment, the startup parameters differ for each node. When you start the observer process, you only need to specify the three (or more) servers that run RootService in the -r option, rather than all servers in the cluster. You can add servers after the cluster is created.

    cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
    

    The parameters are described as follows:

    -I | -i
    • -I: The IP address of the node to start. In multi-node deployment, you cannot use 127.0.0.1 as the destination IP address. We recommend that you specify a real IP address, such as -I 10.10.10.1, to start a node.
    • -i: The NIC name. You can run the ifconfig command to view the NIC name.

    Note

    OceanBase Database allows you to start a node by specifying both the IP address and the NIC name, for example, -I 10.10.10.1 -i eth0. However, we recommend that you do not use this method.

    -p: The SQL service port number, which is usually set to 2881.
    -P: The RPC port number, which is usually set to 2882.
    -n: The name of the cluster. You can change it, but cluster names must be unique.
    -z: The zone to which the started observer process belongs.
    -d: The primary directory of the cluster, created during initialization. Do not modify any field other than $cluster_name.
    -c: The cluster ID. It is a group of digits and can be changed, but cluster IDs must be unique.
    -l: The log level.
    -r: The RootService list, in the format $ip:2882:2881. Separate multiple entries with semicolons (;).
    -o: Cluster startup parameters, specified as needed.
    • system_memory: the memory reserved for internal use by OceanBase Database, 30G by default. You can lower this value if server memory is insufficient; the trade-off is that memory may run short during performance tests.
    • datafile_size: the size of the SSTable data file of OceanBase Database, determined by the available space under /data/1/. Once set in this step, it does not need to be specified again. We recommend at least 100G, with some space held in reserve.
    • config_additional_dir: the redundancy directory for the parameter file.

    Here is an example:

    zone1:

    [root@xxx admin]# su - admin
    -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
    

    zone2:

    [root@xxx admin]# su - admin
    -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.2 -P 2882 -p 2881 -z zone2 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
    

    zone3:

    [root@xxx admin]# su - admin
    -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.3 -P 2882 -p 2881 -z zone3 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
    

    You can use the following commands to check whether the observer process has started successfully:

    • Run the netstat -ntlp command. If ports 2881 and 2882 are in the LISTEN state, the observer process has started.
    • Run the ps -ef|grep observer command to view information about the observer process.

    Here is an example:

    -bash-4.2$ netstat -ntlp
    (Not all processes could be identified, non-owned process info
    will not be shown, you would have to be root to see it all.)
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      11114/observer
    tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      11114/observer
    ...        ...    ...                       ...                     ...         ...
    
    -bash-4.2$ ps -ef|grep observer
    admin     11114      0 40 16:18 ?        00:00:17 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881;10.10.10.2:2882:2881;10.10.10.3:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
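
    If the process is not running or exits shortly after startup, you can inspect the observer log for errors. This assumes the default log directory under the installation path:

    -bash-4.2$ grep ERROR /home/admin/oceanbase/log/observer.log | tail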
    
  2. Perform the bootstrap operation on the cluster.

    Connect to any node by using the obclient command. The password is empty.

    [root@xxx admin]# obclient -h127.1 -uroot -P2881 -p
    Enter password:
    
    obclient> SET SESSION ob_query_timeout=1000000000;
    Query OK, 0 rows affected
    
    obclient> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882',ZONE 'zone2' SERVER '10.10.10.2:2882',ZONE 'zone3' SERVER '10.10.10.3:2882';
    Query OK, 0 rows affected
    

    Notice

    If an error is returned in this step, the possible causes include:

    • A startup parameter of the observer process is incorrect.
    • You do not have the required permissions on the related directories of the OBServer node.
    • The free disk space of the log directory is lower than required. This typically occurs when the log directory shares its upper-level directory with the data directory and the data directory has consumed the space.
    • The time on the nodes is out of synchronization.
    • The memory resources on the node are insufficient.

    Check for these issues. Then, clear the OceanBase Database directory and retry the deployment.
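
    Before you retry, you can quickly rule out the disk-space and clock issues on each node. This is a minimal sketch; the time-synchronization command depends on whether your servers run chronyd or ntpd:

    ## Check the free space of the data and log directories. ##
    -bash-4.2$ df -h /data/1 /data/log1
    ## Check clock synchronization (chrony shown here; use ntpstat for ntpd). ##
    -bash-4.2$ chronyc tracking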

  3. Verify that the cluster is initialized.

    After the bootstrap operation completes, execute the SHOW DATABASES statement. If oceanbase appears in the database list, the cluster is initialized.

    obclient> SHOW DATABASES;
    +--------------------+
    | Database           |
    +--------------------+
    | oceanbase          |
    | information_schema |
    | mysql              |
    | SYS                |
    | LBACSYS            |
    | ORAAUDITOR         |
    | test               |
    +--------------------+
    7 rows in set
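
    You can also confirm that all three nodes are active. This query assumes the OceanBase Database V4.x data dictionary view DBA_OB_SERVERS:

    obclient> SELECT SVR_IP, ZONE, STATUS FROM oceanbase.DBA_OB_SERVERS;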
    
  4. Change the password.

    By default, the password of the root user of the sys tenant is empty. After the initialization, change the password.

    obclient> ALTER USER root IDENTIFIED BY '******';
    Query OK, 0 rows affected
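
    Subsequent connections must provide the new password. For example, reconnect to the cluster and enter the new password at the prompt:

    [root@xxx admin]# obclient -h127.1 -uroot -P2881 -p
    Enter password: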
    

What to do next

After the cluster is created, you can create user tenants based on your business needs.

For more information about how to create a user tenant by using the CLI, see Create a tenant.
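
For reference, creating a tenant involves creating a resource unit, a resource pool, and then the tenant itself. The following is a minimal sketch only; the unit, pool, and tenant names and the resource sizes are illustrative, and you should follow Create a tenant for the complete procedure:

    obclient> CREATE RESOURCE UNIT unit1 MAX_CPU 2, MEMORY_SIZE '4G';
    obclient> CREATE RESOURCE POOL pool1 UNIT = 'unit1', UNIT_NUM = 1, ZONE_LIST = ('zone1','zone2','zone3');
    obclient> CREATE TENANT mysql_tenant RESOURCE_POOL_LIST = ('pool1') SET ob_tcp_invited_nodes = '%';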
