Deploy an OceanBase cluster with two replicas and an arbitration service using the CLI

2025-12-04 02:47:57  Updated

Starting from V4.1, OceanBase Database supports the arbitration service (AS). Compared with a conventional three-IDC deployment, this mode avoids the increase in response time (RT) that occurs when one of the two full-replica IDCs fails, reduces cross-region bandwidth usage, and keeps the cost of the third IDC low.

This topic guides you through deploying an OceanBase cluster with two OBServer servers and an AS using the CLI.

Note

  • Each OceanBase cluster can use only one AS.
  • AS can be deployed only in standalone mode.

Prerequisites

Before installing OceanBase Database, make sure that the following conditions are met:

Procedure

Step 1: Deploy an OceanBase cluster

  1. Install the RPM packages on the two OBServer servers.

    1. Install the OceanBase Database RPM package.

      Navigate to the directory where the OceanBase RPM package is stored:

      [root@xxx /]# cd $rpm_dir
      

      Install the OceanBase RPM package:

      [root@xxx $rpm_dir]# rpm -ivh $rpm_name [--prefix=/home/admin/oceanbase]
      

      Here, $rpm_dir indicates the directory where the RPM package is stored, and $rpm_name indicates the name of the RPM package.

      Note

      After installation, OceanBase Database will be installed in the /home/admin/oceanbase directory by default.

      Here is an example:

      [root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
      Preparing...                          ################################# [100%]
      Updating / installing...
          1:oceanbase-4.2.0.0-100000052023073################################# [100%]
      
    2. (Optional) Install OceanBase Client (OBClient).

      OceanBase Client (OBClient) is a dedicated CLI tool for OceanBase Database. With OBClient, you can connect to both MySQL and Oracle tenants of OceanBase Database. If you only need to connect to a MySQL tenant, you can also use the mysql client.

      Notice

      obclient V2.2.1 and earlier depends on libobclient, so you must install libobclient first. Contact the technical support team to obtain the RPM packages of OBClient and libobclient.

      Here is an example:

      [root@xxx /home/admin/rpm]# rpm -ivh obclient-2.2.1-20221122151945.el7.alios7.x86_64.rpm
      Preparing...                          ################################# [100%]
      Updating / installing...
         1:obclient-2.2.1-20221122151945.el7################################# [100%]
      

      Verify whether OBClient has been installed:

      [root@xxx /home/admin/rpm]# which obclient
      /usr/bin/obclient
      
  2. Configure the observer process directory.

    1. (Optional) Clear the directory.

      When you deploy OceanBase Database for the first time on a new server, you do not need to clear the directory.

      If either of the following conditions is met, clear the old OceanBase directories first:

      • You want a clean OceanBase environment before the installation.
      • A previous installation left files behind that would interfere with the next installation.
      [root@xxx /home/admin]# kill -9 `pidof observer`
      [root@xxx /home/admin]# rm -rf /data/1/$cluster_name 
      [root@xxx /home/admin]# rm -rf /data/log1/$cluster_name 
      [root@xxx /home/admin]# rm -rf /home/admin/oceanbase/store/$cluster_name 
      [root@xxx /home/admin]# rm -rf /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
      

      Here, $cluster_name indicates the name of the cluster.

      Here is an example:

      [root@xxx /home/admin]# kill -9 `pidof observer`
      [root@xxx /home/admin]# rm -rf /data/1/obdemo
      [root@xxx /home/admin]# rm -rf /data/log1/obdemo
      [root@xxx /home/admin]# rm -rf /home/admin/oceanbase/store/obdemo
      [root@xxx /home/admin]# rm -rf /home/admin/oceanbase/log/* /home/admin/oceanbase/etc/*config*
      
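      The cleanup commands above expand `$cluster_name` directly, so an empty variable would make `rm -rf /data/1/$cluster_name` delete `/data/1` itself. A minimal sketch of a guard against this; `safe_clean` is a hypothetical helper, not part of OceanBase tooling, and it only echoes the commands (dry run):

```shell
# Hedged safety wrapper for the cleanup above: an empty $cluster_name would
# turn "rm -rf /data/1/$cluster_name" into "rm -rf /data/1", so guard it.
# safe_clean is a hypothetical helper, not part of OceanBase tooling.
safe_clean() {
  local name="$1"
  if [ -z "$name" ]; then
    echo "cluster name is empty, refusing to clean" >&2
    return 1
  fi
  # echo = dry run; remove the echo to actually delete
  echo rm -rf "/data/1/$name" "/data/log1/$name" "/home/admin/oceanbase/store/$name"
}
safe_clean obdemo
```

      Running `safe_clean obdemo` prints the three rm commands; calling it with an empty argument aborts instead of deleting the parent directories.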
    2. Initialize the OceanBase directory.

      We recommend that you store the OceanBase Database data files on an independent disk and then create soft links from the software home directory to the data directories.

      Use the following statement to switch to the admin user:

      [root@xxx /home/admin]# su - admin
      

      As the admin user, run the following commands to create the required directories:

      mkdir -p /data/1/$cluster_name/{etc3,sstable,slog}
      
      mkdir -p /data/log1/$cluster_name/{clog,etc2}
      
      mkdir -p /home/admin/oceanbase/store/$cluster_name
      

      As the admin user, run the following commands to create a soft link:

      for t in {etc3,sstable,slog};do ln -s /data/1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
      
      for t in {clog,etc2};do ln -s /data/log1/$cluster_name/$t /home/admin/oceanbase/store/$cluster_name/$t; done
      

      Here, $cluster_name indicates the name of the cluster. The slog and data files must be on the same disk.

      Here is an example:

      [root@xxx /home/admin]# su - admin
      -bash-4.2$ mkdir -p /data/1/obdemo/{etc3,sstable,slog}
      -bash-4.2$ mkdir -p /data/log1/obdemo/{clog,etc2}
      -bash-4.2$ mkdir -p /home/admin/oceanbase/store/obdemo
      -bash-4.2$ for t in {etc3,sstable,slog};do ln -s /data/1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
      -bash-4.2$ for t in {clog,etc2};do ln -s /data/log1/obdemo/$t /home/admin/oceanbase/store/obdemo/$t; done
      

      Note

      The obdemo directory is created based on the cluster name and can be customized. It is used when you start the process.

      The verification result is as follows:

      -bash-4.2$ cd /home/admin/oceanbase
      -bash-4.2$ tree store/
      store/
      `-- obdemo
         |-- clog -> /data/log1/obdemo/clog
         |-- etc2 -> /data/log1/obdemo/etc2
         |-- etc3 -> /data/1/obdemo/etc3
         |-- slog -> /data/1/obdemo/slog
         `-- sstable -> /data/1/obdemo/sstable
      
      6 directories, 0 files
      
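      The mkdir and ln -s steps above can be rehearsed and verified in an isolated scratch directory before touching the real deployment. A sketch that mirrors the obdemo example and confirms every symlink in store/ resolves (the scratch path comes from mktemp; everything else follows the layout above):

```shell
# Sanity-check sketch: rebuild the example layout under a scratch directory
# and verify that every symlink in store/ resolves.
base=$(mktemp -d); cluster_name=obdemo
mkdir -p "$base/data/1/$cluster_name"/{etc3,sstable,slog}
mkdir -p "$base/data/log1/$cluster_name"/{clog,etc2}
mkdir -p "$base/store/$cluster_name"
for t in etc3 sstable slog; do
  ln -s "$base/data/1/$cluster_name/$t" "$base/store/$cluster_name/$t"
done
for t in clog etc2; do
  ln -s "$base/data/log1/$cluster_name/$t" "$base/store/$cluster_name/$t"
done
broken=0
for l in "$base/store/$cluster_name"/*; do
  [ -e "$l" ] || broken=$((broken+1))   # -e follows the link to its target
done
echo "broken links: $broken"
rm -rf "$base"
```

      A result of `broken links: 0` corresponds to the tree output shown above, where each entry under store/obdemo points at an existing directory.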
  3. Initialize the OceanBase cluster.

    Note

    The example IP address has been obfuscated and is for demonstration purposes only. You need to use your server's IP address when you deploy OceanBase Database.

    1. Start the observer process.

      Start the observer process on each node as the admin user.

      Notice

      The parameters for starting the observer process on the two replicas are not exactly the same. When you start the observer process, specify the two (or more) servers that host the root service. You do not need to specify all the servers when you create a cluster. After the cluster is created, you can add new servers.

      cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer {-I $ip | -i $devname} -P $rpc_port -p $sql_port -z $zone_name -d /home/admin/oceanbase/store/$cluster_name -r '$ip:2882:2881' -c $cluster_id -n $cluster_name -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/$cluster_name/etc3;/data/log1/$cluster_name/etc2"
      

      The parameters are described as follows:

      -I | -i
      • -I: the IP address of the server to start. In a multi-server deployment, 127.0.0.1 cannot be used as the target IP address. We recommend that you start the server with an explicit IP address (-I 10.10.10.1).
      • -i: the name of the network interface card (NIC). You can view the available NIC names by running the ifconfig command.

      Note

      You can also specify both the IP address and the NIC name (-I 10.10.10.1 -i eth0) to start the server, but we recommend that you do not do this.

      -p: the port of the SQL service. The default value is 2881.
      -P: the port of the RPC service. The default value is 2882.
      -n: the name of the cluster. Cluster names can be customized but must be unique.
      -z: the zone to which the observer process belongs.
      -d: the data directory of the cluster, that is, the directory initialized in the previous step. Apart from the cluster name $cluster_name, do not change any other part of this path.
      -c: the ID of the cluster. A cluster ID is a series of digits and can be customized, but different clusters must have different IDs.
      -l: the log level.
      -r: the RS list, that is, the list of Root Service servers. Entries are separated by semicolons, each in the form $ip:2882:2881.
      -o: the list of cluster startup parameters (configuration items). This parameter is optional. You can specify multiple configuration items, separated by commas. Choose appropriate startup parameters and values based on your actual needs to optimize cluster performance and resource utilization. The following are some commonly used configuration items:
      • cpu_count: the total number of CPUs in the system.
      • system_memory: the memory reserved for the internal tenant with ID 500, that is, the memory reserved by OceanBase Database itself. You can reduce this value when server memory is limited, but note that insufficient memory may cause issues during performance testing.
      • memory_limit: the total memory available to the observer process.
      • datafile_size: the disk space occupied by the data files, that is, the size of the OceanBase Database data file (sstable, initialized only once). Evaluate this value based on the available space in /data/1/; we recommend a value of no less than 100 GB.
      • datafile_disk_percentage: the percentage of total disk space occupied by the data files.
      • datafile_next: the step size for automatic expansion of the disk data files.
      • datafile_maxsize: the maximum size to which the disk data files can automatically expand.
      • config_additional_dir: additional directories in which local configuration files are stored redundantly.
      • log_disk_size: the size of the redo log disk.
      • log_disk_percentage: the percentage of total disk space occupied by redo logs.
      • syslog_level: the system log level.
      • syslog_io_bandwidth_limit: the maximum disk I/O bandwidth available to system logs. System logs that exceed this limit are discarded.
      • max_syslog_file_count: the maximum number of log files retained before log file recycling starts.
      • enable_syslog_recycle: whether old logs generated before an OBServer node starts are included in recycling. Together with max_syslog_file_count, it determines whether the recycling logic considers old log files.
      The configuration items datafile_size, datafile_disk_percentage, datafile_next, and datafile_maxsize work together to enable automatic expansion of disk data files. For more information, see Configuring Dynamic Expansion of Disk Data Files. For more cluster configuration information, see Overview of Configuration Items - Cluster-Level Configuration Items.
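      The memory-related values passed to -o can be sized from the server's physical resources. A minimal sketch, assuming a hypothetical 64 GB server and illustrative ratios (these are not official OceanBase recommendations; datafile_size should similarly be derived from the actual free space in /data/1/):

```shell
# Hypothetical sizing helper for the -o startup string. The 64 GB total and
# the 25%/80% ratios are illustrative assumptions, not official defaults.
MEM_TOTAL_G=64                                # assumed physical memory in GB
SYSTEM_MEMORY_G=$(( MEM_TOTAL_G / 4 ))        # reserve ~25% for tenant 500
MEMORY_LIMIT_G=$(( MEM_TOTAL_G * 80 / 100 ))  # cap the observer at ~80%
echo "system_memory=${SYSTEM_MEMORY_G}G,memory_limit=${MEMORY_LIMIT_G}G"
```

      The printed fragment can be spliced into the -o option; on a real server you would replace the hard-coded total with the output of a tool such as free.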

      Here is an example:

      Start the observer process on the OBServer servers. Take 10.10.10.1 and 10.10.10.2 as examples.

      zone1:

      [root@xxx admin]# su - admin
      -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
      

      zone2:

      [root@xxx admin]# su - admin
      -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -I 10.10.10.2 -P 2882 -p 2881 -z zone2 -d /home/admin/oceanbase/store/obdemo -r '10.10.10.1:2882:2881;10.10.10.2:2882:2881' -c 10001 -n obdemo -o "system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2"
      

      Run the following command to check whether the observer process has been started:

      • Run the netstat -ntlp command. If the process is running, ports 2881 and 2882 are shown in the LISTEN state.
      • Run the ps -ef|grep observer command to view information about the observer process.

      Here is an example:

      -bash-4.2$ netstat -ntlp
      (Not all processes could be identified, non-owned process info
      will not be shown, you would have to be root to see it all.)
      Active Internet connections (only servers)
      Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
      tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      11112/observer
      tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      11112/observer
      ...        ...    ...                       ...                     ...         ...
      
      -bash-4.2$ ps -ef|grep observer
      admin     11112      0 40 16:18 ?        00:00:17 /home/admin/oceanbase/bin/observer -I 10.10.10.1 -P 2882 -p 2881 -z zone1 -d /home/admin/oceanbase/store/obdemo -r 10.10.10.1:2882:2881;10.10.10.2:2882:2881 -c 10001 -n obdemo -o system_memory=30G,datafile_size=500G,config_additional_dir=/data/1/obdemo/etc3;/data/log1/obdemo/etc2
      
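      The port check above can be scripted when you bring up several servers. A sketch of a helper that scans netstat-style output for the expected LISTEN entries; `ports_listening` is a hypothetical function, shown here against a canned sample rather than a live server:

```shell
# Hypothetical helper: confirm the observer ports are in LISTEN state by
# scanning `netstat -ntlp` output. Ports 2881/2882 are the defaults above.
ports_listening() {            # usage: ports_listening "<netstat output>" port...
  local out="$1"; shift
  for p in "$@"; do
    echo "$out" | grep -Eq ":${p}[[:space:]].*LISTEN" || return 1
  done
}
# Canned sample standing in for `netstat -ntlp` on a live server:
sample='tcp 0 0 0.0.0.0:2881 0.0.0.0:* LISTEN 11112/observer
tcp 0 0 0.0.0.0:2882 0.0.0.0:* LISTEN 11112/observer'
ports_listening "$sample" 2881 2882 && echo "observer ports up"
```

      On a real server you would pipe the live output instead: `ports_listening "$(netstat -ntlp 2>/dev/null)" 2881 2882`.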
    2. Perform the cluster bootstrap operation.

      Use OBClient to connect to either of the servers. At this point, the password of the root user is empty, so no -p option is needed.

      [root@xxx admin]# obclient -h127.0.0.1 -uroot -P2881
      
      obclient [(none)]> SET SESSION ob_query_timeout=1000000000;
      Query OK, 0 rows affected
      
      obclient [(none)]> ALTER SYSTEM BOOTSTRAP ZONE 'zone1' SERVER '10.10.10.1:2882',ZONE 'zone2' SERVER '10.10.10.2:2882';
      Query OK, 0 rows affected
      

      Notice

      If an error occurs in this step, check whether the startup parameters of the observer process are correct, whether the directory permissions are correct, whether the log directory (which shares a disk with the data directory) has sufficient space, whether the server clocks are synchronized, and whether the server has sufficient memory. If the problem persists after these issues are resolved, clear the OceanBase directories and redeploy OceanBase Database from scratch.

    3. Verify whether the cluster has been initialized.

      After the cluster is initialized, execute the SHOW DATABASES; statement to verify whether the initialization is successful. If the query result shows that the oceanbase database is included in the database list, the cluster initialization is successful.

    4. Change the password.

      The password of the root user in the sys tenant is empty by default. After the initialization, you need to change the password.

      ALTER USER root IDENTIFIED BY '******';
      

Step 2: Deploy the arbitration service

  1. Install the OceanBase RPM package on the arbitration server.

    Switch to the directory where the OceanBase Database RPM package is stored:

    [root@xxx /]# cd $rpm_dir
    

    Install the OceanBase Database RPM package:

    [root@xxx $rpm_dir]# rpm -ivh $rpm_name [--prefix=/home/admin/oceanbase]
    

    Here, $rpm_dir represents the directory where the RPM package is stored, and $rpm_name represents the name of the RPM package.

    Note

    By default, OceanBase Database will be installed in the /home/admin/oceanbase directory.

    Here is an example:

    [root@xxx /home/admin/rpm]# rpm -ivh oceanbase-4.2.0.0-100000052023073123.el7.x86_64.rpm
    Preparing...                          ################################# [100%]
    Updating / installing...
       1:oceanbase-4.2.0.0-100000052023073################################# [100%]
    
  2. Configure the arbitration server process directory.

    Create a clog directory for the arbitration server process on the arbitration server.

    Note

    • When you start the arbitration server process, make sure that the clog directory has been created in the store directory specified with the -d option.
    • Do not exhaust the disk space of the disk where the store directory resides; otherwise, the arbitration service may not work properly.

    Switch to the admin user:

    [root@xxx /home/admin]# su - admin
    

    As the admin user, execute the following command to create the required directories:

    mkdir -p /home/admin/oceanbase/store/clog
    
  3. Start the arbitration server process.

    Start the arbitration server process in arbitration mode as the admin user on the arbitration server. The necessary parameters are specified in the startup script. The sample startup script is as follows:

    cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -m arbitration -P $rpc_port -d $obdir/store [{-I $ip | -i $devname}] [-l $log_level]  [-o "parameters_list"]
    

    The parameters are described as follows:

    -m: specifies arbitration mode. The value is arbitration.
    -P: the RPC port of the arbitration server process. The default value is 2882.
    -d: the storage directory path. The default value is /home/admin/oceanbase/store. You must create the clog subdirectory in this directory before you start the process.
    -I | -i: optional.
    • -I: the IP address of the node to start.
    • -i: the name of the NIC. You can run the ifconfig command to view NIC names.

    Note

    You can start a node by specifying both the IP address and the NIC name (for example, -I 10.10.10.1 -i eth0). However, we recommend that you do not specify both.

    -l: the log level. The default value is WDIAG. Valid values: DEBUG, TRACE, WDIAG, EDIAG, INFO, WARN, ERROR. For more information, see Log levels.
    -o: a list of configuration items. You can set multiple items, separated by commas.
    The following parameters are related to the arbitration server process:

    Notice

    To ensure that the log directory of an arbitration node is cleaned up automatically and the disk does not fill up, you must set enable_syslog_recycle to True and set a proper value for max_syslog_file_count. Otherwise, log files grow without bound and may exhaust the disk space.

    Here is an example:

    Start the arbitration server process on the arbitration server. For example, start the arbitration server process with 10.10.10.3 as the IP address.

    -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer -m arbitration -P 2882 -d /home/admin/oceanbase/store -I 10.10.10.3 -o "enable_syslog_recycle=True,max_syslog_file_count=100"
    

    You can use the following command to check whether the observer process has started successfully:

    • Run the netstat -ntlp command. If port 2882 is in the LISTEN state, the process has started successfully.
    • Run the ps -ef|grep observer command to view information about the observer process.

    Here is an example:

    -bash-4.2$ netstat -ntlp
    (Not all processes could be identified, non-owned process info
    will not be shown, you would have to be root to see it all.)
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      18231/observer
    ...
    
    -bash-4.2$ ps -ef|grep observer
    admin     18231      0  3 10:33 ?        00:00:00 /home/admin/oceanbase/bin/observer -m arbitration -P 2882 -d /home/admin/oceanbase/store -I 10.10.10.3
    ...
    

Step 3: Add an arbitration service to the cluster

Run the following command to add an arbitration service to the cluster:

ALTER SYSTEM ADD ARBITRATION SERVICE "$arb_server_ip:$arb_server_port";

Note

The address of the arbitration server process is recorded in an internal table in the sys tenant. You can query it from the DBA_OB_ARBITRATION_SERVICE view. This is a synchronous operation.

Here is an example:

obclient [(none)]> ALTER SYSTEM ADD ARBITRATION SERVICE "10.10.10.3:2882";
Query OK, 0 rows affected

obclient [(none)]> SELECT * FROM oceanbase.DBA_OB_ARBITRATION_SERVICE;
+---------------------+---------------------+-------------------------+---------------------+------------------------------+------+
| CREATE_TIME         | MODIFY_TIME         | ARBITRATION_SERVICE_KEY | ARBITRATION_SERVICE | PREVIOUS_ARBITRATION_SERVICE | TYPE |
+---------------------+---------------------+-------------------------+---------------------+------------------------------+------+
| 2023-03-06 17:29:36 | 2023-03-06 17:29:36 | default                 | 10.10.10.3:2882     | NULL                         | ADDR |
+---------------------+---------------------+-------------------------+---------------------+------------------------------+------+
1 row in set

Step 4: Enable the arbitration service for the sys tenant

You can enable the arbitration service for a tenant when you create the tenant. For more information, see CREATE TENANT.

If you do not enable the arbitration service when you create a tenant, you can enable it for the tenant after the tenant is created.

Run the following statement to enable the arbitration service for an existing tenant.

ALTER TENANT $tenant_name [SET] enable_arbitration_service = true;

Here is an example:

Enable the arbitration service for the sys tenant of a cluster.

obclient [(none)]> ALTER TENANT sys SET enable_arbitration_service = true;
Query OK, 0 rows affected

Step 5: Query the status of the arbitration service for a tenant

  1. Check whether the arbitration service is enabled for the tenant.

    You can query the DBA_OB_TENANTS view for information about the tenant and check the value of the arbitration_service_status column for the status of the arbitration service.

    The possible values of arbitration_service_status are as follows:

    • ENABLED: the arbitration service is enabled for the tenant.
    • ENABLING: the arbitration service is being enabled for the tenant.
    • DISABLED: the arbitration service is disabled for the tenant.
    • DISABLING: the arbitration service is being disabled for the tenant.

    Here is an example:

    obclient [(none)]> SELECT TENANT_ID,TENANT_NAME,ARBITRATION_SERVICE_STATUS FROM oceanbase.DBA_OB_TENANTS WHERE tenant_name = 'sys';
    +-----------+-------------+----------------------------+
    | TENANT_ID | TENANT_NAME | ARBITRATION_SERVICE_STATUS |
    +-----------+-------------+----------------------------+
    |         1 | sys         | ENABLED                    |
    +-----------+-------------+----------------------------+
    1 row in set
    
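    When you script this check, the transitional ENABLING and DISABLING states call for a retry rather than a failure. An illustrative sketch of that status handling; `arb_status_ok` is a hypothetical helper and its exit codes are arbitrary conventions, not OceanBase semantics:

```shell
# Illustrative handling of ARBITRATION_SERVICE_STATUS values from the query
# above. arb_status_ok is a hypothetical helper; exit codes are arbitrary.
arb_status_ok() {
  case "$1" in
    ENABLED)            return 0 ;;  # service fully enabled
    ENABLING|DISABLING) return 2 ;;  # transitional state, query again later
    *)                  return 1 ;;  # DISABLED or unexpected value
  esac
}
arb_status_ok ENABLED && echo "arbitration service is active"
```

    In practice you would feed the function the column value fetched with obclient, looping while it returns the transitional code.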
  2. Check whether the arbitration service is available for the tenant.

    If the arbitration service is enabled for a tenant, it is available for the log streams that exist at the time it is enabled. For log streams created afterward, however, availability is not guaranteed: log stream creation runs in non-strict mode and does not guarantee that the arbitration member is created. In this case, you can run the following SQL statement to check whether all log streams of the tenant have an arbitration member.

    -- Query the list of log streams without an arbitration replica in the tenant
    (SELECT DISTINCT ls_id FROM GV$OB_LOG_STAT WHERE tenant_id = xxx) EXCEPT
    (SELECT ls_id FROM GV$OB_LOG_STAT WHERE tenant_id = xxx AND role = 'LEADER' AND arbitration_member = "$arb_server_ip:$arb_server_port");
    

    Here is an example:

    Query the list of log streams without an arbitration replica in the sys tenant. If the query result set is an empty set, the arbitration service is available for the current tenant.

    obclient [oceanbase]> (SELECT distinct ls_id FROM GV$OB_LOG_STAT WHERE tenant_id = 1) except
    (SELECT ls_id FROM GV$OB_LOG_STAT WHERE tenant_id = 1 and role = 'LEADER' and arbitration_member = "10.10.10.3:2882");
    Empty set
    

Next steps

After the cluster is created, you can create user tenants as needed.

For more information about how to create a user tenant, see Create a tenant.
