This topic describes how to use OceanBase Deployer (obd) to deploy OceanBase Database on a single server. In standalone deployment, the OceanBase cluster contains only one zone that contains only one OBServer node.
Note
For more information about how to deploy an OceanBase cluster on the GUI of obd, see Deploy an OceanBase cluster on the GUI.
For more information about how to deploy a multi-node cluster by using obd, see Deploy OceanBase Database on the CLI in a production environment.
Terms
Central control server: the server that stores the installation package of OceanBase Database and the cluster configuration information.
Target server: the server that hosts the OceanBase cluster.
Prerequisites
Make sure that the following conditions are met:
You have installed obd (latest version recommended) on your server. For more information, see Install obd.
At least 2 vCPUs, 6 GB of memory, and 20 GB of disk space are available on the server for OceanBase Database alone.
You have installed OceanBase Command-Line Client (OBClient) on your server. For more information, see OBClient documentation.
Note
Installing the OceanBase All in One package automatically installs obd and OBClient. If you plan to download and install the OceanBase All in One package, you can skip the obd and OBClient installation prerequisites.
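Before you continue, you can roughly verify the resource minimums with standard Linux tools. The following is a sketch: the /home mount point is an example (use the filesystem that will hold home_path and data_dir), and the memory threshold is lowered to 5 GB because integer rounding makes a 6 GB server report 5 GB.

```shell
# Quick sanity check against the prerequisites above (a sketch; adjust the
# mount point if you plan to install OceanBase somewhere other than /home).
cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {print int($2/1048576)}' /proc/meminfo)   # rounds down
disk_gb=$(df -Pk /home | awk 'NR==2 {print int($4/1048576)}')      # free GB on /home
echo "vCPUs: ${cpus}, memory: ${mem_gb} GB, free disk: ${disk_gb} GB"
# Note: a 6 GB server may report 5 GB here due to rounding.
if [ "${cpus}" -ge 2 ] && [ "${mem_gb}" -ge 5 ] && [ "${disk_gb}" -ge 20 ]; then
  echo "resource check passed"
else
  echo "resource check failed"
fi
```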
Procedure
(Optional) Step 1: Download and install the OceanBase All in One package
OceanBase Database V4.0.0 and later provide the OceanBase All in One package. You can use this package to install obd, OceanBase Database, OceanBase Database Proxy (ODP), OceanBase Agent (OBAgent), Grafana, Prometheus, and OceanBase Cloud Platform (OCP) Express (supported in V4.1.0 and later) in a single operation.
You can download and install desired components of specified versions from OceanBase Download Center.
Note
To deploy OceanBase Database offline, we recommend that you download the OceanBase All in One package for deployment.
Online installation
If your server can connect to the Internet, run the following commands to install the components online:
[admin@test001 ~]$ bash -c "$(curl -s https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/download-center/opensource/oceanbase-all-in-one/installer.sh)"
[admin@test001 ~]$ source ~/.oceanbase-all-in-one/bin/env.sh
Offline installation
If your server cannot connect to the Internet, perform the following steps to install the components offline:
Download the latest all-in-one package from OceanBase Download Center and copy it to any directory on the central control server.
In the directory where the all-in-one package is located, run the following commands to decompress and install the package.
[admin@test001 ~]$ tar -xzf oceanbase-all-in-one-*.tar.gz
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./install.sh
[admin@test001 bin]$ source ~/.oceanbase-all-in-one/bin/env.sh
Step 2: Configure obd
Before you deploy the OceanBase cluster, we recommend that you switch to a non-root user for data security.
To deploy the OceanBase cluster offline, download and install the all-in-one package on the central control server as described in Step 1.
You can also download the installation package of the desired version for a component from OceanBase Download Center. Then, copy the package to any directory on the central control server and perform the following steps to configure obd.
Note
If you are deploying the OceanBase cluster online, skip steps 1 to 3.
Disable remote repositories.
[admin@test001 rpm]$ obd mirror disable remote

Note

After you install the all-in-one package, the remote repositories are automatically disabled. You can run the obd mirror list command for confirmation. If the values of the remote repositories in the Enabled column are changed to False, the remote image sources are disabled.

Add the installation packages to the local image repository.

[admin@test001 rpm]$ obd mirror clone *.rpm

View the list of installation packages in the local image repository.

[admin@test001 rpm]$ obd mirror list local

Select a configuration file.
If you have installed obd by downloading the RPM package for obd, you can view the sample configuration files in the /usr/obd/example directory. If you have installed obd by using the all-in-one package, you can view the sample configuration files in the ~/.oceanbase-all-in-one/obd/usr/obd/example directory. Select the corresponding configuration file based on your resource conditions.

The small-scale development mode applies to individual devices with at least 8 GB of memory.
- Sample configuration file for local standalone deployment: mini-local-example.yaml
- Sample configuration file for standalone deployment: mini-single-example.yaml
- Sample configuration file for standalone deployment with ODP: mini-single-with-obproxy-example.yaml
- Sample configuration file for distributed deployment with ODP: mini-distributed-with-obproxy-example.yaml
- Sample configuration file for distributed deployment with ODP and OCP Express: default-components-min.yaml
- Sample configuration file for distributed deployment with all components: all-components-min.yaml
The professional development mode applies to advanced Elastic Compute Service (ECS) instances or physical servers with at least 16 CPU cores and 64 GB of memory.
- Sample configuration file for local standalone deployment: local-example.yaml
- Sample configuration file for standalone deployment: single-example.yaml
- Sample configuration file for standalone deployment with ODP: single-with-obproxy-example.yaml
- Sample configuration file for distributed deployment with ODP: distributed-with-obproxy-example.yaml
- Sample configuration file for distributed deployment with ODP and OCP Express: default-components.yaml
- Sample configuration file for distributed deployment with all components: all-components.yaml
Modify the configuration file.
The following uses mini-single-example.yaml, a configuration file for standalone deployment in small-scale development mode, as an example to describe how to modify a configuration file.

Note
You must modify related parameters based on the actual environment.
Modify user information.
## Only need to configure when remote login is required
user:
  username: admin
  # password: your password if need
  key_file: /home/admin/.ssh/id_rsa
  # port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30

username specifies the username of the account used to log in to the target server. Make sure that this account has the write permission on home_path. password and key_file are used for user authentication. Generally, you need to specify only one of them.

Notice

After you specify the path of the key, comment out or delete the password parameter if your key does not require a password. Otherwise, the value of the password parameter will be taken as the password of the key and used for login, leading to a login verification failure.

Modify the IP address, port, and related directories of each server, and specify memory-related parameters and the password.
oceanbase-ce:
  servers:
    # Please don't use hostname, only IP can be supported
    - ip: 10.10.10.1
  global:
    # Please set devname as the network adaptor's name whose ip is in the setting of servers.
    # if set servers as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
    devname: eth0
    cluster_id: 1
    # please set memory limit to a suitable value which is matching resource.
    memory_limit: 6G # The maximum running memory for an observer
    system_memory: 1G # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
    datafile_size: 2G # Size of the data file.
    datafile_next: 2G # The auto extend step. Please enter a capacity, such as 2G
    datafile_maxsize: 20G # The auto extend max size. Please enter a capacity, such as 20G
    log_disk_size: 13G # The size of disk space used by the clog files.
    cpu_count: 16
    scenario: htap
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    production_mode: false
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/observer
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo
    root_password: ****** # root user password, can be empty
    proxyro_password: ****** # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
    zone: zone1

For more information about the parameters in configuration files, see Configuration files. Take note of the following considerations:
- If you do not specify the password in the configuration file, obd automatically generates a random password. After the deployment is completed, you can run the obd cluster edit-config command to view the password in the configuration file.
- If you do not specify the scenario option in the configuration file when you deploy OceanBase Database V4.3.0 or later, obd provides interactive options for you to select a load type.
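If you prefer to set the load type explicitly rather than choose it interactively, you can set scenario in the global section of the configuration file. As an illustrative fragment (the value names below are the ones documented for V4.3; verify them against the documentation for your version):

```yaml
oceanbase-ce:
  global:
    # Load type for the cluster; htap is what the sample above uses.
    # Other documented values include express_oltp, complex_oltp, olap, and kv.
    scenario: htap
```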
Step 3: Deploy OceanBase Database
Note
For more information about the commands used in this section, see Cluster commands.
Deploy OceanBase Database.
[admin@test001 ~]$ obd cluster deploy obtest -c mini-single-example.yaml

After you run the obd cluster deploy command, if your server is connected to the Internet, obd checks whether the desired installation package exists in the local image repository. If not, obd automatically obtains the installation package from the YUM repository.

This command also checks whether the directories specified by home_path and data_dir are empty, and returns an error if they are not. If all the content in these directories can be deleted, you can add the -f option to forcibly purge the directories.

Start OceanBase Database.
[admin@test001 ~]$ obd cluster start obtest

View the status of the OceanBase cluster.

# View the list of clusters managed by obd.
[admin@test001 ~]$ obd cluster list
# View the status of the obtest cluster.
[admin@test001 ~]$ obd cluster display obtest

(Optional) Modify the cluster configurations.
OceanBase Database has hundreds of parameters and some are coupled. We recommend that you do not modify parameters in the sample configuration file before you become familiar with OceanBase Database. The following example shows you how to modify a parameter and make it take effect:
# Run the edit-config command to enter the edit mode before you can edit the cluster configurations.
# After you modify and save the configurations and exit, obd will prompt how to validate the modifications. Copy the command provided by obd.
[admin@test001 ~]$ obd cluster edit-config obtest
# The output after you save the modifications is as follows:
Search param plugin and load ok
Search param plugin and load ok
Parameter check ok
Save deploy "obtest" configuration
Use `obd cluster reload obtest` to make changes take effect.
[admin@test001 ~]$ obd cluster reload obtest
Step 4: Connect to OceanBase Database
Run the following command to connect to OceanBase Database by using OBClient:
obclient -h<IP> -P<PORT> -uroot@sys -p
IP specifies the IP address of the OBServer node. PORT specifies the port for connecting to OceanBase Database. For a direct connection, this is the value of the mysql_port parameter, which defaults to 2881. If you modified the port, use the configured value instead.
Note
After the OceanBase cluster is deployed, we recommend that you create a business tenant to perform business operations. The sys tenant is intended only for cluster management and is unsuitable for business scenarios. For more information about how to create a tenant, see Create a tenant.
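As a rough sketch of what creating a MySQL-mode business tenant involves (the unit, pool, and tenant names and sizes below are illustrative; see Create a tenant for the authoritative syntax and options), you would run statements like the following in the sys tenant:

```sql
-- Illustrative only: names and resource sizes are examples, not recommendations.
-- Run in the sys tenant, e.g. after: obclient -h<IP> -P2881 -uroot@sys -p
CREATE RESOURCE UNIT small_unit MAX_CPU = 1, MEMORY_SIZE = '2G';
CREATE RESOURCE POOL small_pool UNIT = 'small_unit', UNIT_NUM = 1, ZONE_LIST = ('zone1');
CREATE TENANT test_tenant RESOURCE_POOL_LIST = ('small_pool'), PRIMARY_ZONE = 'zone1'
  SET ob_tcp_invited_nodes = '%';
```

After the tenant is created, connect to it with obclient -h&lt;IP&gt; -P2881 -uroot@test_tenant -p and create your business databases there instead of in sys.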
What to do next
You can run the following commands to manage a cluster deployed by using obd. For more information, see Cluster commands.
# View the cluster list.
obd cluster list
# View the status of a cluster. The following takes the obtest cluster as an example.
obd cluster display obtest
# Stop a running cluster. The following takes the obtest cluster as an example.
obd cluster stop obtest
# Destroy a deployed cluster. The following takes the obtest cluster as an example.
obd cluster destroy obtest