This topic describes how to deploy OceanBase Database in a production environment by using obd on the command line.
Note
obd allows you to deploy OceanBase Database in a production environment by using commands. It also allows you to deploy an OceanBase cluster on a GUI. We recommend that you deploy an OceanBase cluster on the GUI of obd. For more information, see Deploy an OceanBase cluster on the GUI of obd.
Terms
Central control server
The server where obd is installed. The central control server manages the OceanBase cluster and stores the installation packages and configuration information of the OceanBase cluster.
Target server
The server where OceanBase Database is installed.
obd
OceanBase Deployer (obd for short) is a tool for installing and deploying OceanBase Database. For more information, see the obd documentation.
ODP
OceanBase Database Proxy (ODP) is a proxy service dedicated to OceanBase Database. For more information, see the ODP documentation.
OCP
OceanBase Cloud Platform (OCP) is a management platform for OceanBase clusters. For more information, see the OCP documentation.
OCP Express
OCP Express is a web-based management tool for OceanBase Database 4.x. Integrated with an OceanBase cluster, OCP Express allows you to view key performance metrics of the cluster and perform basic database management operations on the cluster. For more information, see OceanBase Cloud Platform Express (OCP Express).
Prerequisites
Before you deploy OceanBase Database, make sure that:
Your server meets the software and hardware requirements. For more information, see Software and hardware requirements.
In a production environment, you have performed environment and configuration checks. For more information, see Configure before deployment.
You have installed obd on the central control server. We recommend that you install the latest version. For more information, see Install and configure obd.
You have installed the OBClient on your local machine. For more information, see OBClient documentation.
Deployment modes
This topic describes how to deploy a three-replica OceanBase cluster. You can deploy the components of OceanBase Database on the servers as shown in the following table:
| Role | IP address | Description |
|---|---|---|
| obd | 10.10.10.4 | The automatic deployment software. |
| OBServer node | 10.10.10.1 | The server in Zone1 of OceanBase Database. |
| OBServer node | 10.10.10.2 | The server in Zone2 of OceanBase Database. |
| OBServer node | 10.10.10.3 | The server in Zone3 of OceanBase Database. |
| OBAgent | 10.10.10.1, 10.10.10.2, and 10.10.10.3 | The monitoring and data collection framework for OceanBase Database. |
| ODP | 10.10.10.4 | The reverse proxy software for OceanBase Database. |
| OCP Express | 10.10.10.4 | The web-based management tool for OceanBase Database 4.x. |
Note
In a production environment, we recommend that you deploy ODP on the same server as your application to reduce the latency when the application accesses ODP. You can deploy an ODP service on each server that hosts an application. For ease of explanation, ODP is deployed on a separate server in this example.
The server on which ODP is deployed can have a different configuration from that on which OceanBase Database is deployed. ODP can be deployed with as little as 1 CPU core and 1 GB of memory.
For information about how to deploy ODP on a single server, see Deploy OceanBase Database on a single server.
Procedure
Notice
The following example is based on the x86 architecture and a CentOS Linux 7.9 image. The procedure may vary slightly depending on your environment.
Before you deploy the OceanBase cluster, we recommend that you switch to a non-root user to ensure data security.
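If a suitable non-root user does not exist yet, the following is a minimal sketch for creating one; the commands assume root privileges and the admin user name used throughout this topic.
```shell
# Run as root on the central control server and on each target server:
# create the admin user and set its password.
useradd -m admin
passwd admin
```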
Step 1: (Optional) Download and install the all-in-one package
Starting from V4.0.0, OceanBase Database provides a unified all-in-one package. You can use this package to install components such as obd, OceanBase Database, ODP, and OBAgent.
You can also choose to download components from OceanBase Download Center based on your needs or specify the versions of the components.
Install obd online
If your server can connect to the Internet, run the following command to install components online.
```shell
[admin@test001 ~]$ bash -c "$(curl -s https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/download-center/opensource/oceanbase-all-in-one/installer.sh)"
[admin@test001 ~]$ source ~/.oceanbase-all-in-one/bin/env.sh
```
Install obd offline
If your server cannot connect to the Internet, follow these steps to install components offline.
Download the latest all-in-one package from OceanBase Download Center and copy it to any directory on the central control server.
In the directory where the package is located, run the following command to decompress and install the package.
```shell
[admin@test001 ~]$ tar -xzf oceanbase-all-in-one-*.tar.gz
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./install.sh
[admin@test001 bin]$ source ~/.oceanbase-all-in-one/bin/env.sh
```
Note
You can run the `which obd` and `which obclient` commands to check whether the components are installed. If the commands return the paths of obd and obclient in the .oceanbase-all-in-one directory, the installation is successful.
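For example, after the environment variables are loaded, a quick check might look like this; the printed paths depend on your installation directory and are not shown here.
```shell
# Both commands should print paths under the .oceanbase-all-in-one directory if the installation succeeded.
[admin@test001 ~]$ which obd
[admin@test001 ~]$ which obclient
```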
Step 2: Configure obd
Note
The following instructions are based on obd V2.8.0. The configuration files may vary depending on the obd version you are using. For more information about obd, see obd documentation.
If you deploy the OceanBase cluster offline, you can refer to Step 1 to download and install the all-in-one package on the central control server. The components in the all-in-one package have been tested for compatibility and are recommended.
If you deploy the OceanBase cluster offline and have specific requirements for the versions of the components, you can download the installation package of the required component version from OceanBase Download Center, copy the installation package to any directory on the central control server, and upload it to the local image repository of obd in that directory.
Note
If you deploy the OceanBase cluster online or by using the all-in-one package, skip steps 1 to 3.
Disable the remote repository.
```shell
[admin@test001 rpm]$ obd mirror disable remote
```
Note
By default, the remote repository is disabled after you install the all-in-one package. You can run the `obd mirror list` command to confirm this. If the Enabled value of Type=remote is False, the remote image source is disabled.
Add the package to the local image repository.
```shell
[admin@test001 rpm]$ obd mirror clone *.rpm
```
View the list of packages in the local image repository.
```shell
[admin@test001 rpm]$ obd mirror list local
```
Select a configuration file.
If obd was installed by direct download on your local device, you can view sample configuration files in the `/usr/obd/example` directory.
If obd was installed by decompressing the all-in-one package, you can view sample configuration files in the `~/.oceanbase-all-in-one/conf` directory. Select a configuration file based on your resource conditions.
Note
Configuration files are divided into mini and professional development modes. The two modes have basically the same parameters, with slight differences in specifications. You can select a configuration file based on your resource conditions.
Mini development mode: Suitable for personal devices with at least 8 GB of memory. The name of the configuration file contains the `mini` or `min` identifier.
Professional development mode: Suitable for high-configuration ECS instances or physical servers with at least 16 CPU cores and 64 GB of memory.
If you plan to deploy OceanBase on a single server, you can use the configuration file for single-server deployment.
Local single-server deployment sample configuration files: mini-local-example.yaml and local-example.yaml
Single-server deployment sample configuration files: mini-single-example.yaml and single-example.yaml
Single-server deployment with ODP sample configuration files: mini-single-with-obproxy-example.yaml and single-with-obproxy-example.yaml
If you plan to deploy OceanBase in a distributed manner across multiple servers, you can use the configuration file for distributed deployment.
Distributed deployment sample configuration files: mini-distributed-example.yaml and distributed-example.yaml
Distributed deployment with ODP sample configuration files: mini-distributed-with-obproxy-example.yaml and distributed-with-obproxy-example.yaml
Distributed deployment with ODP and OCP Express sample configuration files: default-components.yaml and default-components-min.yaml
Complete distributed deployment sample configuration files: all-components-min.yaml and all-components.yaml
Note
If you plan to deploy OCP Express or have deployed OCP, you do not need to deploy Prometheus and Grafana.
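For example, assuming you installed the all-in-one package and chose the distributed configuration with ODP and OCP Express used in this topic, you can copy the sample file to a working directory before editing it:
```shell
[admin@test001 ~]$ cp ~/.oceanbase-all-in-one/conf/default-components.yaml ./default-components.yaml
[admin@test001 ~]$ vim ./default-components.yaml
```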
Modify the configuration file.
The following example describes how to modify the default-components.yaml file to deploy a distributed OceanBase cluster.
Note
You need to modify the following parameters based on your environment.
Modify the username and password.
```yaml
## Only need to configure when remote login is required
user:
  username: admin
  # password: your password if need
  key_file: /home/admin/.ssh/id_rsa
  # port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
```
`username` specifies the username for logging in to the target server. Make sure that you have the write permission for `home_path`. `password` and `key_file` are used to verify the user. Generally, you only need to fill in one of them.
Notice
After you configure the key path, if the key does not require a password, comment out or delete `password` to prevent it from being used as the login password, which would cause verification failure.
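If you use `key_file` for login verification, the following is a minimal sketch of setting up passwordless SSH from the central control server to the target servers; it assumes the admin user and the IP addresses used in this topic.
```shell
# Generate an SSH key pair (skip if one already exists) and distribute the public key to the target servers.
ssh-keygen -t rsa -f ~/.ssh/id_rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub admin@10.10.10.1
ssh-copy-id -i ~/.ssh/id_rsa.pub admin@10.10.10.2
ssh-copy-id -i ~/.ssh/id_rsa.pub admin@10.10.10.3
```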
Modify the IP address, ports, and related directories, and configure the memory and password parameters.
```yaml
oceanbase-ce:
  servers:
    # Please don't use hostname, only IP can be supported
    - name: server1
      ip: 10.10.10.1
    - name: server2
      ip: 10.10.10.2
    - name: server3
      ip: 10.10.10.3
  global:
    devname: eth0
    cluster_id: 1
    # please set memory limit to a suitable value which is matching resource.
    memory_limit: 64G # The maximum running memory for an observer
    system_memory: 30G # The reserved system memory. system_memory is reserved for general tenants.
    datafile_size: 192G # Size of the data file.
    datafile_next: 200G
    datafile_maxsize: 1T
    log_disk_size: 192G # The size of disk space used by the clog files.
    enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
    max_syslog_file_count: 4 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
    # observer cluster name, consistent with obproxy's cluster_name
    appname: obdemo
    ocp_meta_tenant: # The config for ocp express meta tenant
      tenant_name: ocp
      max_cpu: 1
      memory_size: 2G
      log_disk_size: 7680M
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    obshell_port: 2886 # Operation and maintenance port for OceanBase Database.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/observer
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo
    root_password: ****** # root user password, can be empty
    proxyro_password: ******** # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  server1:
    zone: zone1
  server2:
    zone: zone2
  server3:
    zone: zone3
```
If the configurations of the servers differ, you can move the related configurations under the corresponding servers. Configurations under a server take precedence over those under `global`. For example, the following example shows that the ports of two servers are different:
```yaml
server2:
  mysql_port: 3881
  rpc_port: 3882
  zone: zone2
server3:
  mysql_port: 2881
  rpc_port: 2882
  zone: zone3
```

| Parameter | Required | Default value | Description |
|---|---|---|---|
| servers | Required | N/A | Specify each server by using `- name: <server name>` followed by `ip: <server IP>` on the next line. You can specify multiple servers in this way, and the server names must be unique. If the server IPs are unique, you can also specify servers by using `- <ip>` entries. In this case, `- <ip>` is equivalent to `- name: <server IP>` followed by `ip: <server IP>`. |
| devname | Optional | N/A | The network interface controller (NIC) corresponding to the IP address specified in servers. You can run the `ip addr` command to view the correspondence between the IP address and the NIC. |
| memory_limit | Optional | 0 | The maximum memory that the observer process can obtain from the environment. If this parameter is not configured, the value of `memory_limit_percentage` takes effect. For more information, see memory_limit and memory_limit_percentage. |
| system_memory | Optional | 0M | The reserved system memory. The value of this parameter consumes the memory specified by `memory_limit`. If this parameter is not configured, OceanBase Database adaptively uses the memory. |
| datafile_size | Optional | 0 | The size of the data file (block_file) on the corresponding node. If this parameter is not configured, the value of `datafile_disk_percentage` takes effect. For more information, see datafile_size and datafile_disk_percentage. |
| datafile_next | Optional | 0 | The growth step of disk space, used to set auto expansion. If this parameter is not configured, you can refer to Configure dynamic expansion for disk data files to enable auto expansion. |
| datafile_maxsize | Optional | 0 | The upper limit of available disk space, used to set auto expansion. If this parameter is not configured, you can refer to Configure dynamic expansion for disk data files to enable auto expansion. |
| log_disk_size | Optional | 0 | The size of the redo log disk. If this parameter is not configured, the value of `log_disk_percentage` takes effect. For more information, see log_disk_size and log_disk_percentage. |
| enable_syslog_wf | Optional | true | Specifies whether to print system logs of the WARN level and above to a separate log file. |
| max_syslog_file_count | Optional | 0 | The maximum number of log files that can be retained before the log files are recycled. A value of 0 indicates that automatic cleanup is not performed. |
| appname | Optional | obcluster | The name of the OceanBase cluster. |
| ocp_meta_tenant | Optional | tenant_name: ocp<br>max_cpu: 1<br>memory_size: 2147483648 | The specifications of the meta tenant required for deploying OCP Express. The parameters under ocp_meta_tenant are passed as the parameters for creating the tenant. The example lists only some important parameters. For information about other parameters, see the parameters supported by the tenant creation command in Cluster commands and the description of the `obd cluster tenant create` command. |
| mysql_port | Required | 2881 | The port number of the SQL service. The default value is 2881. |
| rpc_port | Required | 2882 | The port number for remote access. It is the RPC communication port between the observer process and other node processes. The default value is 2882. |
| obshell_port | Required | 2886 | The port number for OceanBase Database maintenance. The default value is 2886. |
| home_path | Required | N/A | The installation path of OceanBase Database. We recommend that you specify this path under the admin user. |
| data_dir | Optional | $home_path/store | The directory for storing data files such as SSTables. We recommend that you specify an independent disk for this directory. |
| redo_dir | Optional | The same as the value of data_dir | The directory for clogs. We recommend that you specify an independent disk for this directory. |
| root_password | Optional | Empty by default in versions earlier than V2.1.0<br>A random string by default in V2.1.0 and later | The password of the root@sys user, which is the super administrator of the OceanBase cluster. We recommend that you set a complex password. If you use obd V2.1.0 or later, a random string is automatically generated if you do not set this parameter. |
| proxyro_password | Optional | Empty by default in versions earlier than V2.1.0<br>A random string by default in V2.1.0 and later | The password of the proxyro@sys user, which is the account used by ODP to connect to the OceanBase cluster. We recommend that you set a complex password. If you use obd V2.1.0 or later, a random string is automatically generated if you do not set this parameter. |

Configure the obproxy-ce component, and modify the IP address and `home_path`.
```yaml
obproxy-ce:
  depends:
    - oceanbase-ce
  servers:
    - 10.10.10.4
  global:
    listen_port: 2883 # External port. The default value is 2883.
    prometheus_listen_port: 2884 # The Prometheus port. The default value is 2884.
    home_path: /home/admin/obproxy
    enable_cluster_checkout: false
    skip_proxy_sys_private_check: true
    enable_strict_kernel_release: false
    obproxy_sys_password: ****** # obproxy sys user password, can be empty. When a depends exists, obd gets this value from the oceanbase-ce of the depends.
    observer_sys_password: ***** # proxyro user password, consistent with oceanbase-ce's proxyro_password, can be empty. When a depends exists, obd gets this value from the oceanbase-ce of the depends.
```

| Parameter | Required | Default value | Description |
|---|---|---|---|
| servers | Required | N/A | Specify each server by using `- name: <server name>` followed by `ip: <server IP>` on the next line. You can specify multiple servers in this way, and the server names must be unique. If the server IPs are unique, you can also specify servers by using `- <ip>` entries. In this case, `- <ip>` is equivalent to `- name: <server IP>` followed by `ip: <server IP>`. |
| listen_port | Required | 2883 | The listening port of ODP. The default value is 2883. |
| prometheus_listen_port | Required | 2884 | The Prometheus listening port of ODP. The default value is 2884. |
| home_path | Required | N/A | The installation path of ODP. We recommend that you specify this path under the admin user. |
| obproxy_sys_password | Optional | Empty by default in versions earlier than V2.1.0<br>A random string by default in V2.1.0 and later | The password of the root@proxysys user, which is the administrator of ODP. We recommend that you set a complex password. If you use obd V2.1.0 or later, a random string is automatically generated if you do not set this parameter. |
| observer_sys_password | Optional | Empty by default in versions earlier than V2.1.0<br>A random string by default in V2.1.0 and later | The password of the proxyro@sys user, which is the account used by ODP to connect to the OceanBase cluster. The value of this parameter must be the same as that of proxyro_password. We recommend that you set a complex password. If you use obd V2.1.0 or later, a random string is automatically generated if you do not set this parameter. |

Modify the IP address and `home_path` of the obagent and ocp-express components.
Note
Starting from V2.0.0, obd supports the deployment of OCP Express. obd V2.0.0 can only deploy OCP Express V1.0.0, and OCP Express V1.0.1 and later can only be deployed by using obd V2.1.0 or later.
```yaml
obagent:
  depends:
    - oceanbase-ce
  servers:
    # Please don't use hostname, only IP can be supported
    - name: server1
      ip: 10.10.10.1
    - name: server2
      ip: 10.10.10.2
    - name: server3
      ip: 10.10.10.3
  global:
    home_path: /home/admin/obagent
ocp-express:
  depends:
    - oceanbase-ce
    - obproxy-ce
    - obagent
  servers:
    - name: server1
      ip: 10.10.10.4
  global:
    # The working directory for OCP Express. OCP Express is started under this directory. This is a required field.
    home_path: /home/oceanbase/ocp-server
    memory_size: 1G # The memory size of the ocp-express server. The recommended value is 512MB + (expected node num + expected tenant num) * 60MB.
    admin_passwd: ********
    logging_file_total_size_cap: 10GB # The total log file size of the ocp-express server
    # logging_file_max_history: 1 # The maximum number of days to retain archived log files. The default value is unlimited.
```

| Parameter | Required | Default value | Description |
|---|---|---|---|
| servers | Required | N/A | Specify each server by using `- name: <server name>` followed by `ip: <server IP>` on the next line. You can specify multiple servers in this way, and the server names must be unique. If the server IPs are unique, you can also specify servers by using `- <ip>` entries. In this case, `- <ip>` is equivalent to `- name: <server IP>` followed by `ip: <server IP>`. |
| home_path | Required | N/A | The installation path of the component. We recommend that you specify this path under the admin user. |
| memory_size | Required | N/A | The memory size of the OCP Express server. We recommend that you use the following formula to calculate the memory size: memory_size = 512 MB + (number of expected nodes × number of expected tenants) × 60 MB. The number of expected tenants must include the sys tenant and the OCP meta tenant. |
| logging_file_total_size_cap | Required | 1 GB | The total size of log files. The default value is 1 GB. Note: The unit of this parameter must be GB or MB. If you use G or M as the unit, an error occurs and OCP Express cannot be deployed. |
| admin_passwd | Required | N/A | The password of the admin account on the OCP Express login page. The password must be 8 to 32 characters in length and must contain at least two digits, two uppercase letters, two lowercase letters, and two special characters (~!@#%^&*_-+=(){}[]:;,.?/). If you use obd V2.1.0 or later, a random string is automatically generated if you do not set this parameter. |
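As a worked illustration of the memory_size formula (an assumption-based example, not output from the tool): for the three OBServer nodes in this topic and three expected tenants (the sys tenant, the OCP meta tenant, and one user tenant), memory_size = 512 MB + (3 × 3) × 60 MB = 1052 MB, so a value of about 1 GB, as used in the sample configuration above, is a reasonable starting point.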
Step 3: Deploy an OceanBase cluster
Note
For more information about the commands used in this section, see Cluster commands in the obd documentation.
Deploy the OceanBase cluster.
```shell
[admin@test001 ~]$ obd cluster deploy obtest -c default-components.yaml
```
If the server can connect to the Internet, obd checks whether the required installation packages are available when you run the `obd cluster deploy` command. If not, obd automatically obtains the installation packages from the YUM source.
Start the OceanBase cluster.
```shell
[admin@test001 ~]$ obd cluster start obtest
```
If OCP Express is installed, its access address is displayed in the output. On Alibaba Cloud or in other cloud environments, the installer may output an intranet IP address because it cannot obtain a public IP address. Make sure that you use the correct public IP address.
View the status of the OceanBase cluster.
```shell
# View the list of clusters managed by obd
[admin@test001 ~]$ obd cluster list

# View the status of the obtest cluster
[admin@test001 ~]$ obd cluster display obtest
```
(Optional) Modify the cluster settings.
OceanBase Database has hundreds of parameters, and some parameters are coupled. We recommend that you do not modify the parameters in the sample configuration file before you become familiar with OceanBase Database. The following example shows how to modify a parameter and make the modification take effect.
Run the `obd cluster edit-config` command to enter the parameter editing mode, and modify the cluster parameters.
```shell
[admin@test001 ~]$ obd cluster edit-config obtest
```
After you modify the parameters, save them, and exit, obd prompts you how to make the modification take effect. The output is similar to the following:
```shell
Search param plugin and load ok
Search param plugin and load ok
Parameter check ok
Save deploy "obtest" configuration
Use `obd cluster reload obtest` to make changes take effect.
```
Execute the command that is output by obd.
```shell
[admin@test001 ~]$ obd cluster reload obtest
```
Step 4: Connect to the OceanBase cluster
In this example, OBClient is used to connect to the OceanBase cluster:
```shell
obclient -h<IP> -P<PORT> -u<user_name>@<tenant_name>#<cluster_name> -p -c -A

# example
obclient -h10.10.10.4 -P2883 -uroot@sys#obdemo -p -c -A
```
The parameters are described as follows:
-h: specifies the IP address for connecting to the OceanBase cluster. If you connect to the cluster directly, use the IP address of an OBServer node. If you connect to the cluster through an ODP, use the IP address of the ODP.
-u: specifies the account for connecting to the OceanBase cluster. The account format is one of the following:
`username@tenant name#cluster name`, `cluster name:tenant name:username`, `cluster name-tenant name-username`, or `cluster name.tenant name.username`. The default username of the administrator of a MySQL tenant is `root`.
Notice
The cluster name used for connection is the `appname` value in the configuration file, rather than the deploy name.
-P: specifies the port for connecting to the OceanBase cluster. If you connect to the cluster directly, use the value of `mysql_port`. If you connect to the cluster through an ODP, use the value of `listen_port`.
-p: specifies the password for connecting to the OceanBase cluster.
-c: specifies not to ignore comments in the OBClient runtime environment.
Note
Hint, which is a special comment, is not affected by the -c option.
-A: specifies not to automatically obtain the statistics information when you use OBClient to connect to the database.
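For example, to connect directly to an OBServer node instead of going through ODP, you can use the IP address of one node and the `mysql_port` value from the configuration file. This is a hypothetical variant of the command above; when you connect directly, the cluster name is not required in the -u option:
```shell
obclient -h10.10.10.1 -P2881 -uroot@sys -p -c -A
```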
For more information about how to connect to an OceanBase cluster, see Connect to OceanBase Database.
Step 5: Create a user tenant
After you deploy an OceanBase cluster, we recommend that you create a user tenant for business operations. The sys tenant is intended only for cluster management and is not suitable for business scenarios.
You can create a user tenant by using one of the following methods:
Method 1: Use obd to create a user tenant.
```shell
obd cluster tenant create <deploy name> [-n <tenant name>] [flags]

# example
obd cluster tenant create obtest -n obmysql --max-cpu=2 --memory-size=2G --log-disk-size=3G --max-iops=10000 --iops-weight=2 --unit-num=1 --charset=utf8
```
By default, this command creates a tenant based on all the remaining resources of the cluster. You can specify the parameters to control the resources used by the tenant. For more information about this command, see the obd cluster tenant create topic in the obd documentation.
Method 2: Create a user tenant on the command line. For more information, see Create a tenant.
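As an illustration of Method 2, the following is a minimal sketch of creating a resource unit, a resource pool, and a MySQL-mode tenant from the sys tenant by using OBClient. The names (unit1, pool1, obmysql) and specifications are assumptions for this example only; adjust them to your environment and refer to Create a tenant for the authoritative syntax.
```shell
# Connect to the sys tenant through ODP and run the tenant creation statements.
obclient -h10.10.10.4 -P2883 -uroot@sys#obdemo -p -c -A -e "
CREATE RESOURCE UNIT unit1 MAX_CPU = 2, MEMORY_SIZE = '2G', LOG_DISK_SIZE = '6G';
CREATE RESOURCE POOL pool1 UNIT = 'unit1', UNIT_NUM = 1, ZONE_LIST = ('zone1','zone2','zone3');
CREATE TENANT obmysql RESOURCE_POOL_LIST = ('pool1') SET ob_tcp_invited_nodes = '%';"
```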
References
Manage clusters
You can run the following commands to manage a cluster deployed by using obd. For more information, see Cluster commands.
```shell
# View the list of clusters
obd cluster list

# View the status of a cluster. Here is an example of viewing the status of the obtest cluster:
obd cluster display obtest

# Stop a running cluster. Here is an example of stopping the obtest cluster:
obd cluster stop obtest

# Destroy a deployed cluster. Here is an example of destroying the obtest cluster:
obd cluster destroy obtest
```
Uninstall all-in-one
You can follow the steps below to uninstall the installed all-in-one package.
Uninstall the installation package.
```shell
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./uninstall.sh
```
Delete the environment variables.
```shell
[admin@test001 bin]$ vim ~/.bash_profile
# Delete "source /home/admin/.oceanbase-all-in-one/bin/env.sh" from the file.
```
Apply the changes.
After you delete the environment variables, you must log in to the terminal again or run the following command.
```shell
[admin@test001 ~]$ source ~/.bash_profile
```