This topic describes how to deploy OceanBase Database in a production environment by using the CLI of obd.
Note
obd supports deploying OceanBase Database in a production environment by using the CLI, and also supports deploying OceanBase clusters by using the GUI. We recommend that you deploy OceanBase clusters by using the GUI. For more information, see Deploy OceanBase clusters by using the GUI.
Terms
Central control machine
The machine on which obd is installed. It stores the installation package and configuration information about OceanBase clusters and is used for cluster management.
Target machine
A machine on which OceanBase Database is installed.
OceanBase Database
OceanBase Database is a fully self-developed, enterprise-level, native distributed database. For more information, see OceanBase Database documentation.
obd
OceanBase Deployer (obd) is a tool for installing and deploying open source OceanBase software. For more information, see obd documentation.
ODP
OceanBase Database Proxy (ODP), also known as OBProxy, is a dedicated proxy server for OceanBase Database. For more information, see ODP documentation.
OBAgent
OceanBase Agent (OBAgent) is a framework for data monitoring and collection in OceanBase Database. It supports both pushing and pulling modes for data collection in different scenarios.
obconfigserver
OceanBase Configserver (obconfigserver) provides metadata registration, storage, and query services for OceanBase Database. For more information, see ob-configserver.
Grafana
Grafana is an open-source data visualization tool that enables you to visualize various metrics from data sources, helping you better understand system status and performance. For more information, see Grafana official website.
Prometheus
Prometheus is an open-source service monitoring system and time-series database. It provides a general data model as well as efficient interfaces for data collection, storage, and querying. For more information, see Prometheus official website.
Prerequisites
Before you deploy OceanBase Database, make sure that the following conditions are met:
Your server meets the software and hardware requirements. For more information, see Software and hardware requirements.
You have checked the environment and configurations as required for a production environment.
obd and OBClient are installed on the central control machine. We recommend that you install the latest versions. For more information, see the Install obd section and the OBClient documentation.
Note
When you install OceanBase All-in-One, both obd and OBClient are automatically installed. Therefore, if you intend to download and install OceanBase All-in-One by following the steps in the next section, you can skip these requirements.
Deployment mode
This topic describes the deployment mode of three replicas. We recommend that you deploy the system on four servers. You can choose an appropriate deployment mode based on your actual situation. The following table describes the usage of the four servers.
| Role | Server | Remarks |
|---|---|---|
| obd | 10.10.10.4 | The automatic deployment software installed on the central control server. |
| OBServer node | 10.10.10.1 | OceanBase Database Zone1 |
| OBServer node | 10.10.10.2 | OceanBase Database Zone2 |
| OBServer node | 10.10.10.3 | OceanBase Database Zone3 |
| ODP | 10.10.10.1, 10.10.10.2, 10.10.10.3 | Dedicated reverse proxy software for OceanBase Database. |
| OBAgent | 10.10.10.1, 10.10.10.2, 10.10.10.3 | The monitoring and data collection framework of OceanBase Database. |
| obconfigserver | 10.10.10.4 | Provides metadata registration, storage, and query services for OceanBase Database. |
| Prometheus | 10.10.10.4 | An open-source service monitoring system and time series database that provides a general data model and quick data collection, storage, and query interfaces. |
| Grafana | 10.10.10.4 | An open-source data visualization tool that can visualize various metrics from data sources to help you understand the system status and performance metrics more intuitively. |
Note
In a production environment, we recommend that you deploy ODP and the application on the same server to reduce the latency of access to ODP. You can deploy an ODP service on each application server.
The server on which you deploy ODP can have a different configuration from the server on which you deploy OceanBase Database. The minimum configuration for deploying ODP is 1 CPU core and 1 GB of memory.
Procedure
Notice
The following steps are based on the CentOS Linux 7.9 x86 architecture image. Other environments may vary slightly.
Before deploying an OceanBase cluster, we recommend that you switch to a non-root user for data security.
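For example, you can create a dedicated admin user on the central control machine and on each target machine before you start. This is only a sketch; adapt the username and the sudo policy to your own security requirements (sudo privileges are required only for features such as enable_auto_start, which is described later in this topic).
# Run as the root user on each machine.
[root@test001 ~]# useradd -U -m -d /home/admin -s /bin/bash admin
[root@test001 ~]# passwd admin
# Optional: grant sudo privileges if you plan to use features that require them.
[root@test001 ~]# echo 'admin ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/admin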
Step 1: (Optional) Download and install the all-in-one package
Starting from V4.0.0, OceanBase provides a unified installation package called OceanBase All in One. You can use this package to install components such as obd, OceanBase Database, ODP, and OBAgent all at once.
Alternatively, you can download and install specific components or versions from the OceanBase Download Center based on your requirements.
Online installation
If your machine can connect to the network, execute the following command for online installation.
[admin@test001 ~]$ bash -c "$(curl -s https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/download-center/opensource/oceanbase-all-in-one/installer.sh)"
[admin@test001 ~]$ source ~/.oceanbase-all-in-one/bin/env.sh
Offline installation
If your machine cannot connect to the network, follow these steps for offline installation.
Download the latest OceanBase All in One installation package from the OceanBase Download Center and copy it to any directory on the control node.
In the directory where the installation package is located, execute the following command to decompress and install the package.
[admin@test001 ~]$ tar -xzf oceanbase-all-in-one-*.tar.gz
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./install.sh
[admin@test001 bin]$ source ~/.oceanbase-all-in-one/bin/env.sh
Note
You can execute the which obd and which obclient commands to check whether the installation was successful. If the paths to obd and obclient in the .oceanbase-all-in-one directory are found, the installation was successful.
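For example, run the following commands on the central control machine. If both commands print a path under the .oceanbase-all-in-one directory, the installation was successful.
[admin@test001 ~]$ which obd
[admin@test001 ~]$ which obclient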
Step 2: Configure obd
Note
This topic uses obd V3.5.0 as an example. The configuration files of different obd versions may vary. Please refer to the actual configuration files. For more information about obd, see obd documentation.
If you are deploying an OceanBase cluster offline, you can download the all-in-one installation package to the central control server and install the package as described in Step 1. The components provided in the all-in-one installation package have been adapted to each other and are officially recommended.
If you are performing an offline deployment and have specific requirements for the versions of the components needed for installation, you can download the appropriate installation packages from OceanBase Download Center. Copy the downloaded packages to any directory on the central control machine, and then follow steps 1 to 3 below in that directory to upload the packages to the local obd image repository.
Note
If you want to deploy an OceanBase cluster in online mode, you can skip steps 1 to 3.
Disable the remote repository.
[admin@test001 rpm]$ obd mirror disable remote
Note
By default, the remote repository is disabled after you install OceanBase All in One. You can run the obd mirror list command to check whether the remote mirror source is disabled. If the value of the Enabled field for Type=remote is False, the remote mirror source is disabled.
Add the installation package to the local image library.
[admin@test001 rpm]$ obd mirror clone *.rpm
View the list of installation packages in the local image library.
[admin@test001 rpm]$ obd mirror list local
Select a configuration file.
If you installed obd by directly downloading it, you can view the sample configuration files provided by obd in the /usr/obd/example directory.
If you installed obd by decompressing OceanBase All in One, you can view the sample configuration files provided by obd in the ~/.oceanbase-all-in-one/obd/usr/obd/example directory. Select a configuration file based on your resource conditions.
Note
Configuration files are provided in two modes: small-scale development and professional development. The configuration items in the two modes are mostly the same, but the specifications are slightly different. You can select a configuration file based on your resource conditions.
Small-scale development mode: This mode is suitable for personal devices with at least 8 GB of memory. The names of the configuration files in this mode contain mini or min.
Professional development mode: This mode is suitable for high-end ECS instances or physical servers with at least 16 CPU cores and 64 GB of memory.
If you want to deploy an OceanBase cluster on a single node, you can refer to the configuration files for standalone deployment.
Sample configuration files for local standalone deployment: mini-local-example.yaml, local-example.yaml
Sample configuration files for standalone deployment: mini-single-example.yaml, single-example.yaml
Sample configuration files for standalone deployment with ODP: mini-single-with-obproxy-example.yaml, single-with-obproxy-example.yaml
If you want to deploy an OceanBase cluster on multiple nodes, you can refer to the configuration files for distributed deployment.
Sample configuration files for distributed deployment: mini-distributed-example.yaml, distributed-example.yaml
Sample configuration files for distributed deployment with ODP: mini-distributed-with-obproxy-example.yaml, distributed-with-obproxy-example.yaml
Sample configuration files for deploying all components: all-components-min.yaml, all-components.yaml
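For the three-node deployment described in this topic, you can copy a sample file as a starting point instead of writing the configuration from scratch. The following is a sketch that assumes you installed OceanBase All in One, so the samples are under ~/.oceanbase-all-in-one/obd/usr/obd/example; pick the sample that matches your resources and target topology.
[admin@test001 ~]$ cp ~/.oceanbase-all-in-one/obd/usr/obd/example/all-components.yaml ~/deploy.yaml
[admin@test001 ~]$ vim ~/deploy.yaml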
Modify the configuration file.
Note
You need to modify the following parameters based on your environment.
Create a configuration file.
vim deploy.yaml
The following example is for reference. You can customize the configuration file name based on your actual needs.
Modify the username and password.
## Only need to configure when remote login is required
user:
  username: admin
  # password: your password if need
  key_file: /home/admin/.ssh/id_rsa
  # port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
The username parameter specifies the username for logging in to the target node. Make sure that the username has write permissions for the home_path directory. The password and key_file parameters are used to verify the user. Usually, you only need to specify one of them.
Notice
After you configure the key path, if your key does not require a password, comment out or delete the password parameter to prevent it from being treated as the key password for login, which may cause verification failure.
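If you use key_file for login as in the preceding example and the central control machine cannot yet log in to the target machines without a password, you can distribute an SSH key first. The following is a minimal sketch; the admin user and the 10.10.10.x addresses follow the deployment plan in this topic.
# Generate a key pair on the central control machine (skip this step if ~/.ssh/id_rsa already exists).
[admin@test001 ~]$ ssh-keygen -t rsa
# Copy the public key to each target machine.
[admin@test001 ~]$ ssh-copy-id admin@10.10.10.1
[admin@test001 ~]$ ssh-copy-id admin@10.10.10.2
[admin@test001 ~]$ ssh-copy-id admin@10.10.10.3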
Modify the IP address, port, and related directories, and configure memory-related parameters and the password.
oceanbase-ce:
  # version: 4.3.3.0
  servers:
    # Please don't use hostname, only IP can be supported
    - name: server1
      ip: 10.10.10.1
    - name: server2
      ip: 10.10.10.2
    - name: server3
      ip: 10.10.10.3
  global:
    devname: eth0
    cluster_id: 1
    # please set memory limit to a suitable value which is matching resource.
    memory_limit: 64G # The maximum running memory for an observer
    system_memory: 30G # The reserved system memory. system_memory is reserved for general tenants.
    datafile_size: 192G # Size of the data file.
    datafile_next: 200G
    datafile_maxsize: 1T
    log_disk_size: 192G # The size of disk space used by the clog files.
    scenario: htap
    enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
    max_syslog_file_count: 4 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
    enable_auto_start: true
    # observer cluster name, consistent with obproxy's cluster_name
    appname: obdemo
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    obshell_port: 2886 # Operation and maintenance port for OceanBase Database.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/observer
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog. The default value is the same as the data_dir value.
    redo_dir: /redo
    root_password: ****** # root user password, can be empty
    proxyro_password: ******** # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  server1:
    zone: zone1
  server2:
    zone: zone2
  server3:
    zone: zone3
If there are inconsistent configuration items among the nodes, you can move the relevant configuration items to the corresponding server section. The configuration items under a server section take precedence over those in the global section. For example, if the port is configured differently on two nodes:
  server2:
    mysql_port: 3881
    rpc_port: 3882
    zone: zone2
  server3:
    mysql_port: 2881
    rpc_port: 2882
    zone: zone3
| Parameter | Required | Default value | Description |
|---|---|---|---|
| version | Optional | The latest version in the image library | The version of the component to be deployed. Usually, this parameter is not specified. |
| servers | Required | N/A | For each server, specify - name: server name (press Enter) ip: server IP address. Repeat this operation for each server. The server names must be unique. If the server IP addresses are unique, you can also specify - <ip> (press Enter) - <ip> to indicate multiple servers. In this case, - <ip> is equivalent to - name: server IP address (press Enter) ip: server IP address. |
| devname | Optional | N/A | The network card corresponding to the IP address specified in servers. You can run the ip addr command to view the mapping between the IP address and the network card. |
| memory_limit | Optional | 0 | The maximum amount of memory that the observer process can obtain from the environment. If this parameter is not specified, the value of the memory_limit_percentage parameter is used. For more information, see memory_limit and memory_limit_percentage. |
| system_memory | Optional | 0M | The amount of memory reserved for the system. This value occupies part of the memory specified by memory_limit. If this parameter is not specified, OceanBase Database automatically adapts it. |
| datafile_size | Optional | 0 | The size of the block_file of the corresponding node. If this parameter is not specified, the value of the datafile_disk_percentage parameter is used. For more information, see datafile_size and datafile_disk_percentage. |
| datafile_next | Optional | 0 | The growth step of the disk space, used to set automatic expansion. If this parameter is not specified and you want to enable automatic expansion, see Configure dynamic expansion of disk data files. |
| datafile_maxsize | Optional | 0 | The maximum available limit of the disk space, used to set automatic expansion. If this parameter is not specified and you want to enable automatic expansion, see Configure dynamic expansion of disk data files. |
| log_disk_size | Optional | 0 | The size of the redo log disk. If this parameter is not specified, the value of the log_disk_percentage parameter is used. For more information, see log_disk_size and log_disk_percentage. |
| scenario | Optional | htap | The load type of the cluster. If this parameter is not specified, an interactive option is provided for you to select the load type. Valid values: express_oltp, suitable for trade and payment core systems and high-throughput internet applications, with no foreign keys, stored procedures, long transactions, large transactions, complex joins, or complex subqueries; complex_oltp, suitable for banking and insurance systems, which usually have complex joins, complex correlated subqueries, batch jobs written in PL, long transactions, and large transactions, and sometimes use parallel execution for short-running queries; olap, suitable for real-time data warehouse analysis; htap, suitable for mixed OLAP and OLTP workloads, typically used for real-time insights from active operational data, fraud detection, and personalized recommendations; kv, suitable for key-value and wide-column workloads similar to HBase, which usually have very high throughput and are sensitive to latency. |
| enable_syslog_wf | Optional | true | Specifies whether to print system logs of WARN level or higher to a separate log file. |
| max_syslog_file_count | Optional | 0 | The maximum number of log files that can be retained before they are recycled. If the value is 0, no automatic cleanup is performed. |
| enable_auto_start | Optional | 0 | Specifies whether to enable the automatic startup of the observer process. If this parameter is set to true, the observer process is automatically started when an OBServer node restarts. Note: Make sure that the deployment user has sudo privileges and that the deployment environment is not a container environment. |
| appname | Optional | obcluster | The name of the OceanBase cluster. |
| mysql_port | Required | 2881 | The port number of the SQL service protocol. The default value is 2881. |
| rpc_port | Required | 2882 | The port number of the remote access protocol, which is the RPC communication port between the observer process and the processes of other nodes. The default value is 2882. |
| obshell_port | Required | 2886 | The port number of the OceanBase Database O&M service. The default value is 2886. |
| home_path | Required | N/A | The installation path of OceanBase Database. We recommend that you install it under the admin user. |
| data_dir | Optional | $home_path/store | The directory for storing data such as SSTables. We recommend that you configure an independent disk for this parameter. |
| redo_dir | Optional | The same as data_dir | The directory for storing clogs. By default, this parameter is the same as data_dir. We recommend that you configure an independent disk for this parameter. |
| root_password | Optional | A random string | The password of the super administrator (root@sys) of the OceanBase cluster. We recommend that you set a complex password. If you use obd V2.1.0 or later and do not specify this parameter, a random string is automatically generated. |
| proxyro_password | Optional | A random string | The password of the proxyro@sys account used for connecting to the OceanBase cluster. If you use obd V2.1.0 or later and do not specify this parameter, a random string is automatically generated. |
Configure the obproxy-ce component and modify the IP address and the home_path parameter.
obproxy-ce:
  # version: 4.3.4.0
  # package_hash: 5c5dca3a355cc7286146ed978ebb26a6342e42e02cd3ca8b9739d300519a449f
  depends:
    - oceanbase-ce
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: 10.10.10.1
    - name: server2
      ip: 10.10.10.2
    - name: server3
      ip: 10.10.10.3
  global:
    prometheus_listen_port: 2884
    listen_port: 2883
    rpc_listen_port: 2885
    enable_obproxy_rpc_service: true
    # vip_address: "10.10.10.5"
    # vip_port: 3306
    home_path: /home/admin/obtest/obproxy
    client_session_id_version: 2
    proxy_id: 822
    obproxy_sys_password: ********
    observer_sys_password: ********
    skip_proxy_sys_private_check: true
    enable_strict_kernel_release: false
    enable_cluster_checkout: false
| Parameter | Required | Default value | Description |
|---|---|---|---|
| version | Optional | The latest version in the image library | Specifies the version of the component to be deployed. Usually, you do not need to specify this parameter. |
| depends | Optional | None | Specifies the component dependencies. After this parameter is configured, the current component automatically obtains information from the dependent components. If the component has dependencies, this parameter is required. |
| servers | Required | None | For each server, use - name: server identifier (line break) ip: server IP to specify the server. If multiple servers are specified, repeat this parameter for each server. The server identifiers cannot be repeated. If the server IP addresses are unique, you can also use - <ip> (line break) - <ip> to specify the servers. In this case, - <ip> is equivalent to - name: server IP (line break) ip: server IP. |
| listen_port | Required | 2883 | The ODP listening port. The default value is 2883. |
| prometheus_listen_port | Required | 2884 | The ODP Prometheus listening port. The default value is 2884. |
| rpc_listen_port | Optional | 2885 | The RPC service listening port of ODP. The default value is 2885. Note: This parameter takes effect only when you deploy ODP V4.3.0 or later and set enable_obproxy_rpc_service to true. |
| enable_obproxy_rpc_service | Optional | true | Specifies whether to enable the RPC service of ODP. The default value true indicates that the RPC service is enabled. If the value is false, the RPC service is disabled, and the rpc_listen_port parameter will not take effect. Note: This parameter takes effect only when you deploy ODP V4.3.0 or later. If this parameter is not specified, its default value will be set to false after ODP is upgraded. |
| vip_address | Optional | None | When you deploy multiple ODP nodes, you can configure the VIP address through this parameter. Notice: obd V3.3.0 and later support configuring a VIP for ODP. obd does not provide load balancing deployment; if you need load balancing, deploy and configure the corresponding load balancer in advance. |
| vip_port | Optional | None | When you deploy multiple ODP nodes, you can configure the VIP access port through this parameter. Notice: obd V3.3.0 and later support configuring a VIP for ODP. obd does not provide load balancing deployment; if you need load balancing, deploy and configure the corresponding load balancer in advance. |
| home_path | Required | None | The installation path of ODP. We recommend that you install ODP as the admin user. |
| client_session_id_version | Optional | 2 | Specifies whether to use the new Client Session ID generation logic. Valid values: 1 and 2. If this parameter is set to 2, the new Client Session ID generation logic is used. |
| proxy_id | Optional | 0 | Specifies the ID of ODP to ensure that the Client Session IDs generated by different ODP instances do not conflict. The value range of proxy_id depends on client_session_id_version: if client_session_id_version is 1, the value range is [0, 255]; if client_session_id_version is 2, the value range is [0, 8191]. |
| obproxy_sys_password | Optional | A random string | The password of the ODP administrator account (root@proxysys). If you use obd V2.1.0 or later and do not specify this parameter, a random string is automatically generated. |
| observer_sys_password | Optional | A random string that is the same as proxyro_password by default | The password of the account (proxyro@sys) used by ODP to connect to the OceanBase cluster. The password must be the same as the value of proxyro_password. If you use obd V2.1.0 or later and do not specify this parameter, a random string that is the same as proxyro_password is automatically generated. |
Modify the monitoring component.
obagent:
  depends:
    - oceanbase-ce
  # The list of servers to be monitored. This list is consistent with the servers in oceanbase-ce.
  servers:
    - name: server1
      ip: 10.10.10.1
    - name: server2
      ip: 10.10.10.2
    - name: server3
      ip: 10.10.10.3
  global:
    monagent_http_port: 8088
    mgragent_http_port: 8089
    home_path: /home/admin/obagent
    http_basic_auth_password: ********
prometheus:
  servers:
    - 10.10.10.4
  depends:
    - obagent
  global:
    port: 9090
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /home/admin/prometheus
    basic_auth_users:
      # <username>: <password>, the key is the user name and the value is the password.
      admin: ********
grafana:
  servers:
    - 10.10.10.4
  depends:
    - prometheus
  global:
    port: 3000
    login_password: ********
    home_path: /home/admin/grafana
| Parameter | Required | Default value | Description |
|---|---|---|---|
| servers | Yes | N/A | Each server must be specified with - name: server name (line break) ip: server IP address. You can specify multiple servers. The server names must be unique. If the server IP addresses are unique, you can also specify them in the - <ip> (line break) - <ip> format. In this case, - <ip> is equivalent to - name: server IP address (line break) ip: server IP address. |
| depends | No | N/A | Specifies the dependencies of the component. After the component is configured, it automatically obtains information from the dependent components. If the component has dependencies, this parameter is required. |
| home_path | Yes | N/A | The installation path of the component. We recommend that you specify a path under the admin user account. |
| monagent_http_port | Yes | 8088 | The HTTP port of the monitoring service of OBAgent. |
| mgragent_http_port | Yes | 8089 | The HTTP port of the management service of OBAgent. |
| http_basic_auth_password | No | A random string | The HTTP authentication password. If you customize the password, it must be at least one character in length and can contain digits (0-9), uppercase letters (A-Z), lowercase letters (a-z), and special characters (~^*{}[]_-+). If you do not configure this parameter, obd generates a random string. After the deployment is completed, you can run the obd cluster edit-config command to view the password in the corresponding parameter in the configuration file. |
| basic_auth_users | No | admin: ***** (admin is the username and ***** is a randomly generated password) | The authentication information of the Prometheus Web service. The key is the username and the value is the password. |
| login_password | No | A random string | The login password of Grafana. After the deployment is completed, you can run the obd cluster edit-config command to view the password in the corresponding parameter in the configuration file. |
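The sample configuration above stores data files in /data and clogs in /redo on each OBServer node. If these directories do not exist yet or are not writable by the deployment user, prepare them before you deploy the cluster. The following is a sketch that assumes the disks are already mounted at those paths and that the admin user has sudo privileges; repeat the command for 10.10.10.2 and 10.10.10.3.
[admin@test001 ~]$ ssh admin@10.10.10.1 "sudo mkdir -p /data /redo && sudo chown -R admin:admin /data /redo"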
Step 3: Deploy an OceanBase cluster
Note
For more information about the commands used in this section, see Cluster commands in obd Documentation.
Deploy an OceanBase cluster
[admin@test001 ~]$ obd cluster deploy obtest -c deploy.yaml
If your system is connected to the Internet, after you run the obd cluster deploy command, obd checks whether the installation package is available on the target server. If not, obd automatically downloads the package from the YUM source.
Start the OceanBase cluster
[admin@test001 ~]$ obd cluster start obtest
After the cluster is started, the IP addresses of the obshell Dashboard and the monitoring components are displayed. In Alibaba Cloud or other cloud environments, the installation program may output the intranet IP address because it cannot obtain the public IP address. You must use the correct IP address.
View the status of the OceanBase cluster
# View the list of clusters managed by obd.
[admin@test001 ~]$ obd cluster list
# View the status of the obtest cluster.
[admin@test001 ~]$ obd cluster display obtest
(Optional) Modify the cluster configuration
OceanBase Database has hundreds of configuration items, some of which are coupled. We recommend that you do not modify the sample configuration file until you are familiar with OceanBase Database. This section provides an example to show you how to modify the configuration and make the modification effective.
Run the obd cluster edit-config command to enter the editing mode and modify the cluster configuration.
[admin@test001 ~]$ obd cluster edit-config obtest
After you modify and save the configuration, obd displays the command that you need to run to make the modification take effect. The following is an example of the output:
Search param plugin and load ok
Search param plugin and load ok
Parameter check ok
Save deploy "obtest" configuration
Use `obd cluster reload obtest` to make changes take effect.
Copy and run the command displayed by obd.
[admin@test001 ~]$ obd cluster reload obtest
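To confirm that a parameter change has taken effect, you can query it from the sys tenant after connecting with OBClient as described in the next step. The following is only a sketch, assuming you changed max_syslog_file_count as in the sample configuration above.
obclient [(none)]> SHOW PARAMETERS LIKE 'max_syslog_file_count';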
Step 4: Connect to the OceanBase cluster
This section describes how to connect to the OceanBase cluster by using the OBClient client.
obclient -h<IP> -P<PORT> -u<user_name>@<tenant_name>#<cluster_name> -p -c -A
# example
obclient -h10.10.10.1 -P2883 -uroot@sys#obdemo -p -c -A
Parameter description:
-h: specifies the IP address for connecting to the OceanBase cluster. If you directly connect to the cluster, specify the IP address of the OBServer node. If you connect to the cluster by using an ODP, specify the IP address of the ODP.
-u: specifies the account for connecting to the OceanBase cluster. The account can be in the following formats:
username@tenant name#cluster name, cluster name:tenant name:username, cluster name-tenant name-username, or cluster name.tenant name.username. The default username of the administrator of a MySQL tenant is root.
Notice
The cluster name used for connection is the value of the appname parameter in the configuration file, not the deploy name specified during deployment.
-P: specifies the port for connecting to the OceanBase cluster. If you directly connect to the cluster, specify the value of the mysql_port parameter. If you connect to the cluster by using an ODP, specify the value of the listen_port parameter.
-p: specifies the password for connecting to the OceanBase cluster.
-c: specifies that comments are not ignored in the OBClient running environment.
Note
Hint is a special comment that is not affected by the -c option.
-A: specifies that statistical information is not automatically obtained when the OBClient connects to the database.
For more information about how to connect to the OceanBase cluster, see Connect to OceanBase Database.
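After you log in as the root@sys user, you can run a quick check to confirm that all OBServer nodes are online. This is only a sketch for verification; the DBA_OB_SERVERS and DBA_OB_TENANTS views are available in the sys tenant of OceanBase Database V4.x.
obclient [(none)]> SELECT SVR_IP, ZONE, STATUS FROM oceanbase.DBA_OB_SERVERS;
obclient [(none)]> SELECT TENANT_NAME, STATUS FROM oceanbase.DBA_OB_TENANTS;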
Step 5: Create a user tenant
After you deploy an OceanBase cluster, we recommend that you create a user tenant for business operations. The sys tenant is used only for cluster management and is not suitable for business scenarios.
You can create a user tenant by using one of the following methods:
Method 1: Use obd to create a user tenant.
obd cluster tenant create <deploy name> [-n <tenant name>] [flags]
# example
obd cluster tenant create obtest -n obmysql --max-cpu=2 --memory-size=2G --log-disk-size=3G --max-iops=10000 --iops-weight=2 --unit-num=1 --charset=utf8
By default, this command creates a tenant based on all remaining available resources of the cluster. You can configure the parameters to allocate a specific amount of resources to the tenant. For more information about this command, see obd cluster tenant create in Cluster commands.
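After the tenant is created, you can connect to it through ODP and create a business database. The following is a minimal sketch: obmysql is the tenant created in the example above, obdemo is the appname in the configuration file, 10.10.10.1 is one of the ODP nodes in this plan, and testdb is a hypothetical database name. Whether the root user of the new tenant has a password depends on your obd version and options, so check the output of the tenant create command.
[admin@test001 ~]$ obclient -h10.10.10.1 -P2883 -uroot@obmysql#obdemo -p -c -A
obclient [(none)]> CREATE DATABASE testdb;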
Method 2: Create a user tenant by using the CLI. For more information, see Create a tenant.
Related operations
Manage the cluster
You can run the following commands to manage OceanBase clusters deployed by using obd. For more information, see Cluster commands in obd Documentation.
# View the cluster list.
obd cluster list
# View the status of the cluster. The deployment name is obtest.
obd cluster display obtest
# Stop the running cluster. The deployment name is obtest.
obd cluster stop obtest
# Destroy the deployed cluster. The deployment name is obtest.
obd cluster destroy obtest
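obd also provides other cluster commands, such as restarting a running cluster; see Cluster commands in the obd documentation for the full list. The following is a sketch that uses the same deployment name as above.
# Restart the running cluster. The deployment name is obtest.
obd cluster restart obtest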
Uninstall OceanBase All in One
You can run the following commands to uninstall OceanBase All in One.
Uninstall the installation package.
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./uninstall.sh
Remove the environment variables.
[admin@test001 bin]$ vim ~/.bash_profile
# Remove the line "source /home/admin/.oceanbase-all-in-one/bin/env.sh" from this file.
Make the changes take effect.
After you remove the environment variables, you must log in to the terminal again or run the source command to make the changes take effect. You can run the following command:
[admin@test001 ~]$ source ~/.bash_profile