This topic describes how to deploy OceanBase Database in a production environment by running OceanBase Deployer (obd) commands on the CLI.
Note
You can also deploy an OceanBase cluster through the GUI of obd. We recommend that you choose this GUI-based deployment method. For more information, see Deploy an OceanBase cluster through the GUI of obd.
Terms
Central control server
The server on which obd is installed. It stores the installation package and configuration information about OceanBase clusters and is used for cluster management.
Target server
The server where OceanBase Database is installed.
OceanBase Database
A distributed database for mission-critical workloads at any scale. For more information, see OceanBase Database documentation.
obd
OceanBase Deployer (obd) is the OceanBase installation and deployment tool. For more information, see obd documentation.
ODP
OceanBase Database Proxy (ODP), also known as OBProxy, is a dedicated proxy service designed for OceanBase Database. For more information, see ODP documentation.
OBAgent
OBAgent is a monitoring and collection framework for OceanBase Database. It supports both push and pull data collection modes to meet different application scenarios.
obconfigserver
OceanBase Configserver (ob-configserver) provides metadata registration, storage, and query services for OceanBase. For more information, see ob-configserver.
Grafana
Grafana is an open-source data visualization tool that visualizes various metrics from data sources for a more intuitive view of system status and performance. For more information, see Grafana official website.
Prometheus
Prometheus is an open-source service monitoring system and time-series database that provides a universal data model and convenient interfaces for data collection, storage, and query. For more information, see Prometheus official website.
Prerequisites
Before you deploy OceanBase Database, make sure that the following requirements are met:
Your server meets the software and hardware requirements. For more information, see Software and hardware requirements.
In the production environment, you must perform environment and configuration checks. For specific operations, see Configuration before deployment.
You have installed obd and OBClient on the central control server. We recommend that you install the latest versions. For more information, see Quick start - Install obd and OBClient documentation.
Note
When you install OceanBase All-in-One, both obd and OBClient are automatically installed. Therefore, if you intend to download and install OceanBase All-in-One by following the steps in the next section, you can skip these requirements.
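The environment checks are described in detail in the referenced topic. As an illustration only, the following sketch reads one commonly tuned limit (the open-file limit) for the current user. The 655350 threshold is an assumed common recommendation for OceanBase deployment users, not a value taken from this topic.

```shell
# Illustrative only: read the open-file limit of the current shell user.
# The 655350 threshold is an assumed common recommendation, not a hard rule.
open_files=$(ulimit -n)
echo "open file limit: $open_files"
if [ "$open_files" = "unlimited" ] || [ "$open_files" -ge 655350 ]; then
  echo "open file limit looks sufficient"
else
  echo "consider raising the nofile limit for the deployment user"
fi
```

The authoritative list of checks, including kernel parameters and disk layout, is in Configuration before deployment.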
Deployment mode
This topic describes a three-replica deployment mode. We recommend that you use four servers. You can choose an appropriate deployment solution based on your actual situation. The following table describes how the four servers are used in this topic.
| Role | Server | Remarks |
|---|---|---|
| obd | 10.10.10.4 | The automatic deployment software installed on the central control server. |
| OBServer node | 10.10.10.1 | OceanBase Database Zone1 |
| OBServer node | 10.10.10.2 | OceanBase Database Zone2 |
| OBServer node | 10.10.10.3 | OceanBase Database Zone3 |
| ODP | 10.10.10.1, 10.10.10.2, and 10.10.10.3 | A reverse proxy designed for OceanBase Database. |
| OBAgent | 10.10.10.1, 10.10.10.2, and 10.10.10.3 | A monitoring and collection framework for OceanBase Database. |
| ob-configserver | 10.10.10.4 | Provides metadata registration, storage, and query services for OceanBase. |
| Prometheus | 10.10.10.4 | An open-source service monitoring system and time-series database that provides a universal data model and convenient interfaces for data collection, storage, and query. |
| Grafana | 10.10.10.4 | An open-source data visualization tool that visualizes various metrics from data sources for a more intuitive view of system status and performance. |
Note
In the production environment, we recommend that you deploy your applications on the same servers as ODP to minimize the latency of access from ODP to the applications. That is, you can deploy an ODP service on each application server. In this example, ODP is separately deployed for ease of illustration.
The configuration of the ODP server can differ from that of the servers where OceanBase Database is deployed. The server where ODP is to be deployed requires at least one CPU core and 1 GB of memory.
For more information about how to deploy a standalone database, see Deploy OceanBase Database on a single OBServer node in obd Documentation.
Procedure
Notice
The following describes the deployment of OceanBase Database on an x86-based CentOS Linux 7.9 platform. The procedure may be different on other OS platforms.
Before you deploy the OceanBase cluster, we recommend that you switch to a non-root user for data security.
Step 1: (Optional) Download and install the all-in-one package
OceanBase Database provides an all-in-one installation package since V4.0.0. You can use this package to install obd, OceanBase Database, ODP, OBAgent, and other components at the same time.
You can download and install desired components of specified versions from OceanBase Download Center.
Online installation
If your server can connect to the Internet, run the following commands to install the components online:
[admin@test001 ~]$ bash -c "$(curl -s https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/download-center/opensource/oceanbase-all-in-one/installer.sh)"
[admin@test001 ~]$ source ~/.oceanbase-all-in-one/bin/env.sh
Offline installation
If your server cannot connect to the Internet, perform the following steps to install the components offline:
Download the latest all-in-one package from OceanBase Download Center and copy it to any directory on the central control server.
In the directory where the package is located, run the following commands to decompress and install the package:
[admin@test001 ~]$ tar -xzf oceanbase-all-in-one-*.tar.gz
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./install.sh
[admin@test001 bin]$ source ~/.oceanbase-all-in-one/bin/env.sh
Note
You can run the `which obd` and `which obclient` commands to check whether the installation is successful. If the returned obd and obclient paths are in the `.oceanbase-all-in-one` directory, the installation is successful.
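If you want to script this check, a minimal sketch is shown below. The `check_in_dir` helper is hypothetical (not part of obd); it is demonstrated with `sh` so that it runs anywhere, and the commented lines show how it would be applied to obd and OBClient.

```shell
# Hypothetical helper: report whether a binary resolves from an expected
# directory prefix.
check_in_dir() {
  bin_path=$(command -v "$1") || { echo "missing: $1"; return 1; }
  case "$bin_path" in
    "$2"*) echo "ok: $bin_path" ;;
    *)     echo "elsewhere: $bin_path" ;;
  esac
}
# After installing the all-in-one package you would run, for example:
#   check_in_dir obd "$HOME/.oceanbase-all-in-one"
#   check_in_dir obclient "$HOME/.oceanbase-all-in-one"
check_in_dir sh /
```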
Step 2: Configure obd
Note
This topic uses obd V3.5.0 as an example. The configuration file may vary with the obd version. For more information about obd, see obd documentation.
If you are deploying an OceanBase cluster offline, you can download the all-in-one installation package to the central control server and install the package as described in Step 1. The components provided in the all-in-one installation package have been adapted to each other and are officially recommended.
You can also download the installation package of the desired version for a component from OceanBase Download Center. Then, copy the package to any directory on the central control server and perform the following steps to configure obd.
Note
If you are deploying the OceanBase cluster online, skip the following steps 1 to 3.
Disable remote repositories.
[admin@test001 rpm]$ obd mirror disable remote

Note

After you install the all-in-one package, the remote repositories are automatically disabled. You can run the `obd mirror list` command for confirmation. If the values of the remote repositories in the `Enabled` column are changed to `False`, the remote image sources are disabled.

Add the installation package to the local image repository.

[admin@test001 rpm]$ obd mirror clone *.rpm

View the list of installation packages in the local image repository.

[admin@test001 rpm]$ obd mirror list local

Select a configuration file.
If obd is directly downloaded and installed on your server, you can view the sample configuration files provided by obd in the `/usr/obd/example` directory.

If obd is installed by using the all-in-one package, you can view the sample configuration files provided by obd in the `~/.oceanbase-all-in-one/obd/usr/obd/example` directory. Select the corresponding configuration file based on your resource conditions.

Note

Two types of configuration files are available: one for the small-scale development mode and one for the professional development mode. The configuration files for the two modes contain the same parameters but different parameter values. You can select a file based on your resource conditions.

The small-scale development mode applies to individual devices with at least 8 GB of memory. The configuration file name contains `mini` or `min`.

The professional development mode applies to advanced Elastic Compute Service (ECS) instances or physical servers with at least 16 CPU cores and 64 GB of memory.
If you select standalone deployment, only one target server is required. In this case, you can use a configuration file for standalone deployment.

- Sample configuration files for local standalone deployment: `mini-local-example.yaml` and `local-example.yaml`
- Sample configuration files for standalone deployment: `mini-single-example.yaml` and `single-example.yaml`
- Sample configuration files for standalone deployment with ODP: `mini-single-with-obproxy-example.yaml` and `single-with-obproxy-example.yaml`

If you select distributed deployment, multiple target servers are required. In this case, you can use a configuration file for distributed deployment.

- Sample configuration files for distributed deployment: `mini-distributed-example.yaml` and `distributed-example.yaml`
- Sample configuration files for distributed deployment with ODP: `mini-distributed-with-obproxy-example.yaml` and `distributed-with-obproxy-example.yaml`
- Sample configuration files for distributed deployment with all components: `all-components-min.yaml` and `all-components.yaml`
Modify the configuration file.
Note
You need to modify related parameters based on the actual environment.
Create a configuration file.
vim deploy.yaml

This is only an example. You can customize the configuration file name based on your actual situation.
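Before filling in the user section in the next step, you may want to confirm that key-based SSH login from the central control server to each target server works. The loop below is a hypothetical pre-flight check, not an obd feature; the IP addresses are the ones used throughout this topic, and unreachable hosts are reported instead of aborting the script.

```shell
# Hypothetical pre-flight check: verify key-based SSH login to each target.
results=""
for ip in 10.10.10.1 10.10.10.2 10.10.10.3; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "admin@$ip" true 2>/dev/null; then
    results="$results $ip=ok"
  else
    results="$results $ip=unreachable"
  fi
done
echo "ssh check:$results"
```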
Change the username and password.
```yaml
## Only need to configure when remote login is required
user:
  username: admin
  # password: your password if need
  key_file: /home/admin/.ssh/id_rsa
  # port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
```

Here, `username` specifies the username for logging in to the target server. Make sure that this account has the write privilege on `home_path`. Both `password` and `key_file` are used for user verification. Generally, you only need to specify one of them.

Notice

After you specify the key path, comment out or delete the `password` field if your key does not require a passphrase. Otherwise, the value of `password` will be treated as the key passphrase for login, leading to verification failure.

Modify the IP address, port, and related directories of each server, and specify memory-related parameters and the password.
```yaml
oceanbase-ce:
  # version: 4.3.3.0
  servers:
    # Please don't use hostname, only IP can be supported
    - name: server1
      ip: 10.10.10.1
    - name: server2
      ip: 10.10.10.2
    - name: server3
      ip: 10.10.10.3
  global:
    devname: eth0
    cluster_id: 1
    # please set memory limit to a suitable value which is matching resource.
    memory_limit: 64G # The maximum running memory for an observer
    system_memory: 30G # The reserved system memory. system_memory is reserved for general tenants.
    datafile_size: 192G # Size of the data file.
    datafile_next: 200G
    datafile_maxsize: 1T
    log_disk_size: 192G # The size of disk space used by the clog files.
    scenario: htap
    enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
    max_syslog_file_count: 4 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
    enable_auto_start: true
    # observer cluster name, consistent with obproxy's cluster_name
    appname: obdemo
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    obshell_port: 2886 # Operation and maintenance port for OceanBase Database.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/observer
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog. The default value is the same as the data_dir value.
    redo_dir: /redo
    root_password: ****** # root user password, can be empty
    proxyro_password: ******** # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  server1:
    zone: zone1
  server2:
    zone: zone2
  server3:
    zone: zone3
```

If servers have inconsistent parameters, move the relevant parameters to the corresponding server sections. Parameters in server sections have higher priority than those in `global`. For example, to configure different ports for two servers:

```yaml
server2:
  mysql_port: 3881
  rpc_port: 3882
  zone: zone2
server3:
  mysql_port: 2881
  rpc_port: 2882
  zone: zone3
```

| Parameter | Required? | Default value | Description |
|---|---|---|---|
| version | No | The latest version in the image repository | The version of the component to be deployed. Generally, you do not need to specify this parameter. |
| servers | Yes | N/A | Specify each server in the format of `- name: server identifier (line feed) ip: server IP address`. You can specify multiple servers. The server identifiers must be unique.<br>If the server IP addresses are unique, you can also specify the servers in the `- <ip> (line break) - <ip>` format. In this case, `- <ip>` is equivalent to `- name: server IP address (line break) ip: server IP address`. |
| devname | No | N/A | The NIC corresponding to the IP address specified in servers. You can view the mapping between IP addresses and NICs by using the `ip addr` command. |
| memory_limit | No | 0 | The maximum memory that the observer process can obtain from the environment. If this parameter is not configured, the `memory_limit_percentage` parameter takes effect. For more information about the two parameters, see memory_limit and memory_limit_percentage. |
| system_memory | No | 0M | The reserved system memory, which is part of memory_limit. If this parameter is not configured, OceanBase Database reserves memory for the system in adaptive mode. |
| datafile_size | No | 0 | The size of the data file `block_file` on the corresponding OBServer node. If this parameter is not configured, the `datafile_disk_percentage` parameter takes effect. For more information, see datafile_size and datafile_disk_percentage. |
| datafile_next | No | 0 | The auto scaling step of disk space. If this parameter is not configured and you want to enable auto scaling, see Configure auto scaling of disk data files. |
| datafile_maxsize | No | 0 | The maximum disk space allowed for auto scaling. If this parameter is not configured and you want to enable auto scaling, see Configure auto scaling of disk data files. |
| log_disk_size | No | 0 | The size of the redo log disk. If this parameter is not configured, the `log_disk_percentage` parameter takes effect. For more information, see log_disk_size and log_disk_percentage. |
| scenario | No | htap | The cluster workload type. If this parameter is not configured, an interactive prompt allows you to select the workload type. Valid values:<br>- `express_oltp`: suitable for workloads such as trading, payment core systems, and high-throughput Internet applications. These workloads typically have no foreign key constraints, no stored procedures, no long or large transactions, and no complex joins or subqueries.<br>- `complex_oltp`: designed for workloads like banking and insurance systems, which often involve complex joins, correlated subqueries, batch jobs written in PL, and long or large transactions. Parallel execution may sometimes be used for short-running queries.<br>- `olap`: ideal for real-time data warehouse analytics scenarios.<br>- `htap`: suitable for mixed OLAP and OLTP workloads, commonly used to gain instant insights from operational data, for fraud detection, and for personalized recommendations.<br>- `kv`: best for key-value workloads and wide-column workloads similar to HBase, which typically require extremely high throughput and are sensitive to latency. |
| enable_syslog_wf | No | true | Specifies whether to print system logs above the WARN level to a separate log file. |
| max_syslog_file_count | No | 0 | The maximum number of system log files that can be retained before auto recycling is enabled. The value `0` indicates that auto recycling is disabled. |
| enable_auto_start | No | 0 | Specifies whether to enable auto-start of the observer process. When this parameter is set to `true`, the observer process is automatically started if the OBServer node restarts.<br>Note: When you configure this parameter, make sure that the deployment user has the sudo privilege and the deployment environment is not a container. |
| appname | No | obcluster | The name of the OceanBase cluster. |
| mysql_port | Yes | 2881 | The SQL port number. Default value: 2881. |
| rpc_port | Yes | 2882 | The Remote Procedure Call (RPC) port number, which is used for communication between the observer process and other node processes. Default value: 2882. |
| obshell_port | Yes | 2886 | The O&M port of OceanBase Database. Default value: 2886. |
| home_path | Yes | N/A | The installation path of OceanBase Database. We recommend that you choose a path under the admin user. |
| data_dir | No | $home_path/store | The directory for storing SSTables and other data. We recommend that you use a separate disk. |
| redo_dir | No | Same as `data_dir` | The directory for storing clogs. We recommend that you use a separate disk. |
| root_password | No | Empty by default for obd V2.0.0 and earlier<br>A random string by default for obd V2.1.0 and later | The password of the super administrator (root@sys) of the OceanBase cluster. We recommend that you set a complex password. If you do not specify this parameter in obd V2.1.0 or later, a random string is automatically generated. |
| proxyro_password | No | Empty by default for obd V2.0.0 and earlier<br>A random string by default for obd V2.1.0 and later | The password of the account (proxyro@sys) for connecting to the OceanBase cluster through ODP. If you do not specify this parameter in obd V2.1.0 or later, a random string is automatically generated. |

Configure the obproxy-ce component and modify the IP address and `home_path`.

```yaml
obproxy-ce:
  # version: 4.3.4.0
  # package_hash: 5c5dca3a355cc7286146ed978ebb26a6342e42e02cd3ca8b9739d300519a449f
  depends:
    - oceanbase-ce
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: 10.10.10.1
    - name: server2
      ip: 10.10.10.2
    - name: server3
      ip: 10.10.10.3
  global:
    prometheus_listen_port: 2884
    listen_port: 2883
    rpc_listen_port: 2885
    enable_obproxy_rpc_service: true
    # vip_address: "10.10.10.5"
    # vip_port: 3306
    home_path: /home/admin/obtest/obproxy
    client_session_id_version: 2
    proxy_id: 822
    obproxy_sys_password: ********
    observer_sys_password: ********
    skip_proxy_sys_private_check: true
    enable_strict_kernel_release: false
    enable_cluster_checkout: false
```

| Parameter | Required? | Default value | Description |
|---|---|---|---|
| version | No | The latest version in the image repository | The version of the component to be deployed. Generally, you do not need to specify this parameter. |
| depends | No | N/A | The component dependencies. Once configured, the current component automatically retrieves information from the dependent components. If a dependency exists between components, this parameter is required. |
| servers | Yes | N/A | Specify each server in the format of `- name: server identifier (line feed) ip: server IP address`. You can specify multiple servers. The server identifiers must be unique.<br>If the server IP addresses are unique, you can also specify the servers in the `- <ip> (line break) - <ip>` format. In this case, `- <ip>` is equivalent to `- name: server IP address (line break) ip: server IP address`. |
| listen_port | Yes | 2883 | The ODP listening port. Default value: 2883. |
| prometheus_listen_port | Yes | 2884 | The listening port of ODP Prometheus. Default value: 2884. |
| rpc_listen_port | No | 2885 | The port for RPC communication in ODP. Default value: 2885.<br>Note: This parameter takes effect only when ODP V4.3.0 or later is deployed and the value of `enable_obproxy_rpc_service` is `true`. |
| enable_obproxy_rpc_service | No | true | Specifies whether to enable the RPC service in ODP. The default value is `true`, which means that the RPC service is enabled. The value `false` means that the RPC service is disabled. In this case, the `rpc_listen_port` parameter does not take effect.<br>Note:<br>- This parameter takes effect only when ODP V4.3.0 or later is deployed.<br>- If this parameter is not configured, the default value `false` is used after ODP is upgraded. |
| vip_address | No | N/A | When you deploy a multi-node ODP, you can use this parameter to configure the virtual IP address (VIP).<br>Notice:<br>- obd supports VIP configuration for ODP starting from V3.3.0.<br>- obd does not provide load balancing deployment. To set up load balancing, deploy and configure the corresponding load balancer in advance. |
| vip_port | No | N/A | When you deploy a multi-node ODP, you can use this parameter to configure the VIP access port.<br>Note:<br>- obd supports VIP configuration for ODP starting from V3.3.0.<br>- obd does not provide load balancing deployment. To set up load balancing, deploy and configure the corresponding load balancer in advance. |
| home_path | Yes | N/A | The installation path of ODP. We recommend that you set it to a path in the home directory of the admin user. |
| client_session_id_version | No | 2 | Specifies whether to use the new logic to generate client session IDs. The value is an integer in the range [1, 2]. The default value is 2, which means that the new logic is used. |
| proxy_id | No | 0 | The ID of an ODP, which ensures that client session IDs generated by different ODPs do not conflict. The value range of this parameter depends on the value of `client_session_id_version`:<br>- If `client_session_id_version` is set to `1`, the value range of `proxy_id` is [0, 255].<br>- If `client_session_id_version` is set to `2`, the value range of `proxy_id` is [0, 8191]. |
| obproxy_sys_password | No | Empty by default for obd V2.0.0 and earlier<br>A random string by default for obd V2.1.0 and later | The password of the ODP administrator account (root@proxysys). If you do not specify this parameter in obd V2.1.0 or later, a random string is automatically generated. |
| observer_sys_password | No | Empty by default for obd V2.0.0 and earlier<br>A random string the same as `proxyro_password` by default for obd V2.1.0 and later | The password of the account (proxyro@sys) for connecting to the OceanBase cluster through ODP. Set this password to the same value as `proxyro_password`. If you do not specify this parameter in obd V2.1.0 or later, a random string the same as `proxyro_password` is automatically generated. |

Modify the monitoring components.

```yaml
obagent:
  depends:
    - oceanbase-ce
  # The list of servers to be monitored. This list is consistent with the servers in oceanbase-ce.
  servers:
    - name: server1
      ip: 10.10.10.1
    - name: server2
      ip: 10.10.10.2
    - name: server3
      ip: 10.10.10.3
  global:
    monagent_http_port: 8088
    mgragent_http_port: 8089
    home_path: /home/admin/obagent
    http_basic_auth_password: ********
prometheus:
  servers:
    - 10.10.10.4
  depends:
    - obagent
  global:
    port: 9090
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /home/admin/prometheus
    basic_auth_users:
      # <username>: <password>, the key is the user name and the value is the password.
      admin: ********
grafana:
  servers:
    - 10.10.10.4
  depends:
    - prometheus
  global:
    port: 3000
    login_password: ********
    home_path: /home/admin/grafana
```

| Parameter | Required? | Default value | Description |
|---|---|---|---|
| servers | Yes | N/A | Specify each server in the format of `- name: server identifier (line feed) ip: server IP address`. You can specify multiple servers. The server identifiers must be unique.<br>If the server IP addresses are unique, you can also specify the servers in the `- <ip> (line break) - <ip>` format. In this case, `- <ip>` is equivalent to `- name: server IP address (line break) ip: server IP address`. |
| depends | No | N/A | The component dependency. After configuration, the current component automatically obtains information from the dependent component. This parameter is required when a dependency exists between components. |
| home_path | Yes | N/A | The installation path of the component. We recommend that you use a path under the admin user. |
| monagent_http_port | Yes | 8088 | The OBAgent monitoring service port. |
| mgragent_http_port | Yes | 8089 | The OBAgent management service port. |
| http_basic_auth_password | No | A random string | The HTTP service authentication password. A custom password must contain at least one character and can contain digits (0-9), uppercase letters (A-Z), lowercase letters (a-z), and the special characters `~^*{}[]_-+`. If this parameter is not configured, obd automatically generates a random string. You can view the password in the configuration file by running the `obd cluster edit-config` command after deployment. |
| basic_auth_users | No | `admin: *****`, where `admin` is the username and `*****` is a randomly generated password | The authentication information for the Prometheus web service. The key is the username and the value is the password. |
| login_password | No | A random string | The Grafana login password. You can view the password in the configuration file by running the `obd cluster edit-config` command after deployment. |
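As a quick cross-check of the sizing values used in the sample configuration in this topic, the following sketch verifies two relationships that should hold on each OBServer node: `system_memory` stays below `memory_limit`, and the combined data and log file sizes fit the intended data disk. The arithmetic is illustrative only and is not a validation step performed by obd.

```shell
# Sizing values from the sample configuration in this topic (in GB).
memory_limit=64
system_memory=30
datafile_size=192
log_disk_size=192

if [ "$system_memory" -lt "$memory_limit" ]; then
  echo "memory settings consistent"
else
  echo "error: system_memory must be below memory_limit"
fi
echo "initial disk footprint: $((datafile_size + log_disk_size)) GB"
```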
Step 3: Deploy the OceanBase cluster
Note
For more information about the commands used in this section, see Cluster commands in obd Documentation.
Deploy an OceanBase cluster.
[admin@test001 ~]$ obd cluster deploy obtest -c deploy.yaml

When your server is connected to the Internet, after you run the `obd cluster deploy` command, obd checks whether the target server has the required installation package. If not, obd automatically obtains it from the YUM source.

Start the OceanBase cluster.

[admin@test001 ~]$ obd cluster start obtest

After a successful start, the access addresses of the obshell Dashboard and monitoring components are displayed. On Alibaba Cloud or other cloud environments, the installation program may fail to obtain a public IP address and return an intranet address instead. This is not a public address. Make sure that you use the correct address.
View the status of the OceanBase cluster.
# View the list of clusters managed by obd.
[admin@test001 ~]$ obd cluster list
# View the status of the obtest cluster.
[admin@test001 ~]$ obd cluster display obtest

(Optional) Modify the cluster configurations.
OceanBase Database has hundreds of parameters and some are coupled. We recommend that you do not modify parameters in the sample configuration file before you become familiar with OceanBase Database. The following example shows how to modify a parameter and make it take effect:
Run the `obd cluster edit-config` command to enter the edit mode before you can edit the cluster configurations.

[admin@test001 ~]$ obd cluster edit-config obtest

After you modify and save the configurations and exit, obd will tell you how to make the changes take effect. The output after saving is as follows:

Search param plugin and load ok
Search param plugin and load ok
Parameter check ok
Save deploy "obtest" configuration
Use `obd cluster reload obtest` to make changes take effect.

Copy and run the command provided by obd.
[admin@test001 ~]$ obd cluster reload obtest
Step 4: Connect to the OceanBase cluster
The following uses OBClient as an example to describe how to connect to the OceanBase cluster:
obclient -h<IP> -P<PORT> -u<user_name>@<tenant_name>#<cluster_name> -p -c -A
# example
obclient -h10.10.10.4 -P2883 -uroot@sys#obdemo -p -c -A
Parameter description:
- `-h`: The IP address for connecting to OceanBase Database. For direct connection, use the OBServer node address. For connection through ODP, use the ODP address.
- `-u`: The account for connecting to OceanBase Database. Supported formats are `username@tenant_name#cluster_name`, `cluster_name:tenant_name:username`, `cluster_name-tenant_name-username`, and `cluster_name.tenant_name.username`. The default administrator username for a MySQL-compatible tenant is `root`.

  Notice

  The cluster name used for connection is the one specified by `appname` in the configuration file, instead of the one specified by `deploy name` during deployment.

- `-P`: The port for connecting to OceanBase Database. For direct connection, use the value of the `mysql_port` parameter. For connection through ODP, use the value of the `listen_port` parameter.
- `-p`: The password for connecting to OceanBase Database.
- `-c`: Specifies not to ignore comments in the OBClient runtime environment.

  Note

  Hints are special comments that are not affected by the `-c` option.

- `-A`: Specifies not to automatically retrieve statistics when connecting to OceanBase Database by using OBClient.
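To avoid mistyping the connection string, you can assemble it from the deployment values. The helper below is hypothetical (not part of OBClient or obd); it simply formats the same command shown in the example above, using the ODP address and `listen_port` from this topic.

```shell
# Hypothetical helper: build the OBClient command line from its parts.
build_conn_cmd() {
  host=$1; port=$2; user=$3; tenant=$4; cluster=$5
  printf 'obclient -h%s -P%s -u%s@%s#%s -p -c -A\n' \
    "$host" "$port" "$user" "$tenant" "$cluster"
}
build_conn_cmd 10.10.10.4 2883 root sys obdemo
# → obclient -h10.10.10.4 -P2883 -uroot@sys#obdemo -p -c -A
```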
For more information about how to connect to an OceanBase cluster, see Connect to OceanBase Database.
Step 5: Create a user tenant
After the OceanBase cluster is deployed, we recommend that you create a user tenant to perform business operations. The sys tenant is intended only for cluster management and is unsuitable for business scenarios.
You can use one of the following methods to create a user tenant:
Method 1: Create a user tenant by using obd.
obd cluster tenant create <deploy name> [-n <tenant name>] [flags]

# example
obd cluster tenant create test -n obmysql --max-cpu=2 --memory-size=2G --log-disk-size=3G --max-iops=10000 --iops-weight=2 --unit-num=1 --charset=utf8

By default, this command creates a tenant based on all remaining available resources of the cluster. You can configure the parameters to allocate a specific amount of resources to the tenant. For more information about this command, see obd cluster tenant create in Cluster commands.
Method 2: Create a user tenant by using the CLI. For more information, see Create a tenant.
Related operations
Manage clusters
You can run the following commands to manage OceanBase clusters deployed by using obd. For more information, see Cluster commands in obd Documentation.
# View the cluster list.
obd cluster list
# View the status of a cluster. The following takes the `obtest` cluster as an example.
obd cluster display obtest
# Stop a running cluster. The following takes the `obtest` cluster as an example.
obd cluster stop obtest
# Destroy a deployed cluster. The following takes the `obtest` cluster as an example.
obd cluster destroy obtest
Uninstall the all-in-one package
To uninstall the all-in-one package, perform the following steps:
Uninstall the package.
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./uninstall.sh

Delete environment variables.

[admin@test001 bin]$ vim ~/.bash_profile
# Delete the source /home/admin/.oceanbase-all-in-one/bin/env.sh line from the file.

Make the modification take effect.

After you delete the environment variables, you must log in to the terminal again or run the `source` command to make the modification take effect. Here is an example:

[admin@test001 ~]$ source ~/.bash_profile