This topic describes how to use the upgrade script to upgrade an OceanBase cluster.
Notice
OceanBase Database does not support upgrading from V4.2.x or earlier to V4.3.x.
Prerequisites
You have installed Python 2 and a Python 2-compatible mysql.connector module on all OBServer nodes.
Notice
If the OceanBase cluster is associated with an arbitration service, you must upgrade the arbitration service before upgrading the OceanBase cluster. For more information about how to upgrade the arbitration service, see Upgrade the arbitration service.
You can query the system tenant view oceanbase.DBA_OB_ARBITRATION_SERVICE to check whether the OceanBase cluster is associated with an arbitration service. For more information about the upgrade, see Overview.
Procedure
Notice
When you run the upgrade script in the following steps, the specified -u parameter must be a user with read and write permissions and the SUPER privilege in the sys tenant.
- Analyze the oceanbase_upgrade_dep.yml file.
- Confirm the upgrade process.
- Back up the cluster parameters.
- Upload and install the target RPM package.
- Run the upgrade_checker.py script.
- Run the upgrade_pre.py script.
- Upgrade the cluster.
- Run the upgrade_post.py script.
- Restore the cluster parameters.
Step 1: Analyze the oceanbase_upgrade_dep.yml file
After decompressing the OceanBase RPM package of the target version, you can obtain the oceanbase_upgrade_dep.yml file, which records the supported upgrade paths between OceanBase cluster versions. The steps are as follows:
Obtain the RPM installation package of the target version of OceanBase.
You can click Resources in the upper-left corner of the OceanBase website and choose Download Center to obtain the RPM package of the target version. If you cannot find the RPM package of the target version, contact Technical Support for assistance.
Decompress the OceanBase RPM package.
You can use the following command to decompress the OceanBase RPM package to the current directory:
[xxx@xxx $rpm_dir]# sudo rpm2cpio $rpm_name | cpio -div

In the command, $rpm_name indicates the name of the RPM package.

Note
After you decompress the RPM package, the home and usr directories are generated in the directory where the RPM package is stored. The upgrade-related scripts and the oceanbase_upgrade_dep.yml file are stored in the home/admin/oceanbase/etc directory.

Analyze the oceanbase_upgrade_dep.yml file.

The following list describes the attributes in the oceanbase_upgrade_dep.yml file:

- version: the current version of OceanBase Database, referred to as the current version.
- can_be_upgraded_to: the version to which the current version can be upgraded, referred to as the target version.
- deprecated: indicates whether the current version can be used as a target version. If the value is true, the current version cannot be used as a target version. For example, the 4.1.0.0-100000982023031415 version in the following example cannot be used as a target version.
- require_from_binary: indicates whether the cluster must first be upgraded to the current version before it can be upgraded to the target version. In other words, whether the current version is a barrier version in the upgrade sequence.
  - value: when the value is true, it is used together with the when_come_from attribute.
  - when_come_from: a list. If when_come_from is not defined, a cluster of any version must be upgraded to the current barrier version before it can be upgraded to the target version. If when_come_from is defined, only clusters of the listed versions must be upgraded to the current barrier version first. For example, in the following example, clusters of versions 4.0.0.0 and 4.1.0.0-100000982023031415 must be upgraded to the V4.1.0.1 (barrier) version before they can be upgraded to the V4.2.0.0 version.
Here is an example:
Assume that the oceanbase_upgrade_dep.yml file contains the following upgrade dependencies of the OceanBase cluster:

- version: 4.0.0.0
  can_be_upgraded_to:
    - 4.1.0.0
- version: 4.1.0.0-100000982023031415
  can_be_upgraded_to:
    - 4.1.0.0
  deprecated: True
- version: 4.1.0.0
  can_be_upgraded_to:
    - 4.1.0.1
- version: 4.1.0.1
  can_be_upgraded_to:
    - 4.2.0.0
  require_from_binary:
    value: True
    when_come_from: [4.0.0.0, 4.1.0.0-100000982023031415]
# 4.3.0.x
- version: 4.3.0.0
  can_be_upgraded_to:
    - 4.3.0.1
- version: 4.3.0.1
  can_be_upgraded_to:
    - 4.3.1.0

By analyzing the example file, you can obtain the following upgrade sequences:
- V4.0.0.0 > V4.1.0.0 > V4.1.0.1 (barrier) > V4.2.0.0
- V4.1.0.0-100000982023031415 > V4.1.0.0 > V4.1.0.1 (barrier) > V4.2.0.0
- V4.1.0.0 > V4.1.0.1 > V4.2.0.0
- V4.3.0.0 > V4.3.0.1 > V4.3.1.0
Notice
If the upgrade sequence contains a barrier version, you must also obtain the OceanBase RPM packages of all barrier versions in the upgrade path.
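The path analysis above can be sketched in a short script. The following is a minimal, illustrative Python sketch, not part of OceanBase's official tooling: the DEPS list hard-codes the example dependencies, and the function names upgrade_sequence and barriers are assumptions for illustration. It assumes the dependency graph has no cycles, as in the example file.

```python
from collections import deque

# Dependency entries mirroring the example oceanbase_upgrade_dep.yml file.
DEPS = [
    {"version": "4.0.0.0", "can_be_upgraded_to": ["4.1.0.0"]},
    {"version": "4.1.0.0-100000982023031415", "can_be_upgraded_to": ["4.1.0.0"],
     "deprecated": True},
    {"version": "4.1.0.0", "can_be_upgraded_to": ["4.1.0.1"]},
    {"version": "4.1.0.1", "can_be_upgraded_to": ["4.2.0.0"],
     "require_from_binary": {"value": True,
                             "when_come_from": ["4.0.0.0",
                                                "4.1.0.0-100000982023031415"]}},
    {"version": "4.3.0.0", "can_be_upgraded_to": ["4.3.0.1"]},
    {"version": "4.3.0.1", "can_be_upgraded_to": ["4.3.1.0"]},
]

def upgrade_sequence(current, target, deps=DEPS):
    """Breadth-first search over can_be_upgraded_to edges from `current`
    to `target`, skipping deprecated versions as intermediate targets."""
    by_version = {d["version"]: d for d in deps}
    queue = deque([[current]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in by_version.get(path[-1], {}).get("can_be_upgraded_to", []):
            if by_version.get(nxt, {}).get("deprecated"):
                continue  # deprecated versions cannot be upgrade targets
            queue.append(path + [nxt])
    return None

def barriers(path, deps=DEPS):
    """Return the versions on `path` that act as barriers for its start."""
    by_version = {d["version"]: d for d in deps}
    result = []
    for v in path[1:-1]:
        rfb = by_version.get(v, {}).get("require_from_binary")
        if rfb and rfb.get("value"):
            come_from = rfb.get("when_come_from")
            if come_from is None or path[0] in come_from:
                result.append(v)
    return result

path = upgrade_sequence("4.0.0.0", "4.2.0.0")
print(" > ".join(path))  # 4.0.0.0 > 4.1.0.0 > 4.1.0.1 > 4.2.0.0
print(barriers(path))    # ['4.1.0.1'] -- also obtain this barrier RPM
```

Running the sketch reproduces the first upgrade sequence listed above, including the V4.1.0.1 barrier whose RPM package must also be obtained.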
Step 2: Confirm the upgrade process
Notice
If the current version of the OceanBase cluster and the target version are separated by a barrier version, the OceanBase cluster must be upgraded to the barrier version first and then to the target version.
Based on the upgrade sequences in Step 1, you can obtain the following information:
If you are using an OceanBase cluster of the V4.0.0.0 or V4.1.0.0-100000982023031415 version, you must upgrade the OceanBase cluster to the V4.1.0.1 (barrier) version first and then to the V4.2.0.0 version.
If you are using an OceanBase cluster of the V4.1.0.0 or V4.1.0.1 version, you can directly upgrade the OceanBase cluster to the V4.2.0.0 version.
If you are using an OceanBase cluster of the V4.3.0.0 or V4.3.0.1 version, you can directly upgrade the OceanBase cluster to the V4.3.1.0 version.
Here is an example:
The following example describes the upgrade process of a three-zone OceanBase cluster.
The upgrade path contains a barrier version. If the current version of the OceanBase cluster is V4.0.0.0, the upgrade process is as follows:
- Upgrade Zone 1, Zone 2, and Zone 3 in turn from the V4.0.0.0 version to the V4.1.0.1 (barrier) version. At this point, the entire cluster has been upgraded from V4.0.0.0 to V4.1.0.1 (barrier).
- Then upgrade Zone 1, Zone 2, and Zone 3 in turn from the V4.1.0.1 (barrier) version to the V4.2.0.0 version. After all zones are upgraded to V4.2.0.0, the cluster upgrade is complete.
The upgrade path does not contain a barrier version. If the current version of the OceanBase cluster is V4.3.0.0, the upgrade process is as follows:
Upgrade Zone 1, Zone 2, and Zone 3 in turn from the V4.3.0.0 version to the V4.3.1.0 version. After all zones are upgraded to V4.3.1.0, the cluster upgrade is complete.
Step 3: Back up cluster configuration items
Before you start the upgrade process, you need to back up the values of the following configuration items to restore them after the upgrade:
In the sys tenant, execute the following SQL statement to obtain the values of the configuration items to be backed up:
SELECT SVR_IP,ZONE,NAME,VALUE FROM OCEANBASE.GV$OB_PARAMETERS WHERE NAME IN ("server_permanent_offline_time", "enable_rebalance", "enable_rereplication");
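The backup can also be scripted with the mysql.connector module listed in the prerequisites, so the values survive until Step 9. This is a minimal sketch under assumptions: the backup_parameters helper and the param_backup.json file name are illustrative, not part of OceanBase's tooling, and the connection must point at the sys tenant.

```python
import json

# The query from this step; it backs up the three parameters restored in Step 9.
BACKUP_SQL = (
    "SELECT SVR_IP, ZONE, NAME, VALUE FROM oceanbase.GV$OB_PARAMETERS "
    "WHERE NAME IN ('server_permanent_offline_time', "
    "'enable_rebalance', 'enable_rereplication')"
)

def backup_parameters(conn, path="param_backup.json"):
    """Run BACKUP_SQL over an open mysql.connector connection to the sys
    tenant and save the rows to a JSON file for use in Step 9."""
    cur = conn.cursor()
    cur.execute(BACKUP_SQL)
    rows = [
        {"svr_ip": r[0], "zone": r[1], "name": r[2], "value": r[3]}
        for r in cur.fetchall()
    ]
    with open(path, "w") as f:
        json.dump(rows, f, indent=2)
    return rows
```

For example, passing the result of mysql.connector.connect(host='127.0.0.1', port=2881, user='root@sys', password='******') to backup_parameters would write the backup file next to the script.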
Step 4: Upload and install the target RPM package
Use the scp command to upload the OceanBase RPM package required for the upgrade to the OBServer node.

scp $rpm_name admin@$observer_ip:/$rpm_dir

Parameter description:

- $rpm_name: the name of the RPM package.
- $observer_ip: the IP address of the OBServer node.
- $rpm_dir: the directory where the RPM package is stored.
Here is an example:

scp oceanbase-4.2.0.0-100010022023081911.el7.x86_64.rpm admin@10.10.10.1:/home/admin/rpm

Install the OceanBase RPM package by using the following command.
Notice
- If there is a barrier version in your upgrade path, install the corresponding barrier version RPM package first. After the cluster is successfully upgraded to the barrier version, install the target version RPM package.
- If there are multiple barrier versions in the upgrade path, install and upgrade them in sequence until the cluster is successfully upgraded to the last barrier version, and then install the target version RPM package.
rpm -Uvh $rpm_name

In the preceding command, $rpm_name indicates the name of the RPM package.

Note
The upgrade-related scripts and the oceanbase_upgrade_dep.yml file are stored in the /home/admin/oceanbase/etc directory. /home/admin/oceanbase is the default installation directory of OceanBase Database.

Repeat steps 1 and 2 until the target version RPM package is installed on all OBServer nodes.
Step 5: Execute the upgrade_checker.py script
On any OBServer node, specify the login information of the sys tenant to directly connect to the OBServer node and execute the upgrade_checker.py script for pre-upgrade checks. If the script is executed successfully, you can proceed with the upgrade. The command is as follows:
python upgrade_checker.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password
Parameter description:
- $user_name: a user who has the permissions to read, write, and modify global system parameters in the sys tenant.
- $password: the password of the user.
Here is an example:
Use the following command to switch to the /home/admin/oceanbase/etc directory.

cd /home/admin/oceanbase/etc

Use the following command to execute the upgrade_checker.py script for pre-upgrade checks.

[root@xxx /home/admin/oceanbase/etc]# python upgrade_checker.py -h 127.0.0.1 -P 2881 -u root@sys -p******
Step 6: Execute the upgrade_pre.py script
On any OBServer node, specify the login information of the sys tenant to directly connect to the OBServer node and execute the upgrade_pre.py script. If the script is executed successfully, you can proceed with the upgrade. The command is as follows:
python upgrade_pre.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password
Parameter description:
- $user_name: a user who has the permissions to read, write, and modify global system parameters in the sys tenant.
- $password: the password of the user.
The upgrade_pre.py script executes the following commands and actions:
- alter system begin upgrade
- alter system begin rolling upgrade
- special pre action: disable and adjust configuration items.
- health check: cluster-level health check.
The complete usage information of the upgrade_pre.py script is as follows:
./upgrade_pre.py [OPTIONS]
-I, --help Display this help and exit.
-V, --version Output version information and exit.
-h, --host=name Connect to host.
-P, --port=name Port number to use for connection.
-u, --user=name User for login.
-p, --password=name Password to use when connecting to server. If password is
not given it's empty string "".
-t, --timeout=name Cmd/Query execute timeout(s).
-m, --module=name Modules to run. Modules should be a string combined by some of
the following strings:
1. begin_upgrade
2. begin_rolling_upgrade
3. special_action
4. health_check
5. all: "all" represents that all modules should be run.
They are splitted by ",".
For example: -m all, or --module=begin_upgrade,begin_rolling_upgrade,special_action
-l, --log-file=name Log file path. If log file path is not given it's ./upgrade_pre.log
Maybe you want to run cmd like that:
./upgrade_pre.py -h 127.0.0.1 -P 2881 -u ****** -p ******
Here is an example:
Use the following command to switch to the /home/admin/oceanbase/etc directory.

cd /home/admin/oceanbase/etc

Use the following command to execute the upgrade_pre.py script for pre-upgrade processing.

[root@xxx /home/admin/oceanbase/etc]# python upgrade_pre.py -h 127.0.0.1 -P 2881 -u root@sys -p******
Step 7: Upgrade the cluster
Upgrade the cluster zone by zone, repeating the following steps for each zone. The following example uses zone1.
Run the following command to check the cluster health.
python upgrade_health_checker.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password

Parameters:

- $user_name: a user with the permissions to read, write, and modify global system parameters in the sys tenant.
- $password: the password of the user.
Here is an example:
Run the following command to switch to the /home/admin/oceanbase/etc directory.

cd /home/admin/oceanbase/etc

Run the following command to execute the upgrade_health_checker.py script to check the cluster health.

[root@xxx /home/admin/oceanbase/etc]# python upgrade_health_checker.py -h 127.0.0.1 -P 2881 -u root@sys -p******
Stop the zone (STOP ZONE).

Notice
For a standalone or single-replica OceanBase Database, you do not need to stop the zone (STOP ZONE).

Log in to the sys tenant of the cluster as the root user, and run the following command to stop the zone, where $zone_name is the name of the zone:

ALTER SYSTEM STOP ZONE '$zone_name';

The STOP ZONE command returns successfully only after all leader replicas have been switched away from the zone.

Here is an example:

obclient [(none)]> ALTER SYSTEM STOP ZONE 'zone1';

You can run the following SQL statement to check the status of the zone.

obclient [(none)]> SELECT ZONE,STATUS FROM oceanbase.DBA_OB_ZONES;

The return result is as follows:

+-------+----------+
| ZONE  | STATUS   |
+-------+----------+
| zone1 | INACTIVE |
| zone2 | ACTIVE   |
| zone3 | ACTIVE   |
+-------+----------+
3 rows in set

Stop the processes on the servers in the zone, replace the binaries with the new version, and restart the processes.
Stop the observer process and then restart it.

Switch to the admin user.

[root@xxx /home/admin/oceanbase/etc]# su - admin

Stop the observer process.

-bash-4.2$ kill -9 `pidof observer`

Restart the observer process.

-bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer

Note
When you restart the observer process, you do not need to specify the startup parameters, because the previous startup parameters have been written to the parameter file.
Run the zone-level health check.
The zone-level health check reuses the cluster-level health check logic and mainly verifies the status of all servers in the specified zone. The command is as follows:
python upgrade_health_checker.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password -z '$zone_name'

Parameters:

- $user_name: a user with the permissions to read, write, and modify global system parameters in the sys tenant.
- $password: the password of the user.
- $zone_name: the name of the zone.
Here is an example:
Run the following command to switch to the /home/admin/oceanbase/etc directory.

cd /home/admin/oceanbase/etc

Run the following command to execute the upgrade_health_checker.py script to check the zone health.

[root@xxx /home/admin/oceanbase/etc]# python upgrade_health_checker.py -h 127.0.0.1 -P 2881 -u root@sys -p****** -z 'zone1'
Start the zone (START ZONE).

Log in to the sys tenant of the cluster as the root user, and run the following command to start the zone, where $zone_name is the name of the zone:

ALTER SYSTEM START ZONE '$zone_name';

Here is an example:

obclient [(none)]> ALTER SYSTEM START ZONE 'zone1';

Repeat steps 1 to 5 until all zones are upgraded.
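The five per-zone steps can be summarized as a dry-run command plan. The Python sketch below is illustrative only: zone_upgrade_plan is a hypothetical helper that lists what would be executed for each zone, not an official script. In practice, the SQL statements run through a sys-tenant session while the process commands run on each node as the admin user.

```python
def zone_upgrade_plan(zone, host="127.0.0.1", port=2881, user="root@sys"):
    """Return the ordered commands for upgrading one zone (Step 7)."""
    health_check = (
        "python upgrade_health_checker.py -h %s -P %d -u %s -p$password"
        % (host, port, user)
    )
    return [
        health_check,                                  # 1. cluster-level health check
        "ALTER SYSTEM STOP ZONE '%s';" % zone,         # 2. stop the zone
        "kill -9 `pidof observer`",                    # 3. stop the observer process
        "cd /home/admin/oceanbase && ./bin/observer",  # 3. restart with the new binary
        health_check + " -z '%s'" % zone,              # 4. zone-level health check
        "ALTER SYSTEM START ZONE '%s';" % zone,        # 5. start the zone
    ]

# Repeat the plan for every zone, as in the three-zone example from Step 2.
for zone in ("zone1", "zone2", "zone3"):
    print("\n".join(zone_upgrade_plan(zone)) + "\n")
```

Printing the plan for each zone makes it easy to review the rolling order before touching any server.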
Step 8: Run the upgrade_post.py script
On any OBServer node, specify the login information of the sys tenant to directly connect to the OBServer node and execute the upgrade_post.py script to complete the main upgrade operations and related checks. The command is as follows:
python upgrade_post.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password
Notice
When you run the upgrade_post.py script, if an error occurs, the cause is likely that the RS leader has changed. We recommend that you troubleshoot and fix the issue by following these steps:
- Run the SELECT * FROM oceanbase.DBA_OB_TENANT_JOBS WHERE job_type = 'upgrade_all' ORDER BY job_id DESC LIMIT 1; statement to verify the status of the upgrade task. If JOB_STATUS is inprogress and MODIFY_TIME is close to the current time, you can manually execute the ALTER SYSTEM RUN UPGRADE JOB 'UPGRADE_ALL'; statement to resolve the issue.
- Run the upgrade script again.
Parameter description:
- $user_name: a user who has the permissions to read, write, and modify global system parameters in the sys tenant.
- $password: the password of the user.
The upgrade_post.py script performs the following upgrade actions:
1. Cluster-level health check.
2. Execute the alter system end rolling upgrade statement.
3. Execute the begin upgrade statement for each tenant.
4. Correct the system variables for each tenant.
5. Correct the system tables for each tenant.
6. Correct the virtual tables and views for each tenant.
7. Execute the version-related upgrade actions for each tenant.
8. Execute the internal table self-check for each tenant.
9. Execute the end upgrade statement for each tenant.
10. Execute the alter system end upgrade statement to end the cluster upgrade status.
11. Execute the post-upgrade check: re-enable the configuration items disabled during the upgrade and perform the checks.
Steps 3 to 5, 7, and 9 are not executed in the following upgrade scenarios:
- Same-version upgrade.
- Cross-version upgrade where the data version of the original version is the same as that of the target version.
The complete usage information of the upgrade_post.py script is as follows:
./upgrade_post.py [OPTIONS]
-I, --help Display this help and exit.
-V, --version Output version information and exit.
-h, --host=name Connect to host.
-P, --port=name Port number to use for connection.
-u, --user=name User for login.
-p, --password=name Password to use when connecting to server. If password is
not given it's empty string "".
-t, --timeout=name Cmd/Query execute timeout(s).
-m, --module=name Modules to run. Modules should be a string combined by some of
the following strings:
1. health_check
2. end_rolling_upgrade
3. tenant_upgrade
4. end_upgrade
5. post_check
6. all: "all" represents that all modules should be run.
They are splitted by ",".
For example: -m all, or --module=health_check,begin_rolling_upgrade
-l, --log-file=name Log file path. If log file path is not given it's ./upgrade_pre.log
Maybe you want to run cmd like that:
./upgrade_post.py -h 127.0.0.1 -P 2881 -u *** -p ******
Here is an example:
Run the following command to switch to the /home/admin/oceanbase/etc directory.

cd /home/admin/oceanbase/etc

Run the following command to execute the upgrade_post.py script to complete the main upgrade operations and related checks.

[root@xxx /home/admin/oceanbase/etc]# python upgrade_post.py -h 127.0.0.1 -P 2881 -u root@sys -p******
Step 9: Restore the cluster configuration items
Based on the old values of the configuration items backed up in Step 3, log in to the sys tenant of the cluster as the root user and run the following command to restore the related configuration items.
ALTER SYSTEM SET $parameter = $parameter_value;
Parameter description:
- $parameter: the name of the configuration item.
- $parameter_value: the value of the configuration item.
Here is an example:
Restore the value of server_permanent_offline_time.

obclient [(none)]> ALTER SYSTEM SET server_permanent_offline_time = '3600s';

Restore the value of enable_rebalance.

obclient [(none)]> ALTER SYSTEM SET enable_rebalance = 'True';

Restore the value of enable_rereplication.

obclient [(none)]> ALTER SYSTEM SET enable_rereplication = 'True';
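If you kept the Step 3 query results as structured rows, generating the restore statements can also be scripted. This is a minimal sketch: the restore_statements helper is illustrative, not part of OceanBase's tooling, and it assumes the three items are cluster-level, so duplicate rows reported by multiple servers collapse into one statement each.

```python
def restore_statements(rows):
    """Build one ALTER SYSTEM statement per backed-up configuration item.
    Rows use the column names of the Step 3 query; only name/value are
    needed because these parameters are set cluster-wide."""
    values = {}
    for row in rows:
        values[row["name"]] = row["value"]
    return ["ALTER SYSTEM SET %s = '%s';" % (name, value)
            for name, value in sorted(values.items())]

# Rows in the shape returned by the Step 3 query (illustrative values).
backup = [
    {"svr_ip": "10.10.10.1", "zone": "zone1",
     "name": "server_permanent_offline_time", "value": "3600s"},
    {"svr_ip": "10.10.10.2", "zone": "zone2",
     "name": "server_permanent_offline_time", "value": "3600s"},
    {"svr_ip": "10.10.10.1", "zone": "zone1",
     "name": "enable_rebalance", "value": "True"},
]
for stmt in restore_statements(backup):
    print(stmt)
# ALTER SYSTEM SET enable_rebalance = 'True';
# ALTER SYSTEM SET server_permanent_offline_time = '3600s';
```

Each generated statement can then be executed in the sys tenant exactly as in the example above.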
What to do next
Post-upgrade checks
To verify whether the upgrade is completed, see Post-upgrade checks.
Reconfigure cgroup
If cgroup is configured for the OceanBase cluster before the upgrade, you must reconfigure cgroup after the upgrade. For more information, see Configure cgroup.
References
- For more information about how to connect to OceanBase Database, see Overview of connection methods.
- For more information about how to start a zone, see Start a zone.
- For more information about how to modify a configuration parameter, see ALTER SYSTEM PARAMETER.
