This topic describes how to use the upgrade script to upgrade an OceanBase cluster.
Notice
OceanBase Database does not support upgrading from V4.2.x or earlier to V4.3.x.
Prerequisites
You have installed Python 2 on all OBServer nodes and installed the mysql.connector module compatible with Python 2.
Notice
If the OceanBase cluster is associated with an arbitration service, you must upgrade the arbitration service before upgrading the OceanBase cluster. For more information about how to upgrade the arbitration service, see Upgrade an arbitration service.
You can query the system tenant view oceanbase.DBA_OB_ARBITRATION_SERVICE to check whether the OceanBase cluster is associated with an arbitration service. For more information about the upgrade, see Overview.
Procedure
Notice
When you execute the upgrade script in the following steps, the specified -u parameter must be a user with read and write permissions and the SUPER privilege in the sys tenant.
1. Analyze the oceanbase_upgrade_dep.yml file.
2. Confirm the upgrade process.
3. Back up the cluster configuration items.
4. Upload and install the RPM package of the target version.
5. Execute the upgrade_checker.py script.
6. Execute the upgrade_pre.py script.
7. Upgrade the cluster.
8. Execute the upgrade_post.py script.
9. Restore the cluster configuration items.
Step 1: Analyze the oceanbase_upgrade_dep.yml file
After you decompress the RPM package of the target OceanBase Database version, you can obtain the oceanbase_upgrade_dep.yml file, which records the supported upgrade paths between OceanBase cluster versions. The steps are as follows:
1. Obtain the RPM package of the target OceanBase Database version.

   Click Resources > Download Center in the upper-left corner of the OceanBase website to obtain the RPM package of the corresponding version. If the RPM package of the corresponding version is not available there, contact Technical Support to obtain it.

2. Decompress the OceanBase RPM package.

   You can use the following command to decompress the OceanBase RPM package into the current directory:

   [xxx@xxx $rpm_dir]# sudo rpm2cpio $rpm_name | cpio -div

   In this command, $rpm_name indicates the name of the RPM package.

   Note

   After the RPM package is decompressed, the home and usr directories are generated in the directory where the RPM package is stored. The upgrade scripts and the oceanbase_upgrade_dep.yml file are stored in the home/admin/oceanbase/etc directory.

3. Analyze the oceanbase_upgrade_dep.yml file.

   The relevant attributes in the oceanbase_upgrade_dep.yml file have the following meanings:

   - version: the current OceanBase Database version, referred to as the current version.
   - can_be_upgraded_to: the target version to which the current version can be upgraded, referred to as the target version.
   - deprecated: whether the current version can serve as the target version of an upgrade. If it is true, the current version cannot be used as a target version. For example, version 4.1.0.0-100000982023031415 in the example below cannot be used as a target version.
   - require_from_binary: whether the upgrade sequence must pass through the current version on the way to the target version, that is, whether the current version is a barrier version in the upgrade sequence.
     - value: when set to true, it takes effect together with the when_come_from attribute.
     - when_come_from: a list. If when_come_from is not defined, any version must be upgraded to the current barrier version before being upgraded to the target version. If when_come_from is defined, only the versions in the list must be upgraded to the current barrier version first. For example, in the example below, clusters of versions 4.0.0.0 and 4.1.0.0-100000982023031415 must be upgraded to V4.1.0.1 (barrier) before being upgraded to V4.2.0.0.
Example:
Assume that the upgrade dependencies between OceanBase cluster versions in the oceanbase_upgrade_dep.yml file are as follows:

- version: 4.0.0.0
  can_be_upgraded_to:
    - 4.1.0.0
- version: 4.1.0.0-100000982023031415
  can_be_upgraded_to:
    - 4.1.0.0
  deprecated: True
- version: 4.1.0.0
  can_be_upgraded_to:
    - 4.1.0.1
- version: 4.1.0.1
  can_be_upgraded_to:
    - 4.2.0.0
  require_from_binary:
    value: True
    when_come_from: [4.0.0.0, 4.1.0.0-100000982023031415]
# 4.3.0.x
- version: 4.3.0.0
  can_be_upgraded_to:
    - 4.3.0.1
- version: 4.3.0.1
  can_be_upgraded_to:
    - 4.3.1.0

By analyzing the example file, you can obtain the following upgrade sequences:
- V4.0.0.0 > V4.1.0.0 > V4.1.0.1(barrier) > V4.2.0.0
- V4.1.0.0-100000982023031415 > V4.1.0.0 > V4.1.0.1(barrier) > V4.2.0.0
- V4.1.0.0 > V4.1.0.1 > V4.2.0.0
- V4.3.0.0 > V4.3.0.1 > V4.3.1.0
Notice
If the upgrade sequence contains a barrier version, you must also obtain the OceanBase RPM packages of all barrier versions in the upgrade path.
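To make the dependency rules concrete, the following Python sketch derives an upgrade path from the parsed contents of oceanbase_upgrade_dep.yml. This is an illustration only, not part of the OceanBase tooling: the data structure below mirrors the example file above in already-parsed form (so no YAML library is needed), the function names are invented, and the barrier check is simplified to testing whether the path's starting version appears in when_come_from.

```python
# Parsed form of the example oceanbase_upgrade_dep.yml shown above.
UPGRADE_DEPS = [
    {"version": "4.0.0.0", "can_be_upgraded_to": ["4.1.0.0"]},
    {"version": "4.1.0.0-100000982023031415",
     "can_be_upgraded_to": ["4.1.0.0"], "deprecated": True},
    {"version": "4.1.0.0", "can_be_upgraded_to": ["4.1.0.1"]},
    {"version": "4.1.0.1", "can_be_upgraded_to": ["4.2.0.0"],
     "require_from_binary": {
         "value": True,
         "when_come_from": ["4.0.0.0", "4.1.0.0-100000982023031415"]}},
    {"version": "4.3.0.0", "can_be_upgraded_to": ["4.3.0.1"]},
    {"version": "4.3.0.1", "can_be_upgraded_to": ["4.3.1.0"]},
]

def find_upgrade_path(deps, current, target):
    """Return the shortest version sequence from `current` to `target`,
    never passing through a deprecated version as an intermediate hop."""
    edges = {}
    deprecated = set()
    for entry in deps:
        edges.setdefault(entry["version"], []).extend(
            entry.get("can_be_upgraded_to", []))
        if entry.get("deprecated"):
            deprecated.add(entry["version"])
    # Breadth-first search over the upgrade graph.
    queue = [[current]]
    while queue:
        path = queue.pop(0)
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt in deprecated or nxt in path:
                continue
            queue.append(path + [nxt])
    return None

def barrier_versions(deps, path):
    """Versions on `path` that must be fully installed before moving on."""
    barriers = []
    for entry in deps:
        rule = entry.get("require_from_binary")
        if not (rule and rule.get("value")):
            continue
        come_from = rule.get("when_come_from")
        if entry["version"] in path and (come_from is None or path[0] in come_from):
            barriers.append(entry["version"])
    return barriers
```

For the example file, this reproduces the sequences listed above, including the V4.1.0.1 barrier on the path from V4.0.0.0 to V4.2.0.0.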
Step 2: Confirm the upgrade process
Notice
If there is a barrier version between the current version and the target version of the OceanBase cluster, the OceanBase cluster must be upgraded to the barrier version first, and then from the barrier version to the target version.
Based on the upgrade sequence in Step 1, the following information can be obtained:
- If you are currently using an OceanBase cluster of version V4.0.0.0 or V4.1.0.0-100000982023031415, you must first upgrade the cluster to V4.1.0.1 (barrier), and then upgrade it from V4.1.0.1 (barrier) to V4.2.0.0.
- If you are currently using an OceanBase cluster of version V4.1.0.0 or V4.1.0.1, you can directly upgrade the cluster to V4.2.0.0.
- If you are currently using an OceanBase cluster of version V4.3.0.0 or V4.3.0.1, you can directly upgrade the cluster to V4.3.1.0.
Example:
The following example describes the upgrade process for a three-zone OceanBase cluster.
The upgrade path contains a barrier version. If the current OceanBase cluster version is V4.0.0.0, the upgrade process is as follows:
- First upgrade Zone1 from V4.0.0.0 to V4.1.0.1 (barrier), then Zone2, and then Zone3. At this point, the entire cluster has been upgraded from V4.0.0.0 to V4.1.0.1 (barrier).
- Then upgrade Zone1, Zone2, and Zone3 in sequence from V4.1.0.1 (barrier) to V4.2.0.0. After all zones are upgraded to V4.2.0.0, the cluster upgrade is complete.
The upgrade path does not contain a barrier version. If the current OceanBase cluster version is V4.3.0.0, the upgrade process is as follows:
First upgrade Zone1 from V4.3.0.0 to V4.3.1.0, then Zone2, and then Zone3. After all zones are upgraded to V4.3.1.0, the cluster upgrade is complete.
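The zone-by-zone ordering described above can be sketched in a few lines. The helper below is purely illustrative (the function name is invented, not part of the OceanBase tooling): it expands an upgrade path into per-zone actions, with every zone reaching a hop version before any zone moves on to the next hop.

```python
def rolling_upgrade_plan(zones, version_hops):
    """Expand an upgrade path into per-zone actions: every zone is
    upgraded to a hop version before any zone moves to the next hop."""
    return [(zone, version) for version in version_hops for zone in zones]

# Barrier path from the example above: V4.0.0.0 -> V4.1.0.1 (barrier) -> V4.2.0.0.
plan = rolling_upgrade_plan(["zone1", "zone2", "zone3"], ["4.1.0.1", "4.2.0.0"])
```

For a three-zone cluster starting at V4.0.0.0, the plan upgrades all three zones to the V4.1.0.1 barrier before any zone is moved to V4.2.0.0, matching the example.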
Step 3: Back up cluster configuration items
Before you start the upgrade, back up the values of the following configuration items so that you can restore them after the upgrade:
Execute the following SQL statement in the sys tenant (system tenant) to obtain the values of the configuration items to be backed up:
SELECT SVR_IP,ZONE,NAME,VALUE FROM OCEANBASE.GV$OB_PARAMETERS WHERE NAME IN ("server_permanent_offline_time", "enable_rebalance", "enable_rereplication");
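As a sketch of what the backup could look like in practice, the helper below indexes the rows returned by the query above so they can be saved and reused in Step 9. The function is hypothetical, not part of the OceanBase tooling; it only assumes rows shaped like the (SVR_IP, ZONE, NAME, VALUE) columns selected above, and the sample rows are placeholders.

```python
def backup_parameters(rows):
    """Index (SVR_IP, ZONE, NAME, VALUE) rows by configuration item name.

    Values are kept per server so that settings that differ between
    servers are not silently collapsed into a single value."""
    backup = {}
    for svr_ip, zone, name, value in rows:
        backup.setdefault(name, {})[(svr_ip, zone)] = value
    return backup

# Example rows as the query in this step might return them (placeholders).
rows = [
    ("10.10.10.1", "zone1", "server_permanent_offline_time", "3600s"),
    ("10.10.10.2", "zone2", "server_permanent_offline_time", "3600s"),
    ("10.10.10.1", "zone1", "enable_rebalance", "True"),
]
saved = backup_parameters(rows)
```

In a real run, the rows would come from executing the SELECT statement above through any MySQL-compatible client; persisting `saved` to a file keeps the pre-upgrade values available for Step 9.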
Step 4: Upload and install the target RPM package
1. Use the scp command to upload the OceanBase RPM package required for the upgrade to the OBServer node.

   scp $rpm_name admin@$observer_ip:/$rpm_dir

   Parameters:

   - $rpm_name: the name of the RPM package.
   - $observer_ip: the IP address of the OBServer node.
   - $rpm_dir: the directory in which the RPM package is stored.

   Example:

   scp oceanbase-4.2.0.0-100010022023081911.el7.x86_64.rpm admin@10.10.10.1:/home/admin/rpm

2. Use the following command to install the OceanBase RPM package.

   Notice

   - If your upgrade path contains a barrier version, install the RPM package of the barrier version first, and wait until the cluster is successfully upgraded to the barrier version before you install the RPM package of the target version.
   - If the upgrade path contains multiple barrier versions, install and upgrade through them in sequence until the cluster is successfully upgraded to the last barrier version, and then install the RPM package of the target version.

   rpm -Uvh $rpm_name

   Here, $rpm_name indicates the name of the RPM package.

   Note

   The upgrade scripts and the oceanbase_upgrade_dep.yml file are stored in the /home/admin/oceanbase/etc directory. /home/admin/oceanbase is the default installation directory of OceanBase Database.

3. Repeat steps 1 and 2 until the RPM package of the target version is installed on all OBServer nodes.
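The repeat-for-every-node loop in steps 1 to 3 can be sketched as follows. This helper only builds the scp and rpm command lines shown above for each node, without executing anything; the node list and directory are placeholders, and the function name is invented for illustration.

```python
def install_commands(rpm_name, observer_ips, rpm_dir):
    """Per-node (upload, install) command pairs, mirroring steps 1 and 2.

    The upload command runs on the machine holding the package; the
    install command is meant to be run on the node itself."""
    cmds = []
    for ip in observer_ips:
        upload = "scp %s admin@%s:%s" % (rpm_name, ip, rpm_dir)
        install = "rpm -Uvh %s/%s" % (rpm_dir, rpm_name)
        cmds.append((ip, upload, install))
    return cmds

# Placeholder package name and node list.
cmds = install_commands(
    "oceanbase-4.2.0.0-100010022023081911.el7.x86_64.rpm",
    ["10.10.10.1", "10.10.10.2", "10.10.10.3"],
    "/home/admin/rpm",
)
```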
Step 5: Execute the upgrade_checker.py script
On any OBServer node, use the login information of the sys tenant to connect directly to the node, and execute the upgrade_checker.py script to perform pre-upgrade checks. If the script executes successfully, you can proceed with the upgrade. The command is as follows:
python upgrade_checker.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password
Parameters:
- $user_name: a user of the sys tenant with privileges to read, write, and modify global system parameters.
- $password: the password of the user.
Example:
1. Use the following command to switch to the /home/admin/oceanbase/etc directory.

   cd /home/admin/oceanbase/etc

2. Use the following command to execute the upgrade_checker.py script for pre-upgrade checks.

   [root@xxx /home/admin/oceanbase/etc]# python upgrade_checker.py -h 127.0.0.1 -P 2881 -u root@sys -p******
Step 6: Execute the upgrade_pre.py script
On any OBServer node, use the login information of the sys tenant to connect directly to the node, and execute the upgrade_pre.py script. If the script executes successfully, you can proceed with the upgrade. The command is as follows:
python upgrade_pre.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password
Parameters:
- $user_name: a user of the sys tenant with privileges to read, write, and modify global system parameters.
- $password: the password of the user.
The upgrade_pre.py script performs the following commands and actions:
- alter system begin upgrade
- alter system begin rolling upgrade
- special pre action: disables and adjusts related configuration items.
- health check: performs a cluster-level health check.
The complete help information of the upgrade_pre.py script is as follows:
./upgrade_pre.py [OPTIONS]
-I, --help Display this help and exit.
-V, --version Output version information and exit.
-h, --host=name Connect to host.
-P, --port=name Port number to use for connection.
-u, --user=name User for login.
-p, --password=name Password to use when connecting to server. If password is
not given it's empty string "".
-t, --timeout=name Cmd/Query execute timeout(s).
-m, --module=name Modules to run. Modules should be a string combined by some of
the following strings:
1. begin_upgrade
2. begin_rolling_upgrade
3. special_action
4. health_check
5. all: "all" represents that all modules should be run.
They are splitted by ",".
For example: -m all, or --module=begin_upgrade,begin_rolling_upgrade,special_action
-l, --log-file=name Log file path. If log file path is not given it's ./upgrade_pre.log
Maybe you want to run cmd like that:
./upgrade_pre.py -h 127.0.0.1 -P 2881 -u ****** -p ******
Example:
1. Use the following command to switch to the /home/admin/oceanbase/etc directory.

   cd /home/admin/oceanbase/etc

2. Use the following command to execute the upgrade_pre.py script to perform the pre-upgrade actions.

   [root@xxx /home/admin/oceanbase/etc]# python upgrade_pre.py -h 127.0.0.1 -P 2881 -u root@sys -p******
Step 7: Upgrade the cluster
Upgrade the cluster zone by zone, repeating the following steps for each zone. The following steps use zone1 as an example.
1. Perform a cluster-level health check by running the following command:

   python upgrade_health_checker.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password

   Parameters:

   - $user_name: a user of the sys tenant with privileges to read, write, and modify global system parameters.
   - $password: the password of the user.

   Here is an example:

   Run the following command to switch to the /home/admin/oceanbase/etc directory.

   cd /home/admin/oceanbase/etc

   Run the following command to execute the upgrade_health_checker.py script for a cluster-level health check.

   [root@xxx /home/admin/oceanbase/etc]# python upgrade_health_checker.py -h 127.0.0.1 -P 2881 -u root@sys -p******
2. Stop the zone (STOP ZONE).

   Notice

   For standalone or single-replica OceanBase Database deployments, you do not need to stop the zone (STOP ZONE).

   Log in to the sys tenant of the cluster as the root user and run the following command to stop the zone, where $zone_name is the name of the zone.

   ALTER SYSTEM STOP ZONE '$zone_name';

   The STOP ZONE command returns successfully only after the leader replicas in the zone have been switched to other zones.

   Here is an example:

   obclient [(none)]> ALTER SYSTEM STOP ZONE 'zone1';

   You can run the following SQL command to view the status of the zone.

   obclient [(none)]> SELECT ZONE,STATUS FROM oceanbase.DBA_OB_ZONES;

   The returned result is as follows:

   +-------+----------+
   | ZONE  | STATUS   |
   +-------+----------+
   | zone1 | INACTIVE |
   | zone2 | ACTIVE   |
   | zone3 | ACTIVE   |
   +-------+----------+
   3 rows in set

3. Stop the observer process on each server in the zone and restart it with the new binary.

   1. Switch to the admin user.

      [root@xxx /home/admin/oceanbase/etc]# su - admin

   2. Stop the observer process.

      -bash-4.2$ kill -9 `pidof observer`

   3. Restart the observer process.

      -bash-4.2$ cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer

   Note

   When you restart the observer process, you do not need to specify the startup parameters, because the previous startup parameters have been written to the parameter file.
4. Perform a zone-level health check.

   The zone-level health check reuses the cluster-level health check logic and mainly verifies the status of all servers in the specified zone. Run the following command:

   python upgrade_health_checker.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password -z '$zone_name'

   Parameters:

   - $user_name: a user of the sys tenant with privileges to read, write, and modify global system parameters.
   - $password: the password of the user.
   - $zone_name: the name of the zone.

   Here is an example:

   Run the following command to switch to the /home/admin/oceanbase/etc directory.

   cd /home/admin/oceanbase/etc

   Run the following command to execute the upgrade_health_checker.py script for a zone-level health check.

   [root@xxx /home/admin/oceanbase/etc]# python upgrade_health_checker.py -h 127.0.0.1 -P 2881 -u root@sys -p****** -z 'zone1'
5. Start the zone (START ZONE).

   Log in to the sys tenant of the cluster as the root user and run the following command to start the zone, where $zone_name is the name of the zone.

   ALTER SYSTEM START ZONE '$zone_name';

   Here is an example:

   obclient [(none)]> ALTER SYSTEM START ZONE 'zone1';

6. Repeat steps 1 to 5 until all zones are upgraded.
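The per-zone steps above can be summarized as an ordered checklist. The helper below is a hypothetical sketch that only assembles the command and SQL strings shown in this step, without executing anything; the SQL entries would run in the sys tenant, and the shell entries on the servers of the zone being upgraded.

```python
def zone_upgrade_checklist(zone, user="root@sys", host="127.0.0.1", port=2881):
    """Ordered operations for upgrading one zone, following steps 1-5 above."""
    health = ("python upgrade_health_checker.py -h %s -P %d -u %s -p$password"
              % (host, port, user))
    return [
        health,                                  # 1. cluster-level health check
        "ALTER SYSTEM STOP ZONE '%s';" % zone,   # 2. stop the zone
        "kill -9 `pidof observer`",              # 3. stop the old binary
        "cd /home/admin/oceanbase && /home/admin/oceanbase/bin/observer",  # restart
        health + " -z '%s'" % zone,              # 4. zone-level health check
        "ALTER SYSTEM START ZONE '%s';" % zone,  # 5. start the zone
    ]

steps = zone_upgrade_checklist("zone1")
```

Running the checklist for zone1, then zone2, then zone3 reproduces the rolling order of step 6.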
Step 8: Execute the upgrade_post.py script
On any OBServer node, specify the login information of the sys tenant to directly connect to the OBServer node and execute the upgrade_post.py script to complete the main upgrade operations and related checks. The command is as follows:
python upgrade_post.py -h 127.0.0.1 -P 2881 -u $user_name@sys -p$password
Notice
If an error occurs during the execution of the upgrade_post.py script, the likely cause is that the Root Service (RS) has switched to another node. Troubleshoot and fix the issue as follows:
1. Execute SELECT * FROM oceanbase.DBA_OB_TENANT_JOBS WHERE job_type = 'upgrade_all' ORDER BY job_id DESC LIMIT 1; to verify the status of the upgrade task. If JOB_STATUS is displayed as inprogress and MODIFY_TIME is the most recent time (a short interval from the current time), you can manually execute the ALTER SYSTEM RUN UPGRADE JOB 'UPGRADE_ALL'; command to resolve this issue.
2. Re-execute the upgrade script.
Parameter description:
- $user_name: a user of the sys tenant with privileges to read, write, and modify global system parameters.
- $password: the password of the user.
The upgrade_post.py script performs the following upgrade actions:
1. Performs a cluster-level health check.
2. Executes alter system end rolling upgrade.
3. Executes begin upgrade for each tenant.
4. Corrects system variables for each tenant.
5. Corrects system tables for each tenant.
6. Corrects virtual tables and views for each tenant.
7. Performs version-specific upgrade actions for each tenant.
8. Performs an internal table self-check for each tenant.
9. Executes end upgrade for each tenant.
10. Executes alter system end upgrade to end the cluster upgrade status.
11. Post-upgrade check: re-enables the configuration items disabled during the upgrade and performs checks.
Steps 3 to 5, 7, and 9 are not executed in the following upgrade scenarios:

- Upgrade within the same version.
- Cross-version upgrade in which the data version of the source version is the same as that of the target version.
The complete help information of the upgrade_post.py script is as follows:
./upgrade_post.py [OPTIONS]
-I, --help Display this help and exit.
-V, --version Output version information and exit.
-h, --host=name Connect to host.
-P, --port=name Port number to use for connection.
-u, --user=name User for login.
-p, --password=name Password to use when connecting to server. If password is
not given it's empty string "".
-t, --timeout=name Cmd/Query execute timeout(s).
-m, --module=name Modules to run. Modules should be a string combined by some of
the following strings:
1. health_check
2. end_rolling_upgrade
3. tenant_upgrade
4. end_upgrade
5. post_check
6. all: "all" represents that all modules should be run.
They are splitted by ",".
For example: -m all, or --module=health_check,begin_rolling_upgrade
-l, --log-file=name Log file path. If log file path is not given it's ./upgrade_pre.log
Maybe you want to run cmd like that:
./upgrade_post.py -h 127.0.0.1 -P 2881 -u *** -p ******
Here is an example:
1. Use the following command to switch to the /home/admin/oceanbase/etc directory.

   cd /home/admin/oceanbase/etc

2. Use the following command to execute the upgrade_post.py script to complete the main upgrade operations and related checks.

   [root@xxx /home/admin/oceanbase/etc]# python upgrade_post.py -h 127.0.0.1 -P 2881 -u root@sys -p******
Step 9: Restore cluster configuration items
Based on the old values of the configuration items backed up in Step 3, log in to the sys tenant of the cluster as the root user and use the following command to restore the related configuration items.
ALTER SYSTEM SET $parameter = $parameter_value;
Parameter description:
- $parameter: the name of the configuration item.
- $parameter_value: the value of the configuration item.
Here is an example:
1. Restore the value of server_permanent_offline_time.

   obclient [(none)]> ALTER SYSTEM SET server_permanent_offline_time = '3600s';

2. Restore the value of enable_rebalance.

   obclient [(none)]> ALTER SYSTEM SET enable_rebalance = 'True';

3. Restore the value of enable_rereplication.

   obclient [(none)]> ALTER SYSTEM SET enable_rereplication = 'True';
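To close the loop with the backup taken in Step 3, the following sketch generates the restore statements from backed-up name/value pairs. The helper is hypothetical, not part of the OceanBase tooling; quoting every value in single quotes matches the examples above, but may need adjusting for configuration items whose values are not strings.

```python
def restore_statements(saved_values):
    """Build ALTER SYSTEM SET statements from backed-up name/value pairs."""
    return ["ALTER SYSTEM SET %s = '%s';" % (name, value)
            for name, value in sorted(saved_values.items())]

# Values backed up in Step 3 (placeholders matching the examples above).
saved_values = {
    "server_permanent_offline_time": "3600s",
    "enable_rebalance": "True",
    "enable_rereplication": "True",
}
statements = restore_statements(saved_values)
```

Each generated statement can then be executed in the sys tenant, as shown in the examples above.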
What to do next
Post-upgrade checks
Verify whether the upgrade is completed. For more information, see Post-upgrade checks.
Reconfigure cgroup
If cgroup was configured for the OceanBase cluster before the upgrade, you must reconfigure it after the upgrade. For more information, see Configure cgroup.
References
- For more information about how to connect to OceanBase Database, see Overview of connection methods.
- For more information about how to start a zone, see Start a zone.
- For more information about how to modify a configuration parameter, see ALTER SYSTEM PARAMETER.
