OceanBase Migration Service (OMS) V3.2.1 and later can be directly upgraded to V4.3.1. This topic describes how to upgrade OMS in multi-node deployment mode in different scenarios.
Notice
To upgrade OMS in multi-node deployment mode, you must upgrade OMS on all nodes. Do not perform the upgrade on only some of the nodes.
Background information
An upgrade to OMS V4.3.1 involves one of the following two version scenarios:
The current version is V3.2.1 or later but earlier than V4.0.1.
Notice
OMS of a version earlier than V3.2.1 must first be upgraded to V3.2.1.
To upgrade OMS from a version that is V3.2.1 or later but earlier than V4.0.1 to V4.3.1, you must perform the following two steps in addition to those required for an upgrade from V4.0.1 or later to V4.3.1:
Check the prerequisites below.
Execute the upgrade package in the .jar format during the upgrade.
Notice
OMS V4.0.1 integrates the migration and synchronization frameworks, which involves restructuring table schemas. To upgrade OMS from a version earlier than V4.0.1 to V4.0.1 or later, you must restructure the table schemas. Do not perform this operation in other scenarios.
The current version is V4.0.1 or later.
Before you upgrade OMS to V4.3.1 in the preceding two scenarios, check the following prerequisites.
If you want to deploy the cluster manager (CM) database separately, make sure that all data migration and synchronization tasks have proceeded to the expected step.
- If reverse increment is enabled:
  - If a task includes both the full migration and incremental synchronization steps, the task must enter the reverse increment step before the upgrade.
  - If a task includes the full migration step but not the incremental synchronization step, the task must complete the full migration step before the upgrade.
  - If a task includes the incremental synchronization step but not the full migration step, the task must enter the reverse increment step before the upgrade.
  - If a task includes neither the full migration step nor the incremental synchronization step, no requirements are raised for the task before the upgrade.
- If reverse increment is disabled, no requirements are raised for tasks before the upgrade.
Modify the config.yaml file.

Notice

Do not perform this modification for OMS deployed by using OAT of a version earlier than V4.3.2.
| Modification | Key | Description |
| --- | --- | --- |
| Add keys | oms_rm_meta_host, oms_cm_meta_host | The data is sourced from oms_meta_host. |
| Add keys | oms_rm_meta_port, oms_cm_meta_port | The data is sourced from oms_meta_port. |
| Add keys | oms_rm_meta_password, oms_cm_meta_password | The data is sourced from oms_meta_password. |
| Add keys | oms_rm_meta_user, oms_cm_meta_user | The data is sourced from oms_meta_user. |
| Delete keys | oms_meta_host, oms_meta_port, oms_meta_password, oms_meta_user | In OMS V4.3.1, you must delete these four keys from the config.yaml configuration file. Otherwise, the connection strings of the resource manager (RM) and CM databases may be lost when you run the docker_init.sh script. |

Here is a sample config.yaml configuration file of OMS V4.3.0:

```yaml
"cm_location": "2"
"cm_nodes":
- "xxx.xxx.xxx.1"
- "xxx.xxx.xxx.2"
"cm_region": "cn-hangzhou"
"cm_region_cn": "cn-hangzhou"
"cm_url": "http://xxx.xxx.xxx.1:8088"
"drc_cm_db": "oms_cm"
"drc_cm_heartbeat_db": "oms_cm_hb_hangzhou"
"drc_rm_db": "oms_rm"
"oms_meta_host": "xxx.xxx.xxx.3"
"oms_meta_password": "ob_password"
"oms_meta_port": "2883"
"oms_meta_user": "oms_meta_user"
```

Here is a modified config.yaml configuration file for OMS V4.3.1:

```yaml
"cm_location": "2"
"cm_nodes":
- "xxx.xxx.xxx.1"
- "xxx.xxx.xxx.2"
"cm_region": "cn-hangzhou"
"cm_region_cn": "cn-hangzhou"
"cm_url": "http://xxx.xxx.xxx.1:8088"
"drc_cm_db": "oms_cm"
"drc_cm_heartbeat_db": "oms_cm_hb_hangzhou"
"drc_rm_db": "oms_rm"
"oms_cm_meta_host": "xxx.xxx.xxx.3"
"oms_cm_meta_password": "ob_password"
"oms_cm_meta_port": "2883"
"oms_cm_meta_user": "oms_meta_user"
"oms_rm_meta_host": "xxx.xxx.xxx.3"
"oms_rm_meta_password": "ob_password"
"oms_rm_meta_port": "2883"
"oms_rm_meta_user": "oms_meta_user"
```
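If you prefer to script this key migration, the following is a minimal sketch rather than an official tool. It assumes the flat "key": "value" layout shown in the samples above and that config.yaml is located at /data/config.yaml (adjust the path to your environment); review the resulting file before you proceed.

```bash
#!/usr/bin/env bash
# Sketch: migrate the four oms_meta_* keys to oms_rm_meta_* and oms_cm_meta_*.
set -euo pipefail
CONF=/data/config.yaml          # assumed path; use your actual config.yaml location
cp "${CONF}" "${CONF}.bak"      # keep a backup of the original file

for key in host port password user; do
  # Extract the value part of a line such as: "oms_meta_port": "2883"
  value=$(grep "\"oms_meta_${key}\":" "${CONF}" | sed 's/^[^:]*: //')
  # Append the two new keys with the same value.
  printf '"oms_rm_meta_%s": %s\n"oms_cm_meta_%s": %s\n' \
    "${key}" "${value}" "${key}" "${value}" >> "${CONF}"
  # Delete the old key, as required by OMS V4.3.1.
  sed -i "/\"oms_meta_${key}\":/d" "${CONF}"
done
```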
Upgrade OMS from V4.0.1 or later to V4.3.1
If high availability (HA) is enabled, record the current value of the ha.config parameter and disable HA.

1. Log in to the OMS console.
2. In the left-side navigation pane, choose System Management > System Parameters.
3. On the System Parameters page, find ha.config.
4. Click the edit icon in the Value column of the parameter.
5. In the Modify Value dialog box, set enable to false to disable HA, and record the time as T1.
Back up databases.
Log in to the two hosts where the container of OMS V4.3.0 is deployed by using their respective IP addresses, and stop the container.
```bash
sudo docker stop ${CONTAINER_NAME}
```

Note

CONTAINER_NAME specifies the name of the container.

- If you deploy the system in the single-node deployment mode, or in the separated deployment mode without the need to keep data migration or data synchronization tasks running, perform the upgrade operations on the management and component containers as described in this section.
- If you deploy the system in the separated deployment mode and need to keep data migration or data synchronization tasks running, do not stop the OMS V4.3.0 component container. You can use the upgrade assistant tool to upgrade the component container and then perform the upgrade operations on the management container as described in this section.
The following procedure describes how to upgrade the component container in the separated deployment mode.
- Contact technical support to obtain the installation package of the upgrade assistant tool. The installation package for the x86 architecture is named oms-<version number>_x86_upgrade_tools-amd64-xxxxxxxxxxxxx, and the installation package for the ARM architecture is named oms-<version number>_arm_upgrade_tools-arm64-xxxxxxxxxxxxx.
- Load the downloaded upgrade assistant tool installation package into the local image repository of the Docker container.
  ```bash
  docker load -i <upgrade assistant tool installation package>
  ```
- Parse the host directory that corresponds to /home/ds/run on the OMS server.
  ```bash
  docker inspect ${OMS container name} | jq -r '.[0].Mounts[] | select(.Destination == "/home/ds/run") | .Source'
  ```
- Fill in the docker run command based on the template.
  ```bash
  sudo docker run -d -v ${directory parsed in the previous step}/rpm:/root/rpm --name oms-upgrade-tool ${upgrade assistant tool image ID}
  ```
- Enter the OMS component container and go to the rpm directory.
  ```bash
  cd /home/ds/run/rpm
  ```
- Upgrade the component container.
  ```bash
  sh support_upgrade_action.sh
  ```
  After the command is executed, the component container is upgraded, and data migration or data synchronization tasks are not interrupted.
- After the component container is upgraded, you can remove the upgrade assistant tool image.
  ```bash
  sudo docker rmi ${upgrade assistant tool image ID}
  ```
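For reference, the steps above can be chained into one script. This is a sketch under assumptions, not official tooling: the container name and tool image ID are placeholders you must replace, jq must be installed on the host, and the assistant container is assumed to exit once it has prepared the rpm files.

```bash
#!/usr/bin/env bash
# Sketch: run the component-container upgrade steps end to end.
set -euo pipefail

OMS_CONTAINER='<OMS component container name>'  # placeholder
TOOL_IMAGE='<upgrade assistant tool image ID>'  # placeholder, from `docker images`

# Resolve the host directory mounted at /home/ds/run in the component container.
RUN_DIR=$(docker inspect "${OMS_CONTAINER}" \
  | jq -r '.[0].Mounts[] | select(.Destination == "/home/ds/run") | .Source')

# Start the upgrade assistant with the rpm directory mounted in.
sudo docker run -d -v "${RUN_DIR}/rpm:/root/rpm" --name oms-upgrade-tool "${TOOL_IMAGE}"
# Wait for the assistant to exit (assumption: it exits after preparing the rpm files).
sudo docker wait oms-upgrade-tool

# Run the upgrade script inside the component container.
sudo docker exec "${OMS_CONTAINER}" bash -c 'cd /home/ds/run/rpm && sh support_upgrade_action.sh'

# Clean up the assistant container and image once the upgrade succeeds.
sudo docker rm -f oms-upgrade-tool
sudo docker rmi "${TOOL_IMAGE}"
```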
Log in to the CM heartbeat database specified in the configuration file and back up data.
```
# Log in to the CM heartbeat database specified in the configuration file.
mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -Dcm_hb_430

# Create an intermediate table.
CREATE TABLE IF NOT EXISTS `heatbeat_sequence_bak` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'PK',
  `gmt_created` datetime NOT NULL,
  `gmt_modified` datetime NOT NULL,
  PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8 COMMENT='Heartbeat sequence table';

# Back up data to the intermediate table.
INSERT INTO heatbeat_sequence_bak SELECT `id`,`gmt_created`,`gmt_modified` FROM heatbeat_sequence ORDER BY `id` DESC LIMIT 1;

# Rename the heatbeat_sequence table and the intermediate table.
# The heatbeat_sequence table provides auto-increment IDs and reports heartbeats.
ALTER TABLE `heatbeat_sequence` RENAME TO `heatbeat_sequence_bak2`;
ALTER TABLE `heatbeat_sequence_bak` RENAME TO `heatbeat_sequence`;

# Delete the original table.
DROP TABLE heatbeat_sequence_bak2;
```

Run the following commands to back up the rm, cm, and cm_hb databases as SQL files, and make sure that the sizes of the three files are not 0.

In a multi-region scenario, the cm_hb database in each region must be backed up. For example, if there are two regions, you must back up five databases: rm, cm_region1, cm_region2, cm_hb1, and cm_hb2.
```bash
mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false rm_430 > /home/admin/rm_430.sql
mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_430 > /home/admin/cm_430.sql
mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_hb_430 > /home/admin/cm_hb_430.sql
```

| Parameter | Description |
| --- | --- |
| -h | The IP address of the host from which the data is exported. |
| -P | The port number used to connect to the database. |
| -u | The username used to connect to the database. |
| -p | The password used to connect to the database. |
| --triggers | Specifies whether to export triggers. The value false disables the export of triggers. |
| rm_430, cm_430, and cm_hb_430 | Specifies to back up the rm, cm, and cm_hb databases as SQL files, in the format of <database name> > <SQL file storage path>.sql. Replace the values based on the actual environment. |
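Because the procedure requires that none of the three dump files is empty, a quick check such as the following (using the paths from the commands above) can catch a failed dump early:

```bash
# Warn if any backup file is missing or has zero size.
for f in /home/admin/rm_430.sql /home/admin/cm_430.sql /home/admin/cm_hb_430.sql; do
  [ -s "$f" ] || echo "WARNING: $f is missing or empty"
done
```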
Back up the config.yaml configuration file.
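For example, assuming the file is mounted from /data/config.yaml on the host as in the docker run command below, a dated copy is sufficient:

```bash
# Keep a timestamped copy of the configuration file.
cp /data/config.yaml /data/config.yaml.$(date +%Y%m%d).bak
```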
Load the downloaded OMS installation package to the local image repository of the Docker container.
```bash
docker load -i <OMS installation package>
```

Confirm the following information:
- The config.yaml configuration file is suitable for OMS V4.3.1 and the CM database for each region is as expected.
- The three disk mount paths of OMS are the same as those before the upgrade. You can run the `sudo docker inspect ${CONTAINER_NAME} | grep -A5 'Binds'` command to view the paths of disks mounted to the old OMS container.
- The image ID is correct.
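Both checks can be run from the host; a minimal sketch, assuming ${CONTAINER_NAME} holds the name of the old container:

```bash
# Mount paths of the old container; they must match the -v flags passed below.
sudo docker inspect ${CONTAINER_NAME} | grep -A5 'Binds'
# Image ID and tag of the newly loaded OMS image.
docker images | grep oms
```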
(Optional) Set the primary region and other regions.
OMS V4.3.1 and later versions support setting the primary region when OMS is deployed in multiple regions. After the primary region is set, you can start the management process only in the primary region. The management processes in other regions will not be started. During the upgrade, if you need to distinguish between the primary region and other regions, refer to the steps in the "Set the primary region and other regions" section.
Notice
You must set the primary region before you set other regions.
After you set the primary region and other regions, you must start the management process in the primary region before you start the management processes in other regions.
Start the new container of OMS V4.3.1.
OMS supports accessing the OMS console by using HTTP or HTTPS. If you want to securely access OMS, you can configure an HTTPS self-signed certificate and mount it to the specified directory in the container. If you want to access OMS by using HTTP, you do not need to configure an HTTPS certificate.
```bash
OMS_HOST_IP=xxx
CONTAINER_NAME=oms_xxx
IMAGE_TAG=feature_x.x.x
# If you mount the SSL certificate to the OMS container, you must also pass the
# two -v parameters for oms_server.crt and oms_server.key below; otherwise, remove them.
docker run -dit --net host \
-v /data/config.yaml:/home/admin/conf/config.yaml \
-v /data/oms/oms_logs:/home/admin/logs \
-v /data/oms/oms_store:/home/ds/store \
-v /data/oms/oms_run:/home/ds/run \
-v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
-v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
-e OMS_HOST_IP=${OMS_HOST_IP} \
-e IS_UPGRADE=true \
--privileged=true \
--pids-limit -1 \
--ulimit nproc=65535:65535 \
--name ${CONTAINER_NAME} \
work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
```

| Parameter | Description |
| --- | --- |
| OMS_HOST_IP | The IP address of the host. Notice: the value of OMS_HOST_IP is different for each node. |
| CONTAINER_NAME | The name of the container, in the oms_xxx format. Specify xxx based on the actual OMS version. For example, if you use OMS V4.3.1, the value is oms_431. |
| IMAGE_TAG | The unique identifier of the loaded image. After you load the OMS installation package by using Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] value of the loaded image. The obtained value is the unique identifier (<OMS_IMAGE>) of the loaded image. |
| /data/oms/oms_logs, /data/oms/oms_store, /data/oms/oms_run | These paths can be replaced with the mount directories created on the server where OMS is deployed, which respectively store the runtime log files of OMS, the files generated by the Store component, and the files generated by the Incr-Sync component, for local data persistence. Notice: the mount directories must remain unchanged during subsequent redeployments or upgrades. |
| /home/admin/logs, /home/ds/store, /home/ds/run | Default directories in the container. They cannot be modified. |
| /data/oms/https_crt (optional), /data/oms/https_key (optional) | The mount paths of the SSL certificate in the OMS container. If you mount an SSL certificate, the Nginx service in the OMS container runs in HTTPS mode. In this case, you can access the OMS console only by using the HTTPS URL. |
| IS_UPGRADE | Specifies whether the current scenario is an upgrade. Note that IS_UPGRADE must be in uppercase. |
| privileged | Specifies whether to grant extended privileges to the container. |
| pids-limit | Specifies whether to limit the number of container processes. The value -1 indicates that the number is unlimited. |
| ulimit nproc | The maximum number of user processes. |
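Before running the upgrade script, it is worth confirming that the new container actually started; a quick check using the container name from the command above:

```bash
# The container should be listed with status "Up".
sudo docker ps --filter name=${CONTAINER_NAME}
# Tail the startup logs to catch early failures.
sudo docker logs --tail 100 ${CONTAINER_NAME}
```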
Run the following command:

```bash
sh /root/docker_upgrade.sh
```

- In the integrated deployment mode, you can run this command on any OMS node.
- In the separated deployment mode, you can run this command on any management node.
On the System Parameters page, enable HA and configure the related parameters.
1. Log in to the OMS console.
2. In the left-side navigation pane, choose System Management > System Parameters.
3. On the System Parameters page, find ha.config.
4. Click the edit icon in the Value column of the parameter.
5. In the Modify Value dialog box, set enable to true to enable HA, and record the time as T2.

We recommend that you set the perceiveStoreClientCheckpoint parameter to true. After that, you do not need to record T1 and T2.

- If you set the perceiveStoreClientCheckpoint parameter to true, you can use the default value 30min of the refetchStoreIntervalMin parameter. When HA is enabled, the system starts the Store component from the earliest request time of the downstream components minus the value of the refetchStoreIntervalMin parameter. For example, if the earliest request time of the downstream Connector or JDBC-Connector component is 12:00:00 and the refetchStoreIntervalMin parameter is set to 30 minutes, the system starts the Store component from 11:30:00.
- If you set the perceiveStoreClientCheckpoint parameter to false, you need to modify the value of the refetchStoreIntervalMin parameter as needed. refetchStoreIntervalMin specifies the time interval, in minutes, for pulling data from the Store component. The value must be greater than T2 minus T1.
(Optional) To roll back the OMS upgrade, perform the following steps:
Disable the HA feature based on Step 1.
Stop the new container and record the time as T3.
```bash
sudo docker stop ${CONTAINER_NAME}
```

Connect to the MetaDB and run the following commands:

```sql
drop database rm_430;
drop database cm_430;
drop database cm_hb_430;
create database rm_430;
create database cm_430;
create database cm_hb_430;
```

Restore the original databases based on the SQL files created in Step 2.

```bash
mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/rm_430.sql" -Drm_430
mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_430.sql" -Dcm_430
mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_hb_430.sql" -Dcm_hb_430
```

Restart the container of OMS V4.3.0.
```bash
sudo docker restart ${CONTAINER_NAME}
```

On the System Parameters page, enable HA.
Note
We recommend that you set the perceiveStoreClientCheckpoint parameter to true.

The HA feature automatically starts disaster recovery and the Incr-Sync component. However, you must manually resume the Full-Import component.
After the upgrade is complete, clear the browser cache before you log in to OMS.
Upgrade OMS from V3.2.1 or later but earlier than V4.0.1 to V4.3.1
Prerequisites
Before the upgrade, check whether data migration and synchronization tasks with duplicate names exist. If yes, rename the tasks to ensure that all task names are unique.
Run the following command to check for tasks with duplicate names:
Data migration tasks:

```sql
SELECT project_name,count(*) AS count,group_concat(id) AS ids FROM oms_project WHERE project_status != "DELETED" GROUP BY project_name HAVING count(*) > 1;
```

Data synchronization tasks:

```sql
SELECT project_name,count(*) AS count,group_concat(id) AS ids FROM oms_sync_project WHERE project_status != "DELETED" GROUP BY project_name HAVING count(*) > 1;
```
If tasks with duplicate names exist, rename the tasks in sequence. The syntax for renaming tasks is as follows:
Data migration tasks:

```sql
UPDATE oms_project SET project_name=<New name of the data migration task> WHERE id=<ID of the data migration task>;
```

Data synchronization tasks:

```sql
UPDATE oms_sync_project SET project_name=<New name of the data synchronization task> WHERE id=<ID of the data synchronization task>;
```
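For example, if the check query returned two oms_project rows named task_a with IDs 12 and 18 (hypothetical values for illustration), you could keep one task unchanged and give the other a unique name:

```sql
-- Hypothetical IDs: keep id 12 as is, and rename id 18 by appending its ID.
UPDATE oms_project SET project_name='task_a_18' WHERE id=18;
```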
If you use an OceanBase data source as both the target of one task and the source of another, and you have updated the blackRegionNo parameter of JDBCWriter, perform the following steps:

1. In the OMS container, run the following command to obtain the value of cm_location:

```bash
cat /home/admin/conf/config.yaml | grep 'cm_location'
```

2. Log in to the drc_cm database of OMS and run the following command, replacing xxx with the cm_location value obtained in the previous step:

```sql
SELECT * FROM config_job WHERE `key`='sourceFile.blackRegionNo' AND VALUE!=xxx;
```

If the query result is not empty and a data source is still used as both the target of one task and the source of another, contact OMS Technical Support. If the query result is empty, proceed with the upgrade operations.
Procedure
The following procedure shows how to upgrade OMS from V3.4.0 to V4.3.1.
If HA is enabled, record the current value of the ha.config parameter, disable HA, and record the time as T1.

Back up databases.
Log in to the two hosts where the container of OMS V3.4.0 is deployed by using their respective IP addresses, and stop the container.
```bash
sudo docker stop ${CONTAINER_NAME}
```

Note

CONTAINER_NAME specifies the name of the container.

Log in to the CM heartbeat database specified in the configuration file and back up data.
```
# Log in to the CM heartbeat database specified in the configuration file.
mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -Dcm_hb_340

# Create an intermediate table.
CREATE TABLE IF NOT EXISTS `heatbeat_sequence_bak` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'PK',
  `gmt_created` datetime NOT NULL,
  `gmt_modified` datetime NOT NULL,
  PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8 COMMENT='Heartbeat sequence table';

# Back up data to the intermediate table.
INSERT INTO heatbeat_sequence_bak SELECT `id`,`gmt_created`,`gmt_modified` FROM heatbeat_sequence ORDER BY `id` DESC LIMIT 1;

# Rename the heatbeat_sequence table and the intermediate table.
# The heatbeat_sequence table provides auto-increment IDs and reports heartbeats.
ALTER TABLE `heatbeat_sequence` RENAME TO `heatbeat_sequence_bak2`;
ALTER TABLE `heatbeat_sequence_bak` RENAME TO `heatbeat_sequence`;

# Delete the original table.
DROP TABLE heatbeat_sequence_bak2;
```

Run the following commands to back up the rm, cm, and cm_hb databases as SQL files, and make sure that the sizes of the three files are not 0.

If you have deployed databases in multiple regions, you must back up the cm_hb database in all regions. For example, if you have deployed databases in two regions, you must back up the following four databases: rm, cm, cm_hb1, and cm_hb2.

```bash
mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false rm_340 > /home/admin/rm_340.sql
mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_340 > /home/admin/cm_340.sql
mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_hb_340 > /home/admin/cm_hb_340.sql
```

Back up the config.yaml configuration file.
Load the downloaded OMS installation package to the local image repository of the Docker container.
```bash
docker load -i <OMS installation package>
```

Confirm the following information:
- The config.yaml configuration file is suitable for OMS V4.3.1 and the CM database for each region is as expected.
- The three disk mount paths of OMS are the same as those before the upgrade. You can run the `sudo docker inspect ${CONTAINER_NAME} | grep -A5 'Binds'` command to view the paths of disks mounted to the old OMS container.
- The image ID is correct.
Perform the upgrade based on the deployment mode.
For the integrated deployment mode, the upgrade procedure is as follows:
- (Optional) Set the primary region and other regions.
OMS V4.3.1 and later versions support setting the primary region when OMS is deployed in multiple regions. After the primary region is set, you can start the management process only in the primary region. The management processes in other regions will not be started. During the upgrade, if you need to distinguish between the primary region and other regions, refer to the steps in the "Set the primary region and other regions" section.
Notice
You must set the primary region before you set other regions.
After you set the primary region and other regions, you must start the management process in the primary region before you start the management processes in other regions.
Run the docker run command in each region to start all OMS nodes. For more information about the command, see the "Start the new container of OMS V4.3.1" step in the "Upgrade OMS from V4.0.1 or later to V4.3.1" section.

Go to the new container.

```bash
docker exec -it ${CONTAINER_NAME} bash
```

Perform the following steps on one OMS node of each region except the last region.
Go to the container and run the following command so that the OMS console enters the STOPPED state:

```bash
supervisorctl stop oms_console
```

Execute the .jar upgrade package.

```bash
/opt/alibaba/java/bin/java -jar correction-1.0-SNAPSHOT-jar-with-dependencies.jar -mupgrade -y/home/admin/conf/config.yaml -lfalse
```

Notice

Replace the parameter values based on the actual situation.

| Parameter | Description |
| --- | --- |
| -m | The running mode. The valid value is UPGRADE. |
| -y | The absolute path of the OMS configuration file. |
| -l | Specifies whether this upgrade node is the last one. In single-region scenarios, set this parameter to true. In multi-region scenarios, set this parameter to false for all regions except the last one, and to true for the last region only. |

Note

In multi-region, multi-node scenarios, you need to execute the .jar upgrade package on only the first node in each region. When you perform the upgrade for the last region, set the -l parameter to true.

After the upgrade JAR is executed, run the metadata initialization command in the root directory.

```bash
python -m omsflow.scripts.units.oms_cluster_manager add_resource
```
Perform the following steps on one OMS node of the last region.
Go to the container and run the following command so that the OMS console enters the STOPPED state:

```bash
supervisorctl stop oms_console
```

Execute the .jar upgrade package.

```bash
/opt/alibaba/java/bin/java -jar correction-1.0-SNAPSHOT-jar-with-dependencies.jar -mupgrade -y/home/admin/conf/config.yaml -ltrue
```

After the upgrade JAR is executed, run the metadata initialization command in the root directory.

```bash
python -m omsflow.scripts.units.oms_cluster_manager add_resource
```
For the separated deployment mode, the upgrade procedure is as follows:
- (Optional) Set the primary region and other regions.
OMS V4.3.1 and later versions support setting the primary region when OMS is deployed in multiple regions. After the primary region is set, you can start the management process only in the primary region. The management processes in other regions will not be started. During the upgrade, if you need to distinguish between the primary region and other regions, refer to the steps in the "Set the primary region and other regions" section.
Notice
You must set the primary region before you set other regions.
After you set the primary region and other regions, you must start the management process in the primary region before you start the management processes in other regions.
Run the docker run command in each region to start a management node and all component nodes. For more information about the command, see the "Start the new container of OMS V4.3.1" step in the "Upgrade OMS from V4.0.1 or later to V4.3.1" section.

Perform the following steps on one management node of each region except the last region.

Go to the container and run the following command so that the OMS console enters the STOPPED state:

```bash
supervisorctl stop oms_console
```

Execute the .jar upgrade package.

```bash
/opt/alibaba/java/bin/java -jar correction-1.0-SNAPSHOT-jar-with-dependencies.jar -mupgrade -y/home/admin/conf/config.yaml -lfalse
```

After the upgrade JAR is executed, run the metadata initialization command in the root directory.

```bash
python -m omsflow.scripts.units.oms_cluster_manager add_resource
```
Perform the following steps on one management node of the last region.
Go to the container and run the following command so that the OMS console enters the STOPPED state:

```bash
supervisorctl stop oms_console
```

Execute the .jar upgrade package.

```bash
/opt/alibaba/java/bin/java -jar correction-1.0-SNAPSHOT-jar-with-dependencies.jar -mupgrade -y/home/admin/conf/config.yaml -ltrue
```

After the upgrade JAR is executed, run the metadata initialization command in the root directory.

```bash
python -m omsflow.scripts.units.oms_cluster_manager add_resource
```
Run the following command:

```bash
sh /root/docker_upgrade.sh
```

- In the integrated deployment mode, you can run this command on any OMS node.
- In the separated deployment mode, you can run this command on any management node.
Upgrade the CM databases to V4.3.1.
Notice
- You need to perform this step only in multi-region deployment mode. If OMS is deployed in a single region, skip this step.
- In the integrated deployment mode, you need to stop all OMS nodes before you upgrade the CM databases. In the separated deployment mode, you need to stop all management nodes in all regions.
Prepare OBLOADER & OBDUMPER.
Download the software package of OBLOADER & OBDUMPER from OceanBase Download Center.
For more information, see OBLOADER & OBDUMPER documentation.
Go to the server and decompress the tool package.
In this example, the tool package is named ob-loader-dumper-4.3.1.1-RELEASE.zip.

```bash
cd /home/admin
unzip ob-loader-dumper-4.3.1.1-RELEASE.zip
```
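A quick sanity check, assuming the archive unpacks into a directory of the same name, is to list the tool binaries used in the following steps:

```bash
# obdumper and obloader should both be present.
ls /home/admin/ob-loader-dumper-4.3.1.1-RELEASE/bin/
```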
Export data.
In OMS V4.2.5 and earlier, all regions share the same CM database and have their respective CM heartbeat databases. In this step, you need to export the data of the shared CM database and the CM heartbeat database of each region, which will be imported to the new databases later.
Run the following command on the server to export data from the CM database:
```bash
ob-loader-dumper-4.3.1.1-RELEASE/bin/obdumper -h ${oms_cm_meta_host} -P ${oms_cm_meta_port} -u ${oms_cm_meta_user} -p "${oms_cm_meta_password}" --sys-user ${drc_user} --sys-password "${drc_user_password}" -D ${drc_cm_db} --table '*' --ddl --sql -f oms_cm_bak/
```

Run the following command to export data from the CM heartbeat database of each region. The drc_cm_heartbeat_db value differs for each region. You can obtain the information from the config.yaml configuration file.

```bash
ob-loader-dumper-4.3.1.1-RELEASE/bin/obdumper -h ${oms_cm_meta_host} -P ${oms_cm_meta_port} -u ${oms_cm_meta_user} -p "${oms_cm_meta_password}" --sys-user ${drc_user} --sys-password "${drc_user_password}" -D ${drc_cm_heartbeat_db} --table '*' --ddl --sql -f oms_cm_hb_${region}_bak/
```

The following table describes the value sources of the parameters in the commands.
| Parameter | Value |
| --- | --- |
| oms_cm_meta_host, oms_cm_meta_port, oms_cm_meta_user, oms_cm_meta_password, drc_cm_heartbeat_db, drc_cm_db | The values are sourced from the config.yaml configuration file. |
| drc_user, drc_user_password | The data replication center (DRC) username and password in the sys tenant. For more information, see the "Create a DRC user" section in Create a database user. |
| region | The region identifier, which is sourced from the config.yaml configuration file. Examples: cn_hangzhou and hangzhou. |

View the data export results.
After the data export, a shared oms_cm_bak directory and an oms_cm_hb_${region}_bak directory for each region are generated. For example, the oms_cm_hb_shanghai_bak directory is generated for the Shanghai region, and the oms_cm_hb_hangzhou_bak directory is generated for the Hangzhou region.
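Before moving on, you can confirm that the export directories exist and are non-empty; a minimal check, assuming the export commands above were run from the current directory:

```bash
# List the shared CM export and the per-region heartbeat exports; sizes should be non-zero.
ls -d oms_cm_bak oms_cm_hb_*_bak
du -sh oms_cm_bak oms_cm_hb_*_bak
```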
On the server, connect to the target database instance and create a CM database and a CM heartbeat database for each region.
You can use the original database instance and tenant or a new database instance and tenant. Note that the OceanBase Database version of a new database must be later than or equal to that of the old database. Otherwise, importing data to the new database will fail.
A new database is named in the format of the original database name plus the region name. Here are some examples:
- If the region name is cn-hangzhou and the original CM database and CM heartbeat database are named oms_cm and oms_cm_hb respectively, the new databases are named oms_cm_cn_hangzhou and oms_cm_hb_cn_hangzhou, respectively.

  ```sql
  CREATE DATABASE oms_cm_cn_hangzhou;
  CREATE DATABASE oms_cm_hb_cn_hangzhou;
  ```

- If the region name is hangzhou and the original CM database and CM heartbeat database are named oms_cm and oms_cm_hb respectively, the new databases are named oms_cm_hangzhou and oms_cm_hb_hangzhou, respectively.

  Note

  If a database name already exists, you can add a suffix to the database name.

  ```sql
  CREATE DATABASE oms_cm_hangzhou;
  CREATE DATABASE oms_cm_hb_hangzhou;
  ```
Import data to the new CM database and CM heartbeat database of each region.
Import data to the new CM database
```bash
ob-loader-dumper-4.3.1.1-RELEASE/bin/obloader -h ${region_oms_cm_meta_host} -P ${region_oms_cm_meta_port} -u ${region_oms_cm_meta_user} -p "${region_oms_cm_meta_password}" --sys-user ${drc_user} --sys-password "${drc_user_password}" -D ${region_drc_cm_db} --table '*' --ddl --sql -f oms_cm_bak/
```

| Parameter | Description |
| --- | --- |
| region_oms_cm_meta_host, region_oms_cm_meta_port, region_oms_cm_meta_user, region_oms_cm_meta_password, region_drc_cm_db | The connection string and names of the new databases created for the current region. |
| drc_user, drc_user_password | The DRC username and password in the sys tenant. For more information, see the "Create a DRC user" section in Create a database user. |
| oms_cm_bak | The exported data file of the original CM database. |
| region | The region identifier, which is sourced from the config.yaml configuration file. Examples: cn_hangzhou and hangzhou. |

Import data to the new CM heartbeat database
```bash
ob-loader-dumper-4.3.1.1-RELEASE/bin/obloader -h ${region_oms_cm_meta_host} -P ${region_oms_cm_meta_port} -u ${region_oms_cm_meta_user} -p "${region_oms_cm_meta_password}" --sys-user ${drc_user} --sys-password "${drc_user_password}" -D ${region_drc_cm_heartbeat_db} --table '*' --ddl --sql -f oms_cm_hb_${region}_bak/
```

Note

Import the data of the original CM heartbeat database to the new CM heartbeat database of the corresponding region, which is named in the oms_cm_hb_${region} format, for example, oms_cm_hb_hangzhou or oms_cm_hb_shanghai; ${region_drc_cm_heartbeat_db} in the command above refers to this database. oms_cm_hb_${region}_bak is the exported data file of the original CM heartbeat database. For example, oms_cm_hb_hangzhou_bak is the exported data file of the original CM heartbeat database of the Hangzhou region, and oms_cm_hb_shanghai_bak is the exported data file of the original CM heartbeat database of the Shanghai region.
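To spot-check the imports, you can list the tables in the new databases; a sketch assuming the Hangzhou database names used in the examples above:

```sql
-- The new CM database should contain the tables exported from the original CM database.
SHOW TABLES FROM oms_cm_hangzhou;
-- The new CM heartbeat database should contain the heartbeat tables, such as heatbeat_sequence.
SHOW TABLES FROM oms_cm_hb_hangzhou;
```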
Clear redundant data.
Connect to the CM database of each region and clear the data in the host and resource_group tables.

```sql
USE oms_cm_${region};
# Use fixed table names in the following statements.
TRUNCATE host;
TRUNCATE resource_group;
```

Modify the config.yaml configuration file on the host.

Update the connection strings of the new databases in the config.yaml configuration file on all nodes of the corresponding region. You need to modify the drc_cm_db, drc_cm_heartbeat_db, oms_cm_meta_host, oms_cm_meta_password, oms_cm_meta_port, and oms_cm_meta_user parameters.
You need to modify these parameters on all nodes in the integrated deployment mode, and on the management and component nodes in the separated deployment mode.

```yaml
"cm_location": "2"
"drc_rm_db": "oms_rm"
"drc_cm_heartbeat_db": "oms_cm_hb_hangzhou"
"drc_cm_db": "oms_cm_hangzhou"
"cm_nodes":
- "xxx.xxx.xxx.1"
- "xxx.xxx.xxx.2"
"cm_region": "cn-hangzhou"
"cm_region_cn": "cn-hangzhou"
"cm_url": "http://xxx.xxx.xxx.1:8088"
"oms_rm_meta_host": "xxx.xxx.xxx.3"
"oms_cm_meta_host": "xxx.xxx.xxx.4"
"oms_rm_meta_password": "*******"
"oms_cm_meta_password": "*******"
"oms_rm_meta_port": "2883"
"oms_cm_meta_port": "2883"
"oms_rm_meta_user": "****@oms_mysql#*****"
"oms_cm_meta_user": "****@oms_mysql#*****"
```

After you modify the config.yaml configuration file for each region, run the sh docker_init.sh command in each region to start the OMS service.

- In the integrated deployment mode, you need to run this command on all nodes of each region.
- In the separated deployment mode, you need to run this command on the management node of each region.
On the System Parameters page, enable HA and configure the related parameters.
1. Log in to the OMS console.
2. In the left-side navigation pane, choose System Management > System Parameters.
3. On the System Parameters page, find ha.config.
4. Click the edit icon in the Value column of the parameter.
5. In the Modify Value dialog box, set enable to true to enable HA, and record the time as T2.

We recommend that you set the perceiveStoreClientCheckpoint parameter to true. After that, you do not need to record T1 and T2.

- If you set the perceiveStoreClientCheckpoint parameter to true, you can use the default value 30min of the refetchStoreIntervalMin parameter. When HA is enabled, the system starts the Store component from the earliest request time of the downstream components minus the value of the refetchStoreIntervalMin parameter. For example, if the earliest request time of the downstream Connector or JDBC-Connector component is 12:00:00 and the refetchStoreIntervalMin parameter is set to 30 minutes, the system starts the Store component from 11:30:00.
- If you set the perceiveStoreClientCheckpoint parameter to false, you need to modify the value of the refetchStoreIntervalMin parameter as needed. refetchStoreIntervalMin specifies the time interval, in minutes, for pulling data from the Store component. The value must be greater than T2 minus T1.
(Optional) To roll back the OMS upgrade, perform the following steps:
Disable the HA feature based on Step 1.
Stop the new container and record the time as T3.
```bash
sudo docker stop ${CONTAINER_NAME}
```

Connect to the MetaDB and run the following commands:

```sql
drop database rm_340;
drop database cm_340;
drop database cm_hb_340;
create database rm_340;
create database cm_340;
create database cm_hb_340;
```

Restore the original databases based on the SQL files created in Step 2.

```bash
mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/rm_340.sql" -Drm_340
mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_340.sql" -Dcm_340
mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_hb_340.sql" -Dcm_hb_340
```

Restart the container of OMS V3.4.0.
```bash
sudo docker restart ${CONTAINER_NAME}
```

On the System Parameters page, enable HA.
Note
We recommend that you set the perceiveStoreClientCheckpoint parameter to true.

The HA feature automatically starts disaster recovery and the Incr-Sync component. However, you must manually resume the Full-Import component.
After the upgrade is complete, clear the browser cache before you log in to OMS.
Set the primary region and other regions
Log in to the host and run the following command to query the mount path of the config.yaml configuration file in the old container.

```bash
sudo docker inspect ${CONTAINER_NAME} --format '{{range .Mounts}}{{if eq .Destination "/home/admin/conf/config.yaml"}}{{.Source}}{{end}}{{end}}'
```

Notice

Replace CONTAINER_NAME with the name or ID of the OMS container.
Modify the config.yaml configuration file in the primary region.

Note

If the primary_region_ip parameter does not exist or is empty, the current region is the primary region. If the primary_region_ip parameter is not empty, the current machine is in another region and does not start the console. During initialization, the specified primary_region_ip is requested. For example, "primary_region_ip": "xxx.xxx.xxx.1".

```yaml
# RM and CM metadata information
oms_cm_meta_host: ${oms_cm_meta_host}
oms_cm_meta_password: ${oms_cm_meta_password}
oms_cm_meta_port: ${oms_cm_meta_port}
oms_cm_meta_user: ${oms_cm_meta_user}
oms_rm_meta_host: ${oms_rm_meta_host}
oms_rm_meta_password: ${oms_rm_meta_password}
oms_rm_meta_port: ${oms_rm_meta_port}
oms_rm_meta_user: ${oms_rm_meta_user}
# You can customize the names of the following three databases. OMS creates these three databases in the metadata database during deployment.
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
# Fill in the configuration of the OMS cluster in Hangzhou.
# In a multi-region, multi-node deployment, the cm_url parameter must be set to the VIP or domain name of all CMs in the current region.
cm_url: ${cm_url}
cm_location: ${cm_location}
cm_region: ${cm_region}
cm_region_cn: ${cm_region_cn}
cm_nodes:
- ${host_ip1}
- ${host_ip2}
console_nodes:
- ${host_ip3}
- ${host_ip4}
# If the primary_region_ip parameter is empty, the current region is the primary region.
"primary_region_ip": ""
# Time series database configuration
# The default value is false. If you want to enable the metric reporting feature, set this parameter to true.
# tsdb_enabled: false
# If tsdb_enabled is set to true, uncomment the following parameters and set their values based on your actual situation.
# tsdb_service: 'INFLUXDB'
# tsdb_url: '${tsdb_url}'
# tsdb_username: ${tsdb_user}
# tsdb_password: ${tsdb_password}
```
Modify the config.yaml configuration file in other regions.

```yaml
oms_cm_meta_host: ${oms_cm_meta_host}
oms_cm_meta_password: ${oms_cm_meta_password}
oms_cm_meta_port: ${oms_cm_meta_port}
oms_cm_meta_user: ${oms_cm_meta_user}
oms_rm_meta_host: ${oms_rm_meta_host}
oms_rm_meta_password: ${oms_rm_meta_password}
oms_rm_meta_port: ${oms_rm_meta_port}
oms_rm_meta_user: ${oms_rm_meta_user}
drc_rm_db: ${drc_rm_db}
drc_cm_db: ${drc_cm_db}
drc_cm_heartbeat_db: ${drc_cm_heartbeat_db}
cm_url: ${cm_url}
cm_location: ${cm_location}
cm_region: ${cm_region}
cm_region_cn: ${cm_region_cn}
cm_nodes:
- ${host_ip1}
- ${host_ip2}
console_nodes:
- ${host_ip3}
- ${host_ip4}
# The current region is not the primary region. The primary region is xxx.xxx.xxx.xxx.
"primary_region_ip": "xxx.xxx.xxx.xxx"
tsdb_enabled: true
tsdb_service: 'INFLUXDB'
tsdb_url: '${tsdb_url}'
tsdb_username: ${tsdb_user}
tsdb_password: ${tsdb_password}
```