

OceanBase Migration Service

V4.3.1, Enterprise Edition



Upgrade OMS in single-node deployment mode

Last Updated: 2025-10-21 10:56:45
What is on this page
Background information
Upgrade OMS from V4.0.1 or later to V4.3.1
Upgrade OMS to V4.3.1 from V3.2.1 or a version later than V3.2.1 but earlier than V4.0.1
Prerequisites
Procedure


OceanBase Migration Service (OMS) V3.2.1 and later can be directly upgraded to V4.3.1. This topic describes how to upgrade OMS in single-node deployment mode in different scenarios.

Background information

An upgrade to OMS V4.3.1 falls into one of the following two scenarios, depending on your current version:

  • The current version is V3.2.1 or later but earlier than V4.0.1.

    Notice

OMS of a version earlier than V3.2.1 must first be upgraded to V3.2.1.

    To upgrade OMS to V4.3.1 from V3.2.1 or later but earlier than V4.0.1, you must perform two additional steps compared with an upgrade from V4.0.1 or later:

    • Check the prerequisites below.

    • Execute the upgrade package in the .jar format during the upgrade.

      Notice

      OMS V4.0.1 integrates the migration and synchronization frameworks, which involves restructuring table schemas. To upgrade OMS from a version earlier than V4.0.1 to V4.0.1 or later, you must restructure the table schemas. Do not perform this operation in any other scenario.

  • The current version is V4.0.1 or later.

Before you upgrade OMS to V4.3.1 in the preceding two scenarios, check the following prerequisites.

  • If you want to deploy the cluster manager (CM) database separately, make sure that all data migration and synchronization tasks have proceeded to the expected step.

    • If reverse increment is enabled:

      • If a task includes the full migration and incremental synchronization steps, the task must enter the reverse increment step before the upgrade.

      • If a task includes the full migration step but not the incremental synchronization step, the task must complete the full migration step before the upgrade.

      • If a task includes the incremental synchronization step but not the full migration step, the task must enter the reverse increment step before the upgrade.

      • If a task includes neither the full migration step nor the incremental synchronization step, there are no pre-upgrade requirements for the task.

    • If reverse increment is disabled, there are no pre-upgrade requirements for any task.
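The readiness rules above can be sketched as a small decision helper (a hypothetical function; the names are illustrative, and the logic simply mirrors the cases listed):

```python
def required_step_before_upgrade(has_full_migration, has_incr_sync,
                                 reverse_increment_enabled):
    """Return the step a task must reach before the upgrade, per the rules
    above, or None if the task has no pre-upgrade requirement."""
    if not reverse_increment_enabled:
        # With reverse increment disabled, no task has requirements.
        return None
    if has_incr_sync:
        # Any task with incremental synchronization (with or without full
        # migration) must enter the reverse increment step first.
        return "enter the reverse increment step"
    if has_full_migration:
        # Full migration only: that step must complete before the upgrade.
        return "complete the full migration step"
    return None
```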

  • Modify the config.yaml file.

    Notice

    Do not modify the file for an OMS instance deployed by using OAT of a version earlier than V4.3.2.

    • Add the following keys:

      • oms_rm_meta_host and oms_cm_meta_host: the values are sourced from oms_meta_host.

      • oms_rm_meta_port and oms_cm_meta_port: the values are sourced from oms_meta_port.

      • oms_rm_meta_password and oms_cm_meta_password: the values are sourced from oms_meta_password.

      • oms_rm_meta_user and oms_cm_meta_user: the values are sourced from oms_meta_user.

    • Delete the keys oms_meta_host, oms_meta_port, oms_meta_password, and oms_meta_user.

      In OMS V4.3.1, you must delete these four keys from the config.yaml configuration file. Otherwise, the connection strings of the resource manager (RM) and CM databases may be lost when you run the docker_init.sh script.

    Here is a sample config.yaml configuration file of OMS V4.3.0:

    "cm_location": "2"
    "cm_nodes":
    - "xxx.xxx.xxx.1"
    - "xxx.xxx.xxx.2"
    "cm_region": "cn-hangzhou"
    "cm_region_cn": "cn-hangzhou"
    "cm_url": "http://xxx.xxx.xxx.1:8088"
    "drc_cm_db": "oms_cm"
    "drc_cm_heartbeat_db": "oms_cm_hb_hangzhou"
    "drc_rm_db": "oms_rm"
    "oms_meta_host": "xxx.xxx.xxx.3"
    "oms_meta_password": "ob_password"
    "oms_meta_port": "2883"
    "oms_meta_user": "oms_meta_user"
    

    Here is a modified config.yaml configuration file for OMS V4.3.1:

    "cm_location": "2"
    "cm_nodes":
    - "xxx.xxx.xxx.1"
    - "xxx.xxx.xxx.2"
    "cm_region": "cn-hangzhou"
    "cm_region_cn": "cn-hangzhou"
    "cm_url": "http://xxx.xxx.xxx.1:8088"
    "drc_cm_db": "oms_cm"
    "drc_cm_heartbeat_db": "oms_cm_hb_hangzhou"
    "drc_rm_db": "oms_rm"
    "oms_cm_meta_host": "xxx.xxx.xxx.3"
    "oms_cm_meta_password": "ob_password"
    "oms_cm_meta_port": "2883"
    "oms_cm_meta_user": "oms_meta_user"
    "oms_rm_meta_host": "xxx.xxx.xxx.3"
    "oms_rm_meta_password": "ob_password"
    "oms_rm_meta_port": "2883"
    "oms_rm_meta_user": "oms_meta_user"
    

Upgrade OMS from V4.0.1 or later to V4.3.1

  1. If high availability (HA) is enabled, record the current value of the ha.config parameter and disable HA.

    1. Log in to the OMS console.

    2. In the left-side navigation pane, choose System Management > System Parameters.

    3. On the System Parameters page, find ha.config.

    4. Click the edit icon in the Value column of the parameter.

    5. In the Modify Value dialog box, copy and save the current value of ha.config, and set enable to false to disable HA.

    6. Click OK.

  2. Back up databases.

    1. Stop the container of OMS V4.3.0 and record the time as T1.

      sudo docker stop ${CONTAINER_NAME}
      

      Note

      CONTAINER_NAME specifies the name of the container.

      • If you deploy the system in the single-node deployment mode or in the separated deployment mode without the need to keep data migration or data synchronization tasks running, perform the upgrade operations on the management and component containers as described in this section.

      • If you deploy the system in the separated deployment mode and need to keep data migration or data synchronization tasks running, do not stop the OMS V4.3.0 component container. You can use the upgrade assistant tool to upgrade the component container and then perform the upgrade operations on the management container as described in this section.

        The following procedure describes how to upgrade the component container in the separated deployment mode.

        1. Contact technical support to obtain the installation package of the upgrade assistant tool. The installation package for the x86 architecture is named oms-<version number>_x86_upgrade_tools-amd64-xxxxxxxxxxxxx, and the installation package for the ARM architecture is named oms-<version number>_arm_upgrade_tools-arm64-xxxxxxxxxxxxx.
        2. Load the downloaded upgrade assistant tool installation package into the local image repository of the Docker container.
          docker load -i <upgrade assistant tool installation package>
          
        3. Find the host directory that is mounted to /home/ds/run in the OMS container.
          docker inspect ${OMS container name} | jq -r '.[0].Mounts[] | select(.Destination == "/home/ds/run") | .Source'
          
        4. Run the following docker run command, filling in the values obtained in the previous steps.
          sudo docker run -d -v ${directory parsed in the previous step}/rpm:/root/rpm --name oms-upgrade-tool ${upgrade assistant tool image ID} 
          
        5. In the OMS component container, go to the rpm directory.
          cd /home/ds/run/rpm
          
        6. Upgrade the component container.
          sh support_upgrade_action.sh
          

          After the command is executed, the component container is upgraded without interrupting data migration or data synchronization tasks.

        7. After the component container is upgraded, you can remove the upgrade assistant tool.
          sudo docker rmi ${upgrade assistant tool image ID}
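
The jq filter in sub-step 3 can be sanity-checked offline against a trimmed, hand-written sample of docker inspect output before you run it against the real container (the mount paths below are illustrative; real output contains many more fields):

```shell
# Run the jq filter from sub-step 3 against a minimal hand-written sample
# of `docker inspect` output to confirm which field it extracts.
sample='[{"Mounts":[{"Destination":"/home/admin/logs","Source":"/data/oms/oms_logs"},{"Destination":"/home/ds/run","Source":"/data/oms/oms_run"}]}]'
echo "$sample" | jq -r '.[0].Mounts[] | select(.Destination == "/home/ds/run") | .Source'
# Prints: /data/oms/oms_run
```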
          
    2. Log in to the CM heartbeat database specified in the configuration file and back up data.

      # Log in to the CM heartbeat database specified in the configuration file.
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -Dcm_hb_430
      
      # Create an intermediate table.
      CREATE TABLE IF NOT EXISTS `heatbeat_sequence_bak` (
      `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'PK',
      `gmt_created` datetime NOT NULL,
      `gmt_modified` datetime NOT NULL,
      PRIMARY KEY (`id`)
      ) DEFAULT CHARSET=utf8 COMMENT='Heartbeat sequence table';
      
      # Back up data to the intermediate table.
      INSERT INTO heatbeat_sequence_bak SELECT `id`,`gmt_created`,`gmt_modified` FROM heatbeat_sequence ORDER BY `id` DESC LIMIT 1;
      
      # Rename the heatbeat_sequence table and the intermediate table.
      # The heatbeat_sequence table provides auto-increment IDs and reports heartbeats.
      ALTER TABLE `heatbeat_sequence` RENAME TO `heatbeat_sequence_bak2`;
      ALTER TABLE `heatbeat_sequence_bak` RENAME TO `heatbeat_sequence`;
      
      # Delete the original table.
      DROP TABLE heatbeat_sequence_bak2;
      
    3. Run the following commands to back up the rm, cm, and cm_hb databases as SQL files and make sure that the sizes of the three files are not 0:

      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false rm_430 > /home/admin/rm_430.sql
      
      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_430 > /home/admin/cm_430.sql
      
      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_hb_430 > /home/admin/cm_hb_430.sql
      
      Parameter Description
      -h The IP address of the host from which the data is exported.
      -P The port number used to connect to the database.
      -u The username used to connect to the database.
      -p The password used to connect to the database.
      --triggers Specifies whether to export triggers. The commands above set this option to false to exclude triggers from the dump files.
      rm_430, cm_430, and cm_hb_430 The names of the databases to back up. Each command follows the format mysqldump <options> <database name> > <SQL file storage path>. Replace the values based on the actual environment.
    4. Back up the config.yaml configuration file.
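
The "sizes are not 0" requirement in sub-step 3 is easy to automate. Below is a sketch of a hypothetical helper (the name `check_dumps` is illustrative) that fails fast if any dump file is missing or empty:

```shell
# Hypothetical helper: abort if any backup SQL file is missing or has
# size 0, as required by the backup step.
check_dumps() {
  local f
  for f in "$@"; do
    if [ ! -s "$f" ]; then
      echo "empty or missing dump: $f" >&2
      return 1
    fi
  done
  echo "all dumps OK"
}
```

Usage after the mysqldump commands above: `check_dumps /home/admin/rm_430.sql /home/admin/cm_430.sql /home/admin/cm_hb_430.sql`.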

  3. Load the downloaded OMS installation package to the local image repository of the Docker container.

    docker load -i <OMS installation package>
    
  4. Confirm the following information:

    • The config.yaml configuration file is suitable for OMS V4.3.1 and the CM database for each region is as expected.

    • The three disk mount paths of OMS are the same as those before the upgrade.

      You can run the sudo docker inspect ${CONTAINER_NAME} | grep -A5 'Binds' command to view the paths of disks mounted to the old OMS container.

    • The image ID is correct.
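
To make the mount-path check concrete, the sketch below compares the bind paths recorded from the old container (saved to a file, one per line, from the docker inspect output) with the -v source directories planned for the new container. The helper name and file layout are assumptions for illustration:

```shell
# Hypothetical check: every planned -v source directory for the new
# container must appear in the list of bind paths recorded from the old one.
check_mounts() {
  local recorded="$1"; shift   # file with old host paths, one per line
  local p
  for p in "$@"; do
    if ! grep -qx "$p" "$recorded"; then
      echo "mount path changed: $p" >&2
      return 1
    fi
  done
  echo "mount paths unchanged"
}
```

Usage: save the old paths to a file, then run `check_mounts old_paths.txt /data/oms/oms_logs /data/oms/oms_store /data/oms/oms_run` before starting the new container.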

  5. Start the container of OMS V4.3.1.

    You can access the OMS console by using an HTTP or HTTPS URL. To securely access the OMS console, install an SSL certificate and mount it to the specified directory in the container. The certificate is not required for HTTP access.

    OMS_HOST_IP=xxx
    CONTAINER_NAME=oms_xxx
    IMAGE_TAG=feature_x.x.x
    
    # The two -v lines for oms_server.crt and oms_server.key are required
    # only if you mount an SSL certificate to the OMS container. Remove
    # them for HTTP-only access.
    docker run -dit --net host \
    -v /data/config.yaml:/home/admin/conf/config.yaml \
    -v /data/oms/oms_logs:/home/admin/logs \
    -v /data/oms/oms_store:/home/ds/store \
    -v /data/oms/oms_run:/home/ds/run \
    -v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
    -v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
    -e OMS_HOST_IP=${OMS_HOST_IP} \
    -e IS_UPGRADE=true \
    --privileged=true \
    --pids-limit -1 \
    --ulimit nproc=65535:65535 \
    --name ${CONTAINER_NAME} \
    work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
    
    Parameter Description
    OMS_HOST_IP The IP address of the host.
    CONTAINER_NAME The name of the container, in the oms_xxx format. Specify xxx based on the actual OMS version. For example, if you use OMS V4.3.1, the value is oms_431.
    IMAGE_TAG The unique identifier of the loaded image. After you load the OMS installation package by using Docker, run the docker images command to obtain the [IMAGE ID] or [REPOSITORY:TAG] value of the loaded image, which uniquely identifies the image (<OMS_IMAGE>).
    /data/oms/oms_logs, /data/oms/oms_store, and /data/oms/oms_run The mount directories created on the server where OMS is deployed. They respectively store the runtime log files of OMS, the files generated by the Store component, and the files generated by the Incr-Sync component, for local data persistence. You can replace them with other directories on the server.
    Notice: the mount directories must remain unchanged during subsequent redeployments or upgrades.
    /home/admin/logs, /home/ds/store, and /home/ds/run The default directories in the container. They cannot be modified.
    /data/oms/https_crt and /data/oms/https_key (optional) The mount paths of the SSL certificate and key in the OMS container. If you mount an SSL certificate, the Nginx service in the OMS container runs in HTTPS mode, and you can access the OMS console only by using the HTTPS URL.
    IS_UPGRADE Specifies whether the current scenario is an upgrade. Note that IS_UPGRADE must be in uppercase.
    privileged Specifies whether to grant extended privileges to the container.
    pids-limit The limit on the number of processes in the container. The value -1 indicates that the number is unlimited.
    ulimit nproc The maximum number of user processes.
  6. Run the sh /root/docker_upgrade.sh command.

    • In the integrated deployment mode, you can run this command on any OMS node.

    • In the separated deployment mode, you can run this command on any management node.

  7. On the System Parameters page, enable HA and configure the related parameters.

    1. Log in to the OMS console.

    2. In the left-side navigation pane, choose System Management > System Parameters.

    3. On the System Parameters page, find ha.config.

    4. Click the edit icon in the Value column of the parameter.

    5. In the Modify Value dialog box, set enable to true to enable HA, and record the time as T2.

    6. We recommend that you set the perceiveStoreClientCheckpoint parameter to true. After that, you do not need to record T1 and T2.

      If you set the perceiveStoreClientCheckpoint parameter to true, you can use the default value 30min of the refetchStoreIntervalMin parameter. Because HA is enabled, the system starts the Store component at the earliest request time of the downstream components minus the value of the refetchStoreIntervalMin parameter. For example, if the earliest request time of the downstream Connector or JDBC-Connector component is 12:00:00 and the refetchStoreIntervalMin parameter is set to 30 minutes, the system starts the Store component at 11:30:00.

      If you set the perceiveStoreClientCheckpoint parameter to false, you need to modify the value of the refetchStoreIntervalMin parameter as needed. refetchStoreIntervalMin specifies the time interval, in minutes, for pulling data from the Store component. The value must be greater than T2 minus T1.
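
When perceiveStoreClientCheckpoint is false, the lower bound for refetchStoreIntervalMin follows directly from the recorded T1 and T2. The sketch below computes it with GNU date; the function name is illustrative:

```shell
# Compute a valid refetchStoreIntervalMin (in minutes) from the recorded
# times T1 (old container stopped) and T2 (HA re-enabled). The value
# must be strictly greater than T2 - T1, so the elapsed seconds are
# rounded up to whole minutes and 1 is added as a safety margin.
# Requires GNU date for the `-d` timestamp parsing.
min_refetch_interval() {
  local t1 t2
  t1=$(date -d "$1" +%s)
  t2=$(date -d "$2" +%s)
  echo $(( (t2 - t1 + 59) / 60 + 1 ))
}
```

For example, `min_refetch_interval "2024-05-01 12:00:00" "2024-05-01 12:45:00"` prints 46, which is strictly greater than the 45 elapsed minutes.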

  8. (Optional) To roll back the OMS upgrade, perform the following steps:

    1. Disable the HA feature based on Step 1.

    2. Stop the new container and record the time as T3.

      sudo docker stop ${CONTAINER_NAME}
      
    3. Connect to the MetaDB and run the following commands:

      drop database rm_430;
      drop database cm_430;
      drop database cm_hb_430;
      
      create database rm_430;
      create database cm_430;
      create database cm_hb_430;
      
    4. Restore the original databases based on the SQL files created in Step 2.

      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/rm_430.sql" -Drm_430
      
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_430.sql" -Dcm_430
      
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_hb_430.sql" -Dcm_hb_430
      
    5. Restart the container of OMS V4.3.0.

      sudo docker restart ${CONTAINER_NAME}
      
    6. On the System Parameters page, enable HA.

      Note

      • We recommend that you set the perceiveStoreClientCheckpoint parameter to true.

      • The HA feature automatically starts disaster recovery and the Incr-Sync component. However, you must manually resume the Full-Import component.

  9. After the upgrade is complete, clear the browser cache before you log in to OMS.

Upgrade OMS from V3.2.1, or a version later than V3.2.1 but earlier than V4.0.1, to V4.3.1

Prerequisites

  • Before the upgrade, check whether data migration or data synchronization tasks with duplicate names exist. If so, rename the tasks so that all task names are unique.

    Run the following commands to check for tasks with duplicate names:

    • Data migration tasks

      SELECT project_name,count(*) AS count,group_concat(id) AS ids FROM oms_project WHERE project_status != "DELETED" GROUP BY project_name HAVING count(*) > 1;
      
    • Data synchronization tasks

      SELECT project_name,count(*) AS count,group_concat(id) AS ids FROM oms_sync_project WHERE project_status != "DELETED" GROUP BY project_name HAVING count(*) > 1;
      

    If tasks with duplicate names exist, rename the tasks in sequence. The syntax for renaming tasks is as follows:

    • Data migration tasks

      UPDATE oms_project SET project_name=<New name of the data migration task> WHERE id=<ID of the data migration task>;
      
    • Data synchronization tasks

      UPDATE oms_sync_project SET project_name=<New name of the data synchronization task> WHERE id=<ID of the data synchronization task>;
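
If the check query returns many duplicates, the UPDATE statements can be generated rather than written by hand. The sketch below suffixes each task name with its ID, which is one simple way to guarantee uniqueness; the helper name is illustrative, and the generated SQL should be reviewed before it is executed against the MetaDB.

```shell
# Hypothetical generator: given a table name, a duplicated project name,
# and the comma-separated IDs returned by the check query, emit one
# UPDATE statement per task that appends the ID to the name.
gen_renames() {
  local table="$1" name="$2" ids="$3" id
  for id in ${ids//,/ }; do
    printf "UPDATE %s SET project_name='%s_%s' WHERE id=%s;\n" \
      "$table" "$name" "$id" "$id"
  done
}
```

For example, `gen_renames oms_project migration_task "12,35"` emits two UPDATE statements, one per duplicate ID.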
      
  • If you use an OceanBase data source as both the target of one task and the source of another, and you have updated the blackRegionNo parameter of JDBCWriter, perform the following steps:

    1. In the OMS container, run the following command to obtain the value of cm_location:

      cat /home/admin/conf/config.yaml  | grep 'cm_location'
      
    2. Log in to the drc_cm database of OMS and run the following command:

      -- Replace xxx with the value of cm_location that you obtained in the previous step.
      SELECT * FROM config_job WHERE `key`='sourceFile.blackRegionNo' AND value != xxx;
      

      If the query result is not empty and a data source is still used as both the target of one task and the source of another, contact OMS Technical Support. If the query result is empty, proceed with the upgrade operations.
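
Steps 1 and 2 can be combined by reading cm_location from config.yaml and substituting it into the query. A sketch assuming the quoted `"cm_location": "N"` layout shown in the config.yaml samples earlier on this page (the helper name is illustrative):

```shell
# Hypothetical helper: extract cm_location from a config.yaml in the
# quoted "key": "value" layout and print the blackRegionNo check query.
build_check_sql() {
  local cfg="$1" loc
  loc=$(sed -n 's/^"cm_location": "\(.*\)"$/\1/p' "$cfg")
  printf "SELECT * FROM config_job WHERE \`key\`='sourceFile.blackRegionNo' AND value != %s;\n" "$loc"
}
```

Usage: `build_check_sql /home/admin/conf/config.yaml`, then run the printed query against the drc_cm database.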

Procedure

The following procedure shows how to upgrade OMS from V3.4.0 to V4.3.1.

  1. If HA is enabled, record the current value of the ha.config parameter and disable HA.

  2. Back up databases.

    1. Stop the container of OMS V3.4.0 and record the time as T1.

      sudo docker stop ${CONTAINER_NAME}
      

      Note

      CONTAINER_NAME specifies the name of the container.

    2. Log in to the CM heartbeat database specified in the configuration file and back up data.

      # Log in to the CM heartbeat database specified in the configuration file.
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -Dcm_hb_340
      
      # Create an intermediate table.
      CREATE TABLE IF NOT EXISTS `heatbeat_sequence_bak` (
      `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'PK',
      `gmt_created` datetime NOT NULL,
      `gmt_modified` datetime NOT NULL,
      PRIMARY KEY (`id`)
      ) DEFAULT CHARSET=utf8 COMMENT='Heartbeat sequence table';
      
      # Back up data to the intermediate table.
      INSERT INTO heatbeat_sequence_bak SELECT `id`,`gmt_created`,`gmt_modified` FROM heatbeat_sequence ORDER BY `id` DESC LIMIT 1;
      
      # Rename the heatbeat_sequence table and the intermediate table.
      # The heatbeat_sequence table provides auto-increment IDs and reports heartbeats.
      ALTER TABLE `heatbeat_sequence` RENAME TO `heatbeat_sequence_bak2`;
      ALTER TABLE `heatbeat_sequence_bak` RENAME TO `heatbeat_sequence`;
      
      # Delete the original table.
      DROP TABLE heatbeat_sequence_bak2;
      
    3. Run the following commands to back up the rm, cm, and cm_hb databases as SQL files and make sure that the sizes of the three files are not 0:

      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false rm_340 > /home/admin/rm_340.sql
      
      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_340 > /home/admin/cm_340.sql
      
      mysqldump -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> --triggers=false cm_hb_340 > /home/admin/cm_hb_340.sql
      
    4. Back up the config.yaml configuration file.

  3. Load the downloaded OMS installation package to the local image repository of the Docker container.

    docker load -i <OMS installation package>
    
  4. Confirm the following information:

    • The config.yaml configuration file is suitable for OMS V4.3.1 and the CM database for each region is as expected.

    • The three disk mount paths of OMS are the same as those before the upgrade.

      You can run the sudo docker inspect ${CONTAINER_NAME} | grep -A5 'Binds' command to view the paths of disks mounted to the old OMS container.

    • The image ID is correct.

  5. Start the container of OMS V4.3.1.

    You can access the OMS console by using an HTTP or HTTPS URL. To securely access the OMS console, install an SSL certificate and mount it to the specified directory in the container. The certificate is not required for HTTP access.

    OMS_HOST_IP=xxx
    CONTAINER_NAME=oms_xxx
    IMAGE_TAG=feature_x.x.x
    
    # The two -v lines for oms_server.crt and oms_server.key are required
    # only if you mount an SSL certificate to the OMS container. Remove
    # them for HTTP-only access.
    docker run -dit --net host \
    -v /data/config.yaml:/home/admin/conf/config.yaml \
    -v /data/oms/oms_logs:/home/admin/logs \
    -v /data/oms/oms_store:/home/ds/store \
    -v /data/oms/oms_run:/home/ds/run \
    -v /data/oms/https_crt:/etc/pki/nginx/oms_server.crt \
    -v /data/oms/https_key:/etc/pki/nginx/oms_server.key \
    -e OMS_HOST_IP=${OMS_HOST_IP} \
    -e IS_UPGRADE=true \
    --privileged=true \
    --pids-limit -1 \
    --ulimit nproc=65535:65535 \
    --name ${CONTAINER_NAME} \
    work.oceanbase-dev.com/obartifact-store/oms:${IMAGE_TAG}
    
  6. Go to the new container.

    docker exec -it ${CONTAINER_NAME} bash  
    
  7. Run the upgrade JAR package.

    /opt/alibaba/java/bin/java -jar correction-1.0-SNAPSHOT-jar-with-dependencies.jar -mupgrade -y/home/admin/conf/config.yaml -ltrue
    

    Notice

    Replace the parameter values based on the actual situation.

    Parameter Description
    -m The running mode. The valid value is UPGRADE.
    -y The absolute path of the OMS configuration file.
    -l Specifies whether this upgrade node is the last one. In single-region scenarios, set this parameter to true.
  8. After the upgrade JAR is executed, run the metadata initialization command in the root directory.

    python -m omsflow.scripts.units.oms_cluster_manager add_resource
    
  9. Run the sh /root/docker_upgrade.sh command.

    • In the integrated deployment mode, you can run this command on any OMS node.

    • In the separated deployment mode, you can run this command on any management node.

  10. On the System Parameters page, enable HA and configure the related parameters.

    1. Log in to the OMS console.

    2. In the left-side navigation pane, choose System Management > System Parameters.

    3. On the System Parameters page, find ha.config.

    4. Click the edit icon in the Value column of the parameter.

    5. In the Modify Value dialog box, set enable to true to enable HA, and record the time as T2.

    6. We recommend that you set the perceiveStoreClientCheckpoint parameter to true. After that, you do not need to record T1 and T2.

      If you set the perceiveStoreClientCheckpoint parameter to true, you can use the default value 30min of the refetchStoreIntervalMin parameter. Because HA is enabled, the system starts the Store component at the earliest request time of the downstream components minus the value of the refetchStoreIntervalMin parameter. For example, if the earliest request time of the downstream Connector or JDBC-Connector component is 12:00:00 and the refetchStoreIntervalMin parameter is set to 30 minutes, the system starts the Store component at 11:30:00.

      If you set the perceiveStoreClientCheckpoint parameter to false, you need to modify the value of the refetchStoreIntervalMin parameter as needed. refetchStoreIntervalMin specifies the time interval, in minutes, for pulling data from the Store component. The value must be greater than T2 minus T1.

  11. (Optional) To roll back the OMS upgrade, perform the following steps:

    1. Disable the HA feature based on Step 1.

    2. Stop the new container and record the time as T3.

      sudo docker stop ${CONTAINER_NAME}
      
    3. Connect to the MetaDB and run the following commands:

      drop database rm_340;
      drop database cm_340;
      drop database cm_hb_340;
      
      create database rm_340;
      create database cm_340;
      create database cm_hb_340;
      
    4. Restore the original databases based on the SQL files created in Step 2.

      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/rm_340.sql" -Drm_340
      
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_340.sql" -Dcm_340
      
      mysql -hxxx.xxx.xxx.xxx -P<port> -u<username> -p<password> -e "source /home/admin/cm_hb_340.sql" -Dcm_hb_340
      
    5. Restart the container of OMS V3.4.0.

      sudo docker restart ${CONTAINER_NAME}
      
    6. On the System Parameters page, enable HA.

      Note

      • We recommend that you set the perceiveStoreClientCheckpoint parameter to true.

      • The HA feature automatically starts disaster recovery and the Incr-Sync component. However, you must manually resume the Full-Import component.

  12. After the upgrade is complete, clear the browser cache before you log in to OMS.
