

OceanBase Migration Service

V4.2.4 Enterprise Edition


View details of a data migration task

Last Updated: 2026-04-14 07:36:49

After a data migration task starts, you can view its metrics, such as the basic information, task progress, and task status, on the details page of the task.

Access the details page

  1. Log in to the OceanBase Migration Service (OMS) console.

  2. In the left-side navigation pane, click Data Migration.

  3. On the Data Migration page, click the name of the target task to go to the details page and view its Basic Information and Migration Details.

    On the Data Migration page, you can search for data migration tasks by tag, status, type, or keyword. Some of the states of a data migration task are described as follows:

    • Not Started: The data migration task has not been started. You can click Start in the Actions column to start the task.

    • Running: The data migration task is in progress. You can view the data migration plan and current progress on the right.

    • Modifying: The migration objects in the migration task are being modified.

    • Integrating: The data migration task of the modified migration object is being integrated with the migration object modification task.

    • Paused: The data migration task is manually paused. You can click Resume in the Actions column to resume the task.

    • Failed: The data migration task has failed. You can view where the failure occurred on the right. To view the error message, click the task name to go to the task details page.

    • Completed: The data migration task is completed, and OMS has migrated the specified data to the target database in the configured migration mode.

    • Releasing: The data migration task is being released. You cannot edit a data migration task in this state.

    • Released: The data migration task is released. After the task is released, OMS terminates the current migration and incremental synchronization task.
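
The states above and the console actions they allow can be summarized in a small lookup table. The sketch below is illustrative only: the state names come from this page, but the mapping and helper are not an official OMS API.

```python
# Illustrative only: the task states listed above, mapped to the console
# actions this page says are available in each state. Not an OMS API.
TASK_ACTIONS = {
    "Not Started": ["Start"],   # start the task from the Actions column
    "Running": [],              # view the migration plan and progress
    "Modifying": [],            # migration objects are being modified
    "Integrating": [],
    "Paused": ["Resume"],       # resume from the Actions column
    "Failed": [],               # inspect the error on the details page
    "Completed": [],
    "Releasing": [],            # the task cannot be edited in this state
    "Released": [],
}

def available_actions(state: str) -> list[str]:
    """Return the console actions available for a given task state."""
    try:
        return TASK_ACTIONS[state]
    except KeyError:
        raise ValueError(f"unknown task state: {state}") from None
```

For example, `available_actions("Paused")` returns `["Resume"]`, matching the Resume button described for paused tasks.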

View basic information

The Basic Information section displays the basic information about a data migration task.

  • ID: The unique ID of the data migration task.

  • Migration Type: The migration type selected when the data migration task was created.

  • Alert Level: The alert level of the data migration task. OMS supports the following alert levels: No Protection, High Protection, Medium Protection, and Low Protection. For more information, see Alert settings.

  • Created By: The user who created the data migration task.

  • Created At: The time when the data migration task was created.

  • Concurrency for Full Migration: The value can be Smooth, Normal, or Fast. The amount of resources consumed by the full migration task varies with the specified concurrency.

  • Full Verification Concurrency: The value can be Smooth, Normal, or Fast. The amount of resources consumed in the source and target databases varies with the specified concurrency.

  • Connection Details: Click Connection Details to view information about the connection between the source and target databases of the data migration task.

You can perform the following operations:

  • View migration objects

    Click View Objects in the upper-right corner to view the migration objects of the data migration task. You can also modify the migration objects when the data migration task is running. For more information, see View and modify migration objects.

  • View the component monitoring metrics

    Click View Component Monitoring in the upper-right corner to view information about the Store, Incr-Sync, Full-Import, and Full-Verification components. You can perform the following operations on the components:

    • Start a component: Click Start in the Actions column of the component that you want to start. In the dialog box that appears, click OK.

    • Pause a component: Click Pause in the Actions column of the component that you want to pause. In the dialog box that appears, click OK.

    • Update a component: Click Update in the Actions column of the component that you want to update. On the Update Configuration page, modify the configurations and then click Update.

      Notice

      The system restarts after you update the component. Proceed with caution.

    • View logs: Click View Logs in the Actions column of a component. The View Logs page displays the latest logs of the component. You can search for, download, and copy the logs.

  • View or modify parameter configurations

    You can view the parameter configurations of a data migration task in the Running, Modifying, Integrating, Completed, Stopping, or Stopped state. You can modify the parameter configurations of a data migration task in the Not Started, Paused, or Failed state. For more information, see View and modify the parameter configurations of a data migration task.

    The parameters that can be modified vary with the type and stage of the data migration task.

  • Download object settings

    OMS allows you to download configuration information of data migration tasks and import migration task settings in batches. For more information, see Download and import the settings of migration objects.
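
The view/modify state gates described above can be written down directly. The sets and helpers below are illustrative only, with state names taken from this page; they are not part of OMS.

```python
# Illustrative: the states in which task parameter configurations can be
# viewed vs. modified, per the rules stated above. Not an OMS API.
PARAMS_VIEWABLE = {"Running", "Modifying", "Integrating",
                   "Completed", "Stopping", "Stopped"}
PARAMS_MODIFIABLE = {"Not Started", "Paused", "Failed"}

def can_view_params(state: str) -> bool:
    """Parameters can be viewed in the six states listed above."""
    return state in PARAMS_VIEWABLE

def can_modify_params(state: str) -> bool:
    """Parameters can be modified only in Not Started, Paused, or Failed."""
    return state in PARAMS_MODIFIABLE
```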

View migration details

The Migration Details section displays the status, progress, start time, completion time, and total duration of all subtasks.

Schema migration

The definitions of data objects, such as tables, indexes, constraints, comments, and views, are migrated from the source database to the target database. Temporary tables are automatically filtered out. If the source database is not an OceanBase database, OMS performs format conversion and encapsulation based on the syntax definition and standard of the type of the target tenant of OceanBase Database and then replicates the data to the target database. If a migration object with the same name already exists in the target database, OMS skips the migration object. You must ensure the consistency of table schemas between the source and target databases.

When you advance to the forward switchover step in a data migration task, OMS automatically drops the hidden columns and unique indexes based on the type of the data migration task. For more information, see Hidden column mechanisms.
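
The convert-then-skip-if-exists behavior described above can be sketched as a small loop. Every name below (`target_has`, `convert_ddl`, `execute_ddl`) is a hypothetical stand-in that only illustrates the flow, not the actual OMS implementation.

```python
# Hypothetical sketch of the schema-migration flow described above.
# target_has / convert_ddl / execute_ddl are stand-ins, not OMS APIs.
def migrate_schema(objects, target_has, convert_ddl, execute_ddl):
    """objects: iterable of (name, source_ddl) pairs; returns a status map."""
    results = {}
    for name, source_ddl in objects:
        if target_has(name):
            # An object with the same name already exists in the target:
            # OMS skips it, so source/target schema consistency is on you.
            results[name] = "skipped"
            continue
        try:
            # Convert to the target tenant's syntax, then execute there.
            execute_ddl(convert_ddl(source_ddl))
            results[name] = "migrated"
        except Exception:
            # Failed objects can later be retried, skipped, or removed
            # from the Schema Migration page.
            results[name] = "failed"
    return results
```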

You can view the overall status, start time, completion time, total time consumed, and table and view migration progress for a schema migration task on the Schema Migration page. You can also perform the following operations on an object:

  • View creation syntax: On the Database or Table tab, click View next to the target object to view the creation syntax of a database, table, or index.

    If the object creation syntax is fully compatible, the DDL syntax executed on the OBServer node is displayed. Incompatible syntax is converted before it is displayed.

  • Modify creation syntax and try again: View the error information, check and modify the definition of the conversion result of a failed DDL statement, and then migrate the data to the target database again.

  • Retry one or all failed objects: You can retry failed schema migration tasks one by one or retry all failed tasks at a time.

  • Skip one or multiple tasks: You can skip failed schema migration tasks one by one or skip multiple failed tasks at a time. To skip multiple objects at a time, select the objects, and click Batch Skip in the upper-right corner. If you skip an object, its index is also skipped.

  • Remove one or multiple tasks: You can remove failed schema migration tasks one by one or remove multiple failed tasks at a time. To remove multiple failed objects at a time, select the objects, and click Batch Remove in the upper-right corner. If you remove an object, its index is also removed.

  • View details: The DDL statements executed on the OBServer node and the execution error information of a failed schema migration task are displayed.

Full migration

Full migration migrates existing data from tables in the source database to the corresponding tables in the target database. On the Full Migration page, you can filter objects by source and target databases, or select View Objects with Errors to display only the objects that hinder the overall migration progress. You can also view related information on the Table Objects, Table Indexes, and Full Migration Performance tabs. The status of a full migration task changes to Completed only after both the table objects and table indexes are migrated.

  • On the Table Objects tab, you can view the names, source and target databases, estimated data volume, migrated data volume, and status of tables.

  • On the Table Indexes tab, you can view the table objects, source and target databases, creation time, end time, time consumed, and status. You can also view the index creation syntax and remove unwanted indexes.

  • On the Full Migration Performance tab, you can view diagrams on the performance of the current migration task, including the RPS, migration traffic, and read/write time of the source or target database, to help you identify performance-related issues, if any.

You can combine full migration with incremental synchronization to ensure data consistency between the source and target databases. If any objects fail to be migrated during full migration, the causes of the failure are displayed.

Notice

  • If you do not select Schema Migration for Migration Type, OMS migrates only the fields in the source database that match those in the target database during full migration, without checking whether the schemas are consistent.

  • After full migration is completed and the subsequent step is started, you cannot click Rerun in the Actions column of the target Full-Verification component on the page displayed after you choose OPS & Monitoring > Component > Full-Verification.

Incremental synchronization

After incremental synchronization starts, data changes (inserts, updates, and deletes) in the source database are synchronized to the corresponding tables in the target database. While services continue writing data to the source database, OMS starts the incremental data pull module to pull incremental data from the source instance, parses and encapsulates the data, and stores it in OMS. OMS then starts full migration.

After the full migration task is completed, OMS starts the incremental data replay module to pull incremental data from the incremental data pull module. The incremental data is synchronized to the target instance after being filtered, mapped, and converted. If an Incr-Sync exception occurs after you execute a DDL statement in the source database and the data migration task fails, a page appears, displaying the DDL statement that causes the task failure and a Skip button. You can click Skip and confirm your operation.

Notice

This operation may lead to data structure inconsistency between the source and target databases. Proceed with caution.
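The pull-then-replay flow described above can be pictured as replaying an ordered change log against the target after the full snapshot is loaded. Here is a toy model of the replay step, using an in-memory table keyed by primary key; the event format is illustrative and not the actual OMS wire format.

```python
def replay(target, events):
    """Apply ordered DML events to a {pk: row} mapping, mimicking how
    replayed incremental changes converge the target on the source."""
    for op, pk, row in events:
        if op == "insert":
            target[pk] = row
        elif op == "update":
            # Merge changed columns into the existing row.
            target[pk] = {**target.get(pk, {}), **row}
        elif op == "delete":
            target.pop(pk, None)
    return target

# Snapshot loaded by full migration, then changes captured afterwards.
table = {1: {"name": "alice"}}
events = [
    ("insert", 2, {"name": "bob"}),
    ("update", 1, {"name": "alice2"}),
    ("delete", 2, None),
]
print(replay(table, events))   # → {1: {'name': 'alice2'}}
```

Because events are applied strictly in log order, the target ends in the same state as the source regardless of how many changes accumulated while full migration was running.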

For a data migration task in the Running state, you can view its latency, current timestamp, and incremental synchronization performance in the incremental synchronization section. The latency is displayed in the following format: X seconds (updated Y seconds ago). Normally, Y is less than 20.
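The second number in that display is the staleness of the latency metric itself. A tiny helper that formats the display and flags a stale reading might look like this; the 20-second threshold comes from the text above, and the function is illustrative rather than part of OMS.

```python
import time

def latency_display(latency_s, measured_at, now=None, stale_after=20):
    """Render 'X seconds (updated Y seconds ago)' and flag the reading
    as stale when the metric itself has not refreshed recently."""
    now = time.time() if now is None else now
    age = int(now - measured_at)
    text = f"{latency_s} seconds (updated {age} seconds ago)"
    return text, age >= stale_after

text, stale = latency_display(4, measured_at=100, now=105)
print(text, stale)   # → 4 seconds (updated 5 seconds ago) False
```

A large Y with a small X usually means the monitoring pipeline, not replication, is lagging, so the two numbers are worth reading separately.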

For a data migration task in the Stopped or Failed state, you can enable the DDL/DML statistics collection feature to collect statistics on database operations performed after this feature is enabled. You can also view the specific information about incremental synchronization objects and the incremental synchronization performance.

  • The Synchronization Object Statistics tab displays the statistics about table-level DML statements executed for each incremental synchronization object in the current task. The numbers displayed in the Change Sum, Delete, Insert, and Update fields in the section above the Synchronization Object Statistics tab are the sums of the corresponding columns on this tab.


  • The Incremental Synchronization Performance tab displays the following content:

    • Latency: the latency in synchronizing incremental data from the source database to the target database, in seconds.

    • Migration traffic: the traffic throughput of incremental data synchronization from the source database to the target database, in KB/s.

    • Average execution time: the average execution time of an SQL statement, in ms.

    • Average commit time: the average commit time of a transaction, in ms.

    • RPS: the number of rows written to the target database per second.

    When you create a data migration task, we recommend that you specify related information such as the alert level and alert frequency, to help you understand the task status. OMS provides low-level protection by default. You can modify the alert level based on your business needs. For more information, see Manage alert settings.

    • When the incremental synchronization latency exceeds the specified alert threshold, the incremental synchronization status stays at Running and the system triggers alerts based on your alert settings.

    • When the incremental synchronization latency is less than or equal to the specified alert threshold, the incremental synchronization status changes from Running to Monitoring. If the latency exceeds the specified alert threshold again, the status changes back to Running.

Full verification

After full migration is completed and incremental synchronization has caught up to the current point in time, OMS automatically initiates a full verification task to verify the data tables in the source and target databases.

Notice

  • If you do not select Schema Migration for Migration Type, OMS verifies only the fields in the source database that match those in the target database during full verification, without checking whether the schemas are consistent.

  • During a full verification task, if you perform a CREATE, DROP, ALTER, or RENAME operation on the source table, the task may exit.

You can also initiate custom data verification tasks during incremental synchronization. On the Full Verification page, you can view the overall status, start time, end time, total consumed time, estimated total number of rows, number of migrated rows, real-time traffic, and RPS of the full verification task.

The Full Verification page contains the Verified Objects and Full Verification Performance tabs.

  • On the Verified Objects tab, you can view the verification progress and verification object list.

    • You can view the names, source and target databases, full verification progress and results, and result summary of all migration objects.

    • You can filter migration objects by source or target database.

    • You can select View Completed Objects Only to view the basic information, such as object names, of objects for which full verification is completed.

    • You can choose Reverify > Restart Full Verification to run full verification again for all migration objects.

    • Take note of the following items for tables with inconsistent verification results:

      If you need to reverify all data in the tables, choose Reverify > Reverify Abnormal Table.

      If you need to reverify only inconsistent data, choose Reverify > Verify Only Inconsistent Records.

      Notice

      Correction operations are not supported if the source database has no corresponding data.

  • On the Full Verification Performance tab, you can view the graphs of performance data such as the RPS and verification traffic of the source and target databases and performance benchmarks. Such information can help you identify performance issues in a timely manner.

    OMS allows you to skip full verification for a task that is being verified or has failed verification. On the Full Verification page, click Skip Full Verification in the upper-right corner. In the dialog box that appears, click OK.

    Notice

    If you skip full verification, you cannot resume the verification task for data comparison and correction. You can only clone the current task to initiate full verification again. Therefore, proceed with caution.

    After the full verification is completed, you can click Go To Next Stage to start a forward switchover. After you enter the switchover process, you cannot recheck the current verification task to compare or correct data.
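Conceptually, full verification compares rows from both sides by primary key and records the inconsistent keys for later reverification. The sketch below compares values directly for clarity; a real verifier would chunk tables and compare row hashes instead of raw values, and the data shapes here are illustrative.

```python
def verify(source_rows, target_rows):
    """Compare two {pk: row} mappings and return the primary keys that
    are missing or different on the target side."""
    inconsistent = []
    for pk, row in source_rows.items():
        if target_rows.get(pk) != row:
            inconsistent.append(pk)
    # Rows present only on the target also count as inconsistent.
    inconsistent += [pk for pk in target_rows if pk not in source_rows]
    return inconsistent

src = {1: ("alice",), 2: ("bob",)}
dst = {1: ("alice",), 2: ("bobby",), 3: ("carol",)}
print(verify(src, dst))   # → [2, 3]
```

The returned keys correspond to the tables with inconsistent verification results mentioned above, for which you would choose Reverify > Verify Only Inconsistent Records.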

Forward switchover

Forward switchover is a standardized abstraction of the traditional system cutover process; it does not switch application connections itself. The process consists of a series of tasks that OMS performs to prepare a data migration task for application switchover. Make sure that the entire forward switchover process is completed before you switch application connections over to the target database.

Forward switchover is required for data migration. By performing forward switchover, OMS ensures the completion of forward data migration. You can start the Incr-Sync component for reverse incremental synchronization based on your business needs. The forward switchover process involves the following operations:

  1. You must make sure that data migration is completed and wait until forward synchronization is completed.

  2. OMS automatically supplements CHECK constraints, FOREIGN KEY constraints, and other objects that are ignored in the schema migration phase when the target is an Oracle database, an Oracle-compatible tenant of OceanBase Database, or a DB2 LUW database.

  3. OMS automatically drops the additional hidden columns and unique indexes that the migration depends on.

    This operation is performed only for data migration between an Oracle database and an OceanBase database or between OceanBase databases. For more information, see Hidden column mechanisms.

  4. You need to supplement triggers, functions, stored procedures, and other database objects in the source database that are not supported by OMS to the target database.

  5. You need to disable triggers and FOREIGN KEY constraints in the source database when the data migration task involves reverse incremental migration.
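For step 5, the exact statements depend on the source database. The following sketch generates illustrative SQL, not OMS-generated statements; the trigger and constraint names are placeholders. Note that MySQL has no trigger-disable command, so dropping the trigger is the usual workaround there.

```python
def disable_statements(dialect, triggers):
    """Generate illustrative SQL for pausing triggers and foreign-key
    checks before reverse incremental migration (names are placeholders)."""
    if dialect == "oracle":
        stmts = [f"ALTER TRIGGER {t} DISABLE" for t in triggers]
        # Oracle disables foreign keys per constraint, e.g.:
        stmts.append("ALTER TABLE orders DISABLE CONSTRAINT fk_orders_user")
    elif dialect == "mysql":
        # MySQL cannot disable a trigger, so drop it (and re-create later).
        stmts = [f"DROP TRIGGER IF EXISTS {t}" for t in triggers]
        # MySQL toggles foreign-key enforcement via a system variable:
        stmts.append("SET GLOBAL foreign_key_checks = 0")
    else:
        raise ValueError(f"unsupported dialect: {dialect}")
    return stmts

for stmt in disable_statements("mysql", ["trg_audit"]):
    print(stmt)
```

Keep a record of everything you disable or drop, because these objects must be restored on the source database if you later switch back.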

The forward switchover process contains the following steps:


  1. Start forward switchover.

    In this step, you can start forward switchover, but no operation is performed in the background. After you confirm that data migration is completed, you can click Start Forward Switchover to start the forward switchover process for business cutover.

    Notice

    Before you start forward switchover, make sure that data writing has stopped in the source database.

  2. Perform a switchover precheck.

    In this step, OMS checks the following items:

    • Synchronization latency between the source and target databases. If the synchronization latency is within 15 seconds, this check item is passed.

    • Write privilege of the account in the source database. If the data migration task involves reverse incremental migration, OMS checks whether the account configured in the source database has the privilege to write data, to ensure that data can be properly written during reverse incremental migration.

    • Read privilege of the account on incremental data in the target database. If the data migration task involves reverse incremental migration, OMS checks whether the account configured in the target database has the privilege to read data. This ensures that data can be properly written to the target database during reverse incremental migration.

    • Incremental logs in the target database. If the data migration task involves reverse incremental migration, OMS checks whether the incremental logging configuration in the target database meets the log extraction requirements of reverse incremental migration.

    If the switchover precheck is passed, OMS automatically proceeds to the next step. If the switchover precheck fails, OMS provides two options: Retry and Skip.

    Notice

    If you click Skip, data loss may occur in the target database, or the reverse incremental migration process may fail. Proceed with caution.

  3. Start the Store component in the target database.

    Note

    This step is available only when the data migration task involves reverse incremental migration.

    If the forward switchover precheck is passed, OMS automatically starts the incremental log pull service for the target database to obtain the DML and DDL operations performed in the target database and parse and save related log data, to prepare for reverse incremental migration. This step takes 3 to 5 minutes.

  4. Confirm that data writing has stopped in the source database.

    In this step, OMS checks whether business data is still being written to the source database. If you make sure that no new data is written to the source database, click OK to go to the next step.

  5. Confirm the data writing stop timestamp upon synchronization completion.

    In this step, OMS checks whether the target database is synchronized to the data writing stop timestamp in the source database. If this step is in progress or fails and the synchronization is not completed after a long period, you can click Skip.

    Notice

    If you choose to skip this step, data inconsistency may occur between the source and target databases. Proceed with caution.

  6. Stop forward synchronization.

    In this step, you can stop forward synchronization. After forward synchronization is stopped, any database changes in the source database are no longer synchronized to the target database. If the service fails to be stopped, OMS provides two options: Retry and Skip.

    Notice

    You can skip this step only after you confirm that forward synchronization is completed in the background. Otherwise, data in the source database may be unexpectedly written to the target database. Proceed with caution.

  7. Process database objects.

    In this step, you can process objects that are ignored during data migration or not supported by OMS. This ensures normal running of business after its cutover to the target database.

    • Migrate database objects to the target database: You need to migrate triggers, functions, stored procedures, and other database objects in the source database that are not supported by OMS to the target database. After you complete the migration, click Mark as Complete.

    • Disable triggers and FOREIGN KEY constraints in the source database: This operation is required only when the data migration task involves reverse incremental synchronization. It prevents data from being affected by triggers or FOREIGN KEY constraints, to avoid failures of reverse incremental synchronization. After you complete this operation, click Mark as Complete.

    • Supplement objects ignored during schema migration to the target: This operation is automatically performed only when the target is an Oracle database, an Oracle-compatible tenant of OceanBase Database, or a DB2 LUW database. This operation aims to supplement CHECK constraints, FOREIGN KEY constraints, and other objects ignored during schema migration to the target. When the type of the target is not any of the aforementioned ones, the preceding objects are migrated during schema migration by default and no extra operation is required.

    • Drop additional hidden columns and unique indexes added by OMS: This operation is automatically performed only for data migration between an Oracle database and an OceanBase database or between OceanBase databases. This operation aims to drop the additional hidden columns and unique indexes added to the target database by OMS to ensure data consistency during data migration. This operation runs automatically, and the amount of time required depends on the amount of data in the target database. OMS provides the Skip option for this operation. If you choose to skip this operation, you need to manually perform the drop operation. Proceed with caution. For more information, see Hidden column mechanisms.

  8. Start reverse incremental migration.

    Note

    This step is available only when the data migration task involves reverse incremental migration.

    In this step, you can start incremental synchronization for the target database to synchronize incremental DML or DDL operations from the target database to the source database in real time. The configuration of incremental synchronization is the same as that specified during task creation. For more information, see topics for specific databases in the Supported DDL operations and limits for synchronization chapter.

Reverse incremental migration

For a data migration task in the Running state, you can view its latency, current timestamp, and performance of reverse incremental migration in the Reverse Incremental Migration section. The latency is displayed in the following format: X seconds (updated Y seconds ago). Normally, Y is less than 20.

For a data migration task in the Stopped or Failed state, you can enable the DDL/DML statistics collection feature to collect statistics on database operations performed after this feature is enabled. You can also view the specific information about the objects and performance of reverse incremental migration.

  • The Synchronization Object Statistics tab displays the statistics about table-level DML statements executed for each incremental synchronization object in the current task. The numbers displayed in the Change Sum, Delete, Insert, and Update fields in the section above the Synchronization Object Statistics tab are the sums of the corresponding columns on this tab.

  • The Reverse Incremental Migration Performance tab displays the following content:

    • Latency: the latency in synchronizing incremental data from the target database to the source database, in seconds.

    • Migration traffic: the traffic throughput of incremental data synchronization from the target database to the source database, in KB/s.

    • Average execution time: the average execution time of an SQL statement, in ms.

    • Average commit time: the average commit time of a transaction, in ms.

    • RPS: the number of rows written to the target database per second.
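The metrics above are simple ratios over a reporting window. As a sketch (OMS computes these internally; the function and units are for illustration), one window of measurements could be aggregated like this:

```python
def window_metrics(rows_written, bytes_sent, exec_times_ms, window_s):
    """Aggregate one reporting window into the displayed metrics."""
    return {
        "rps": rows_written / window_s,                # rows per second
        "traffic_kb_s": bytes_sent / 1024 / window_s,  # KB per second
        "avg_exec_ms": sum(exec_times_ms) / len(exec_times_ms),
    }

m = window_metrics(rows_written=5000, bytes_sent=1024 * 2048,
                   exec_times_ms=[2.0, 4.0, 3.0], window_s=10)
print(m)   # → {'rps': 500.0, 'traffic_kb_s': 204.8, 'avg_exec_ms': 3.0}
```

Reading RPS together with average execution and commit times helps distinguish a slow target database (high execution time) from a thin replication pipeline (low traffic at low latency).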
