OceanBase Migration Service

V4.2.3 Enterprise Edition


View details of a data migration project

Last Updated: 2025-05-28 09:46:28
What is on this page
Access the details page
View basic information
View migration details
Schema migration
Full migration
Incremental synchronization
Full verification
Forward switchover
Reverse incremental migration


After a data migration project starts, you can view its metrics, such as basic information, progress, and status, on the project details page.

Access the details page

  1. Log on to the OMS console.

  2. In the left-side navigation pane, click Data Migration.

  3. On the Data Migration page, click the name of the target project. On the details page that appears, view the basic information and migration details of the project.

    On the Data Migration page, you can search for data migration projects by tag, status, type, or keywords. A data migration project may be in any of the following states:

    • Not Started: The data migration project has not been started. You can click Start in the Actions column to start the project.

    • Running: The data migration project is in progress. You can view the data migration plan and current progress on the right.

    • Modifying: The migration objects in the migration project are being modified.

    • Integrating: The modifications to migration objects are being integrated into the data migration project.

    • Paused: The data migration project is manually paused. You can click Resume in the Actions column to resume the project.

    • Failed: The data migration project has failed. You can view where the failure occurred on the right. To view the error messages, click the project name to go to the project details page.

    • Completed: The data migration project is completed and OMS has migrated the specified data to the destination database in the configured migration mode.

    • Releasing: The data migration project is being released. You cannot edit a data migration project in this status.

    • Released: The data migration project is released. When a project is released, OMS terminates its migration and incremental synchronization tasks.
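The states above, together with the console actions this page names for them, can be summarized in a small lookup. This is an illustrative sketch only: the state names come from this page, and states for which no action is mentioned are mapped to None; it is not an OMS API.

```python
# Project states described above, mapped to the console action this page
# names for each (None where no action is mentioned). Illustrative only;
# not an OMS API.
PROJECT_STATE_ACTIONS = {
    "Not Started": "Start",    # click Start in the Actions column
    "Running": None,
    "Modifying": None,
    "Integrating": None,
    "Paused": "Resume",        # click Resume in the Actions column
    "Failed": None,
    "Completed": None,
    "Releasing": None,         # project cannot be edited in this state
    "Released": None,
}

def action_for(state: str):
    """Return the console action available for a state, or None."""
    return PROJECT_STATE_ACTIONS.get(state)
```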

View basic information

The Basic Information section shows you the basic information related to the current migration project.

Parameter descriptions:

  • ID: The unique identifier of the migration project.

  • Migration Type: The migration type chosen when the migration project was created.

  • Alert Level: The alert level of the data migration project. OMS supports the following alert levels: No Protection, High Protection, Medium Protection, and Low Protection. For more information, see Alert settings.

  • Created By: The user who created the current data migration project.

  • Created At: The creation time of the current migration project.

  • Concurrency for Full Migration: The value can be Smooth, Normal, or Fast. The amount of resources consumed by a full data migration task varies with the specified concurrency.

  • Full Verification Concurrency: The value can be Smooth, Normal, or Fast. The resources consumed at the source and destination databases vary with the specified concurrency.

  • Connection Details: Click Connection Details to view information about the connection between the source and destination databases of the data migration project.

You can perform the following operations:

  • View migration objects

    Click View Objects in the upper-right corner to view the list of migration objects for the current project. You can also modify the migration objects of an ongoing data migration project. For more information, see View and modify migration objects.

  • View the component monitoring metrics

    Click View Component Monitoring in the upper-right corner to view the information about the Store, Incr-Sync, Full-Import, and Full-Verification components. You can perform the following operations on the components:

    • Start a component: Click Start in the Actions column of the target component. In the dialog box that appears, click OK.

    • Pause a component: Click Pause in the Actions column of the target component. In the dialog box that appears, click OK.

    • Update a component: Click Update in the Actions column of the target component. On the Update Configuration page, modify the configurations and then click Update.

      Notice

      The component restarts after you update its configuration. Proceed with caution.

    • View logs: Click View Logs in the Actions column of the target component. The View Logs page displays the latest logs. You can search for, download, and copy the logs.

  • View or modify parameter configurations

    • For a data migration project in the Running state, click the More icon in the upper-right corner and then select Settings from the drop-down list to view the parameters of the data migration project when it was created.

    • For a data migration project in the Not Started, Paused, or Failed state, click the More icon in the upper-right corner and then select Modify Parameter Configurations from the drop-down list. In the Modify Parameter Configurations dialog box, modify the parameters, and click OK.

      The parameters that can be modified vary with the type of the data migration project and the stage of the task.

  • Download object settings

    OMS allows you to download configuration information of data migration projects and import migration project settings in batches. For more information, see Download and import the settings of migration objects.

View migration details

The Migration Details section shows the details of all sub-projects in your migration project, including the current state and progress, the start and end time, and the total elapsed time.

Schema migration

The definitions of data objects, such as tables, indexes, constraints, comments, and views, are migrated from the source database to the destination database. Temporary tables are automatically filtered out. When the source database is not OceanBase Database, the data types and SQL syntax are automatically converted and assembled according to the syntax definition standards of the destination tenant of OceanBase Database, and then replicated to the destination database.
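To illustrate the kind of automatic type conversion described above, here is a deliberately simplified sketch for a MySQL source and an Oracle tenant destination. The mapping entries are hypothetical placeholders, not the actual OMS conversion rules; for those, see the OMS documentation's "Data type conversion" topic.

```python
# Hypothetical, greatly simplified sketch of the data type conversion
# performed during schema migration when the destination is an Oracle
# tenant of OceanBase Database. Real OMS rules differ.
TYPE_MAP = {
    "INT": "NUMBER(10)",
    "BIGINT": "NUMBER(19)",
    "VARCHAR": "VARCHAR2",
    "DATETIME": "TIMESTAMP",
    "TEXT": "CLOB",
}

def convert_column_type(mysql_type: str) -> str:
    """Map a MySQL column type to a sketchy Oracle-tenant equivalent."""
    base, _, suffix = mysql_type.partition("(")
    target = TYPE_MAP.get(base.upper(), mysql_type)
    # Preserve a length suffix for parameterized types such as VARCHAR(64),
    # unless the mapped type already fixes its own precision.
    if suffix and "(" not in target:
        target = f"{target}({suffix}"  # suffix still carries the ')'
    return target
```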

When you advance to the forward switchover step in a data migration project, OMS will automatically drop the hidden columns and unique indexes based on the type of the data migration project. For more information, see Hidden column mechanisms.

You can view the overall status, start time, completion time, total time consumed, and table and view migration progress for a schema migration project on the Schema Migration page. You can also perform the following operations on an object:

  • View Creation Syntax: On the Database or Table tab, click View next to the target object to view the creation syntax of a database, table, or index.

    If the object creation syntax is fully compatible, the DDL syntax executed on the OBServer node is displayed. Incompatible syntax is converted before it is displayed.

  • Modify Creation Syntax and Try Again: View the error information, check and modify the definition of the conversion result of a failed DDL statement, and then migrate the data to the destination again.

  • Retry/Retry All Failed Objects: You can retry failed schema migration tasks one by one or retry all failed tasks at a time.

  • Skip/Batch Skip: You can skip failed schema migration tasks one by one or skip multiple failed tasks at a time. To skip multiple objects at a time, click Batch Skip in the upper-right corner. If you skip an object, its index is also skipped.

  • Remove/Batch Remove: You can remove failed schema migration tasks one by one or remove multiple failed tasks at a time. To remove multiple failed tasks at a time, click Batch Remove in the upper-right corner. If you remove an object, its index is also removed.

  • View Details: The DDL statements executed on the OBServer node and the execution error information of a failed schema migration task are displayed.

Full migration

Full data migration migrates the existing data of the source database to the corresponding tables in the destination database. You can filter the information by source or destination database, or select View Objects with Errors to show only the objects that hinder the overall migration progress. You can also view related information on the Table Objects, Table Indexes, and Full Migration Performance tabs. The status of a full migration task changes to Completed only after both the table objects and the table indexes are migrated.

  • On the Table Objects tab, you can view the names, source and destination databases, estimated data volume, migrated data volume, and status of tables.

  • On the Table Indexes tab, you can view the table objects, source and destination databases, creation time, end time, time consumed, and status. You can also view the index creation syntax and remove unwanted indexes.

  • On the Full Migration Performance tab, you can view diagrams on the performance of the current migration project, including the RPS, migration traffic, and read/write time of the source or destination database, to help you identify any performance-related issues.

You can combine full migration with incremental synchronization to ensure data consistency between the source and destination databases. If any objects fail to be migrated during full migration, the causes of the failure are displayed.
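The figures on the Table Objects tab reduce to two simple calculations: per-table progress from the estimated and migrated data volumes, and overall completion gated on both tables and indexes. A minimal sketch, with assumed names that are not OMS APIs:

```python
# Illustrative calculations behind the Full Migration status: per-table
# progress from estimated vs. migrated rows, and overall completion that
# requires both table objects and table indexes to finish.
def table_progress(migrated_rows: int, estimated_rows: int) -> float:
    """Percent of the estimated data volume migrated for one table."""
    if estimated_rows <= 0:
        return 100.0 if migrated_rows else 0.0
    return min(100.0, migrated_rows / estimated_rows * 100)

def full_migration_done(tables_done: bool, indexes_done: bool) -> bool:
    # Status changes to Completed only after both tables and indexes migrate.
    return tables_done and indexes_done

print(table_progress(5_000, 20_000))  # 25.0
```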

Notice

  • If you do not select Schema Migration for Migration Type, OMS migrates the fields in the source database that match those in the destination database during full migration, without checking whether the schemas are consistent.

  • After full migration is completed and the subsequent step has started, you cannot click Rerun next to the target Full-Verification component on the page displayed after you choose O&M and Monitoring > Component > Full-Verification.

Incremental synchronization

After incremental synchronization starts, the data migration service synchronizes the data that has been changed (added, modified, or deleted) in the source database to the corresponding tables in the destination database. While services continuously write data to the source database, OMS starts the incremental data pull module to pull incremental data from the source instance, parses and encapsulates the data, and stores it in OMS. OMS then starts the full data migration.

After the full data migration task is completed, OMS starts the incremental data replay module, which pulls incremental data from the incremental data pull module. The incremental data is filtered, mapped, and converted, and then synchronized to the destination instance. If an Incr-Sync exception occurs after a DDL statement is executed on the source database and the data migration project fails, a page appears that displays the offending DDL statement and a Skip button. You can click Skip to skip the operation.

Notice

This operation may lead to data schema inconsistency between the source and destination databases. Proceed with caution.
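The pull-then-replay pipeline described above can be sketched as a staging queue feeding a filter/map/apply loop. All names here are illustrative, not OMS APIs; real OMS components parse binary logs and handle ordering, transactions, and conversion.

```python
# Minimal sketch of the incremental pipeline: a pull module stages change
# records, and a replay module filters, maps, and applies them after full
# migration completes. All names are illustrative, not OMS APIs.
from collections import deque

store = deque()                      # staged incremental records

def pull(record: dict) -> None:      # incremental data pull module
    store.append(record)

def replay(apply, table_map: dict) -> int:   # incremental data replay module
    applied = 0
    while store:
        rec = store.popleft()
        if rec["table"] not in table_map:    # filter out unselected objects
            continue
        rec["table"] = table_map[rec["table"]]  # map to destination table
        apply(rec)
        applied += 1
    return applied

sink = []
pull({"table": "t1", "op": "INSERT"})
pull({"table": "tmp", "op": "DELETE"})
replay(sink.append, {"t1": "t1_dest"})
print(sink)  # [{'table': 't1_dest', 'op': 'INSERT'}]
```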

For a Running data migration project, you can view its latency, current timestamp, and incremental synchronization performance in the incremental synchronization section. The latency is displayed in the following format: X seconds (updated Y seconds ago). Normally, Y is less than 20.
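The latency display format can be reproduced as follows; `format_latency` is an assumed helper name, not an OMS API. When Y (the age of the reading) reaches 20 or more, the metric itself has likely stopped updating.

```python
# Sketch of the latency display "X seconds (updated Y seconds ago)".
def format_latency(latency_s: int, reported_at: float, now: float) -> str:
    """Render a latency reading and how stale that reading is."""
    age = int(now - reported_at)   # Y: seconds since the reading was taken
    return f"{latency_s} seconds (updated {age} seconds ago)"

print(format_latency(3, reported_at=100.0, now=105.0))
# 3 seconds (updated 5 seconds ago)
```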

For a Paused or Failed data migration project, you can enable the DDL/DML statistics feature to collect statistics on the database operations performed after this feature is enabled. You can also view the specific information about incremental synchronization objects and the incremental synchronization performance.

  • The Synchronization Object Statistics tab displays the statistics about table-level DML statements executed for each incremental synchronization object in the current project. The numbers displayed in the Change Sum, Delete, Insert, and Update fields in the section above the Synchronization Object Statistics tab are the sums of the corresponding columns on this tab.

  • The Incremental Synchronization Performance tab displays the following content:

    • Latency: the delay in synchronizing incremental changes from the source to the destination, in seconds.

    • Migration traffic: the traffic throughput of incremental data synchronization from the source to the destination, in KB/s.

    • Average execution time: the average execution time per SQL statement, measured in milliseconds.

    • Average commit time: the average commit time per transaction, measured in milliseconds.

    • RPS: the number of records processed per second.

    When you create a data migration project, we recommend that you specify related information such as the alert level and alert frequency, to help you understand the project status. OMS provides low-level protection by default. You can modify the alert level based on your business requirements. For more information, see Alert settings.

    • When the incremental synchronization latency exceeds the specified alert threshold while the status is still Running, the status stays at Running and the system does not trigger any alerts.

    • When the incremental synchronization latency drops to less than or equal to the specified alert threshold, the status changes from Running to Monitoring. After the status changes to Monitoring, it does not change back to Running even if the latency exceeds the alert threshold again.
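As noted above, the header fields are column sums over the Synchronization Object Statistics tab, with Change Sum equal to the inserts, updates, and deletes combined. A small worked example (field names assumed):

```python
# The header fields are the column sums of the Synchronization Object
# Statistics tab: Change Sum = Insert + Update + Delete, summed over objects.
rows = [
    {"object": "t1", "insert": 10, "update": 4, "delete": 1},
    {"object": "t2", "insert": 5,  "update": 0, "delete": 2},
]
totals = {k: sum(r[k] for r in rows) for k in ("insert", "update", "delete")}
totals["change_sum"] = sum(totals.values())
print(totals)  # {'insert': 15, 'update': 4, 'delete': 3, 'change_sum': 22}
```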

Full verification

After the full data migration and incremental data migration are completed, OMS automatically initiates a full data verification task to verify the data tables in the source and destination databases.

Notice

  • If you do not select Schema Migration for Migration Type, OMS verifies the fields in the source database that match those in the destination database during full verification, without checking whether the schemas are consistent.

  • During full data verification, if you perform a CREATE, DROP, ALTER, or RENAME operation on the source tables, the verification task may exit.

You can also initiate custom data verification tasks in the incremental data synchronization process. On the Full Verification page, you can view the overall status, start time, end time, total consumed time, estimated total number of rows, number of migrated rows, real-time traffic, and RPS of the full verification task.
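Conceptually, verification compares the rows of each table pair and classifies the differences. The toy sketch below keys rows by primary key; real OMS verification works in chunks with checksums, and as the notice later in this section states, rows that exist only at the destination cannot be corrected from the source.

```python
# Toy per-table verification: compare source and destination rows keyed by
# primary key. Illustrative only; real OMS verification chunks and checksums.
def verify_table(src: dict, dst: dict):
    missing = [k for k in src if k not in dst]          # absent at destination
    inconsistent = [k for k in src if k in dst and src[k] != dst[k]]
    extra = [k for k in dst if k not in src]            # absent at source
    return missing, inconsistent, extra

src = {1: "a", 2: "b", 3: "c"}
dst = {1: "a", 2: "x", 4: "d"}
print(verify_table(src, dst))  # ([3], [2], [4])
```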

The Full Verification page contains the Verified Objects and Full Verification Performance tabs.

  • On the Verified Objects tab, you can view the verification progress and verification object list.

    • You can view the names, source and destination databases, full data verification progress and results, and result summary of all migration objects.

    • You can filter migration objects by source or destination database.

    • You can select View Completed Objects Only to view the basic information of objects that have completed schema migration, such as the object names.

    • You can choose Reverify > Restart Full Verification to run a full verification again for all migration objects.

    • Take note of the following items for tables with inconsistent verification results:

      If you need to reverify all data in the tables, choose Reverify > Reverify Abnormal Table.

      If you need to reverify only inconsistent data, choose Reverify > Verify Only Inconsistent Records.

      Notice

      Correction operations are not supported if the source database has no corresponding data.

  • On the Full Verification Performance tab, you can view the graphs of performance data such as the RPS and verification traffic of the source and destination databases and performance benchmarks. Such information can help you identify performance issues in a timely manner.

    OMS allows you to skip full verification for a project that is being verified or has failed verification. On the Full Verification page, click Skip Full Verification in the upper-right corner. In the dialog box that appears, click OK.

    Notice

    If you skip full data verification, you cannot resume the verification task for data comparison and correction. You can only clone the current project to initiate full data verification again. Therefore, proceed with caution.

    After the full verification is completed, you can click Go To Next Stage to start a forward switchover. After you enter the switchover process, you cannot recheck the current verification task to compare or correct data.

Forward switchover

Forward switchover is an abstraction of the standard cutover process in a traditional system migration; it does not itself switch application connections. This process consists of a series of tasks that OMS performs for application switchover in a data migration project. Make sure that the entire forward switchover process is completed before you switch application connections to the destination database.

Forward switchover is indispensable in data migration. Through forward switchover, OMS can verify that forward data migration is completed. You can then start the reverse incremental migration component. Forward switchover involves the following jobs:

  1. You need to confirm that data migration is completed and wait until there is no latency in forward synchronization.

  2. OMS will automatically supplement CHECK constraints, FOREIGN KEY constraints, and other objects that are ignored in the schema migration phase when the destination is an Oracle database, an Oracle tenant of OceanBase Database, or a DB2 LUW database.

  3. OMS will automatically drop the additional hidden columns and unique indexes that the migration depends on.

    This operation is performed only for data migration between an Oracle database and an OceanBase database or between OceanBase databases. For more information, see Hidden column mechanisms.

  4. You need to supplement triggers, functions, stored procedures, and other database objects at the source that are not supported by OMS to the destination.

  5. You need to disable triggers and FOREIGN KEY constraints at the source when reverse incremental migration is selected for a data migration project.

The forward switchover procedure is as follows:

  1. Start forward switchover

    This step aims to confirm the start of forward switchover. No related operation is actually performed in the background. After you confirm that data migration is completed, you can click Start Forward Switchover to start the forward switchover process for business cutover.

    Notice

    Before you start forward switchover, make sure that writing has been stopped at the source.

  2. Perform switchover precheck

    This step checks the following items before forward switchover:

    • Latency between the source and destination. If the latency is within 15s, this item passes the check.

    • Write privilege of the migration account on the source. If reverse incremental migration is selected for a data migration project, you need to check whether the migration account configured for the source has the privilege to write data, to ensure that data can be properly written during reverse incremental migration.

    • Read privilege of the migration account on the destination. If reverse incremental migration is selected for a data migration project, you need to check whether the migration account configured for the destination has the privilege to read data, to ensure that data can be properly read from the destination during reverse incremental migration.

    • Incremental logs at the destination. If reverse incremental migration is selected for a data migration project, you need to check whether the incremental log configurations at the destination meet the log pull requirements during reverse incremental migration.
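The precheck items above reduce to a set of boolean checks that must all pass: latency within 15 seconds, plus the privilege and log checks when reverse incremental migration is selected. A minimal sketch, with all function and field names assumed:

```python
# Sketch of the switchover precheck: latency must be within 15 seconds, and
# the privilege/log checks apply only when reverse incremental migration is
# selected. Names are assumptions, not OMS APIs.
def switchover_precheck(latency_s, src_writable, dst_readable, dst_log_ok,
                        reverse_incremental=True):
    checks = {"latency within 15s": latency_s <= 15}
    if reverse_incremental:
        checks["source write privilege"] = src_writable
        checks["destination read privilege"] = dst_readable
        checks["destination incremental logs"] = dst_log_ok
    return all(checks.values()), checks

ok, detail = switchover_precheck(3, True, True, True)
print(ok)  # True
```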

    If the switchover precheck is passed, OMS automatically proceeds to the next step. If the switchover precheck fails, OMS provides two options: Retry and Skip.

    Notice

    If you choose to skip this step, issues such as data loss at the destination or failure of reverse incremental migration may occur. Proceed with caution.

  3. Start the destination Store

    Note

    This step is displayed only when a data migration project has the reverse incremental migration phase.

    If the forward switchover precheck is passed, OMS automatically starts the incremental log pull service for the destination to obtain the DML and DDL operations performed at the destination and parse and save related log data, to prepare for reverse incremental migration. This step lasts about three to five minutes.

  4. Confirm that writing has stopped at the source

    This step aims to confirm that the source has no continuous business writes. If you confirm that the source has no new data, click OK to proceed to the next step.

  5. Confirm that the destination is synchronized to the writing stop timestamp of the source

    This step aims to confirm that the destination is synchronized to the timestamp when writing is stopped at the source. If this step is in progress or fails and the synchronization is not completed after a long period, you can click Skip.

    Notice

    If you choose to skip this step, data inconsistency may occur between the source and destination. Proceed with caution.

  6. Stop forward synchronization

    This step aims to stop the forward synchronization service. After the service is stopped, the database changes made at the source are no longer synchronized to the destination. If the service fails to be stopped, OMS provides two options: Retry and Skip.

    Notice

    You can choose to skip this step only after you confirm that forward synchronization has been completed in the background. Otherwise, data of the source may be unexpectedly written to the destination. Proceed with caution.

  7. Process database objects

    This step aims to process objects that are ignored during data migration or not supported by OMS, to ensure that the business runs normally after cutover to the destination.

    • Migrate database objects to the destination: You need to migrate triggers, functions, stored procedures, and other database objects at the source that are not supported by OMS to the destination. After you complete this operation, click Mark as Complete.

    • Disable triggers and FOREIGN KEY constraints at the source: You need to perform this operation only in the reverse incremental migration phase of a data migration project. This aims to protect data from being affected by triggers or FOREIGN KEY constraints during reverse incremental migration. After you complete this operation, click Mark as Complete.

    • Supplement objects ignored during schema migration to the destination: This operation is automatically performed only when the destination is an Oracle database, an Oracle tenant of OceanBase Database, or a DB2 LUW database. It supplements CHECK constraints, FOREIGN KEY constraints, and other objects ignored during schema migration to the destination. For other destination types, these objects are migrated during schema migration by default and no extra operation is required.

    • Drop additional hidden columns and unique indexes added by OMS: This operation is automatically performed only for data migration between an Oracle database and an OceanBase database or between OceanBase databases. This operation aims to drop the additional hidden columns and unique indexes added at the destination by OMS to ensure data consistency during data migration. The execution time of this operation is subject to the data amount at the destination. OMS provides the Skip option for this operation. If you choose to skip this operation, you need to manually perform the drop operation. Proceed with caution. For more information, see Hidden column mechanisms.

  8. Start reverse incremental migration

    Note

    This step is displayed only when a data migration project has the reverse incremental migration phase.

    This step aims to start the incremental synchronization service for the destination to synchronize the incremental DML or DDL changes at the destination to the source. The configurations for incremental synchronization are consistent with those specified during project creation. For more information about the incremental DDL synchronization feature, see the topic of the specific database under Supported DDL operations and limits for synchronization.
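The eight-step procedure above is a linear sequence in which some steps (the precheck, the catch-up confirmation, and stopping forward synchronization) document a Skip option on failure. The sketch below only models that sequencing; all names are assumptions and no real OMS behavior is implied.

```python
# Illustrative sequencing of the forward switchover steps: each step either
# passes, is skipped (where the text documents a Skip option), or halts.
STEPS = [
    ("start forward switchover", False),
    ("switchover precheck", True),
    ("start destination store", False),
    ("confirm source writes stopped", False),
    ("confirm destination caught up", True),
    ("stop forward synchronization", True),
    ("process database objects", False),
    ("start reverse incremental migration", False),
]

def run_switchover(passed: dict) -> list:
    """passed maps step name -> True/False; unnamed steps default to True."""
    log = []
    for name, skippable in STEPS:
        if passed.get(name, True):
            log.append((name, "ok"))
        elif skippable:
            log.append((name, "skipped"))  # carries the documented risks
        else:
            raise RuntimeError(f"step failed and cannot be skipped: {name}")
    return log

log = run_switchover({"confirm destination caught up": False})
print(log[4])  # ('confirm destination caught up', 'skipped')
```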

Reverse incremental migration

For a Running data migration project, you can view its latency, current timestamp, and reverse incremental migration performance in the reverse incremental migration section. The latency is displayed in the following format: X seconds (updated Y seconds ago). Normally, Y is less than 20.

For a Paused or Failed data migration project, you can enable the DDL/DML statistics feature to collect statistics on the database operations performed after this feature is enabled. You can also view the specific information about the objects and performance of reverse incremental data synchronization.

  • The Synchronization Object Statistics tab displays the statistics about table-level DML statements executed for each incremental synchronization object in the current project. The numbers displayed in the Change Sum, Delete, Insert, and Update fields in the section above the Synchronization Object Statistics tab are the sums of the corresponding columns on this tab.

  • The Reverse Incremental Migration Performance tab displays the following content:

    • Latency: the latency in synchronizing incremental data from the destination database to the source database, in seconds.

    • Migration traffic: the traffic throughput of incremental data synchronization from the destination to the source, in KB/s.

    • Average execution time: the average execution time per SQL statement, measured in milliseconds.

    • Average commit time: the average commit time per transaction, measured in milliseconds.

    • RPS: the number of records processed per second.
