
OceanBase Migration Service

V4.2.3 Community Edition

  • OMS Documentation
  • OMS Community Edition Introduction
    • What is OMS Community Edition?
    • Terms
    • OMS Community Edition HA
    • Architecture
      • Overview
      • Hierarchical functional system
      • Basic components
    • Limitations
  • Quick Start
    • Data migration process
    • Data synchronization process
  • Deploy OMS Community Edition
    • Deployment modes
    • System and network requirements
    • Memory and disk requirements
    • Prepare the environment
    • Deploy OMS Community Edition on a single node
    • Deploy OMS Community Edition on multiple nodes in a single region
    • Deploy OMS Community Edition on multiple nodes in multiple regions
    • Integrate the OIDC protocol into OMS Community Edition to implement SSO
    • Scale out OMS Community Edition
    • Check the deployment
    • Deploy a time-series database (Optional)
  • OMS Community Edition console
    • Log on to the console of OMS Community Edition
    • Overview
    • User center
      • Configure user information
      • Change your logon password
      • Log off
  • Data migration
    • Overview
    • Migrate data from a MySQL database to OceanBase Database Community Edition
    • Migrate data from OceanBase Database Community Edition to a MySQL database
    • Migrate data from HBase to OBKV
    • Migrate data between instances of OceanBase Database Community Edition
    • Migrate data in active-active disaster recovery scenarios
    • Migrate data from a TiDB database to OceanBase Database Community Edition
    • Migrate data from a PostgreSQL database to OceanBase Database Community Edition
    • Manage data migration projects
      • View the details of a data migration project
      • Change the name of a data migration project
      • View and modify migration objects
      • Manage computing platforms
      • Use tags to manage data migration projects
      • Perform batch operations on data migration projects
      • Download and import settings of migration objects
      • Start and pause a data migration project
      • Release and delete a data migration project
    • Features
      • DML filtering
      • DDL synchronization
      • Configure matching rules for migration objects
      • Wildcard rules
      • Rename a database table
      • Use SQL conditions to filter data
      • Create and update a heartbeat table
      • Schema migration mechanisms
      • Schema migration operations
      • Set an incremental synchronization timestamp
    • Supported DDL operations in incremental migration and limits
      • DDL synchronization from MySQL database to OceanBase Community Edition
        • Overview of DDL synchronization from a MySQL database to a MySQL tenant of OceanBase Database
        • CREATE TABLE
          • Create a table
          • Create a column
          • Create an index or a constraint
          • Create partitions
        • Data type conversion
        • ALTER TABLE
          • Modify a table
          • Operations on columns
          • Operations on constraints and indexes
          • Operations on partitions
        • TRUNCATE TABLE
        • RENAME TABLE
        • DROP TABLE
        • CREATE INDEX
        • DROP INDEX
        • DDL incompatibilities between MySQL database and OceanBase Community Edition
          • Overview
          • Incompatibilities of the CREATE TABLE statement
            • Incompatibilities of CREATE TABLE
            • Column types that are supported to create indexes or constraints
          • Incompatibilities of the ALTER TABLE statement
            • Incompatibilities of ALTER TABLE
            • Change the type of a constrained column
            • Change the type of an unconstrained column
            • Change the length of a constrained column
            • Change the length of an unconstrained column
            • Delete a constrained column
          • Incompatibilities of DROP INDEX operations
      • Supported DDL operations in incremental migration from OceanBase Community Edition to a MySQL database and limits
      • Supported DDL operations in incremental migration between MySQL tenants of OceanBase Database
  • Data synchronization
    • Data synchronization overview
    • Create a project to synchronize data from OceanBase Database Community Edition to a Kafka instance
    • Create a project to synchronize data from OceanBase Database Community Edition to a RocketMQ instance
    • Manage data synchronization projects
      • View details of a data synchronization project
      • Change the name of a data synchronization project
      • View and modify synchronization objects
      • Use tags to manage data synchronization projects
      • Perform batch operations on data synchronization projects
      • Download and import the settings of synchronization objects
      • Start and pause a data synchronization project
      • Release and delete a data synchronization project
    • Features
      • DML filtering
      • DDL synchronization
      • Rename a topic
      • Use SQL conditions to filter data
      • Column filtering
      • Data formats
  • Create and manage data sources
    • Create data sources
      • Create an OceanBase-CE data source
      • Create a MySQL data source
      • Create a TiDB data source
      • Create a Kafka data source
      • Create a RocketMQ data source
      • Create a PostgreSQL data source
      • Create an HBase data source
    • Manage data sources
      • View data source information
      • Copy a data source
      • Edit a data source
      • Delete a data source
    • Create a database user
    • User privileges
    • Enable binlogs for the MySQL database
  • OPS & Monitoring
    • O&M overview
    • Go to the overview page
    • Server
      • View server information
      • Update the quota
      • View server logs
    • Components
      • Store
        • Create a store
        • View details of a store
        • Update the configurations of a store
        • Start and pause a store
        • Delete a store
      • Incr-Sync
        • View details of an Incr-Sync component
        • Start and pause an Incr-Sync component
        • Migrate an Incr-Sync component
        • Update the configurations of an Incr-Sync component
        • Batch O&M
        • Delete an Incr-Sync component
      • Full-Import
        • View details of a Full-Import component
        • Pause a Full-Import component
        • Rerun and resume a Full-Import component
        • Update the configurations of a Full-Import component
        • Delete a Full-Import component
      • Full-Verification
        • View details of a Full-Verification component
        • Pause a Full-Verification component
        • Rerun and resume a Full-Verification component
        • Update the configurations of a Full-Verification component
        • Delete a Full-Verification component
    • O&M Task
      • View O&M tasks
      • Skip a task or subtask
      • Retry a task or subtask
  • System management
    • Permission Management
      • Overview
      • Manage users
      • Manage departments
    • Alert center
      • View project alerts
      • View system alerts
      • Manage alert settings
    • Associate with OCP
    • System parameters
      • Modify system parameters
      • Modify HA configurations
  • OMS Community Edition O&M
    • Manage OMS services
    • OMS logs
    • Component O&M
      • O&M operations for the Supervisor component
      • CLI-based O&M for the Connector component
      • O&M operations for the Store component
    • Component tuning
      • Incr-Sync or Full-Import tuning
    • Component parameters
      • Coordinator
      • Condition
      • Source Plugin
        • Overview
        • StoreSource
        • DataFlowSource
        • LogProxySource
        • KafkaSource (TiDB)
        • HBaseSource
      • Sink Plugin
        • Overview
        • JDBC-Sink
        • KafkaSink
        • DatahubSink
        • RocketMQSink
        • HBaseSink
      • Store parameters
        • Parameters of a MySQL store
        • Parameters of an OceanBase store
      • Parameters of the CM component
      • Parameters of the Supervisor component
      • Full-Verification parameters
    • Set throttling
  • Reference Guide
    • API Reference
      • Overview
      • CreateProject
      • StartProject
      • StopProject
      • ResumeProject
      • ReleaseProject
      • DeleteProject
      • ListProjects
      • DescribeProject
      • DescribeProjectSteps
      • DescribeProjectStepMetric
      • DescribeProjectProgress
      • DescribeProjectComponents
      • ListProjectFullVerifyResult
      • StartProjectsByLabel
      • StopProjectsByLabel
      • CreateMysqlDataSource
      • CreateOceanBaseDataSource
      • ListDataSource
      • CreateLabel
      • ListAllLabels
      • ListFullVerifyInconsistenciesResult
      • ListFullVerifyCorrectionsResult
      • UpdateStore
      • UpdateFullImport
      • UpdateIncrSync
      • UpdateFullVerification
    • OMS error codes
    • Alert Reference
      • oms_host_down
      • oms_host_down_migrate_resource
      • oms_host_threshold
      • oms_migration_failed
      • oms_migration_delay
      • oms_sync_failed
      • oms_sync_status_inconsistent
      • oms_sync_delay
    • Telemetry parameters
  • Upgrade Guide
    • Overview
    • Upgrade OMS Community Edition in single-node deployment mode
    • Upgrade OMS Community Edition in multi-node deployment mode
    • FAQ
  • FAQ
    • General O&M
      • How do I modify the resource quotas of an OMS container?
      • Clear files in the Store component
      • How do I troubleshoot the OMS server down issue?
      • Deploy InfluxDB for OMS
      • Increase the disk space of the OMS host
    • Project diagnostics
      • What do I do when a store does not have data of the timestamp requested by the downstream?
      • What do I do when OceanBase Store failed to access an OceanBase cluster through RPC?
    • OPS & monitoring
      • What are the alert rules?
    • Data synchronization
      • FAQ about synchronization to a message queue
        • What are the strategies for ensuring the message order in incremental data synchronization to Kafka
    • Data migration
      • Full migration
        • How do I query the ID of a checker?
        • How do I query log files of the Checker component of OMS?
        • How do I query the verification result files of the Checker component of OMS?
        • What do I do if the destination table does not exist?
        • What can I do when the full migration failed due to LOB fields?
        • What do I do if garbled characters cannot be written into OceanBase Database V3.1.2?
      • Incremental synchronization
        • How do I skip DDL statements?
        • How do I update whitelists and blacklists?
        • What are the application scope and limits of ETL?
    • Installation and deployment
      • How do I upgrade Store?
      • How do I upgrade CDC?
      • What do I do when the "Failed to fetch" error is reported?
      • Change port numbers for components
      • Switch to the standby database

OceanBase

The Unified Distributed Database for the AI Era.

Products: OceanBase Cloud · OceanBase Enterprise · OceanBase Community Edition · OceanBase seekdb
Resources: Docs · Blog · Live Demos · Training & Certification
Company: About OceanBase · Trust Center · Legal · Partner · Contact Us

© OceanBase 2026. All rights reserved.

Cloud Service Agreement · Privacy Policy · Security

View the details of a data migration project

Last Updated: 2024-04-18 03:40:56
What is on this page
Access the details page
View basic information
View migration details
Schema migration
Full migration
Incremental synchronization
Full verification
Forward switchover
Reverse incremental migration


After a data migration project starts, you can view its metrics, such as basic information, progress, and status, on the project details page.

Access the details page

  1. Log on to the console of OMS Community Edition.

  2. In the left-side navigation pane, click Data Migration.

  3. On the Data Migration page, click the name of the target project. On the details page that appears, view the basic information and migration details of the project.

    On the Data Migration page, you can search for data migration projects by tag, status, type, or keywords. A data migration project can be in one of the following states:

    • Not Started: The data migration project has not been started. You can click Start in the Actions column to start the project.

    • Running: The data migration project is in progress. You can view the data migration plan and current progress on the right.

    • Modifying: The migration objects in the migration project are being modified.

    • Integrating: The modified migration objects are being integrated into the data migration project.

    • Paused: The data migration project is manually paused. You can click Resume in the Actions column to resume the project.

    • Failed: The data migration project has failed. You can view where the failure occurred on the right. To view the error message, click the project name to go to the project details page.

    • Completed: The data migration project is completed and OMS Community Edition has migrated the specified data to the destination database in the configured migration mode.

    • Releasing: The data migration project is being released. You cannot edit a data migration project in this status.

    • Released: The data migration project is released. After the project is released, OMS Community Edition terminates the current migration and incremental synchronization project.
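
The states above can serve as a small lookup when scripting against the console or API. A minimal sketch in Python, using the state names from this page (the helper itself is hypothetical, not part of OMS):

```python
# Map OMS project states (as listed above) to the console action that
# applies, if any. This mapping helper is illustrative only.
ACTION_BY_STATE = {
    "Not Started": "Start",    # click Start in the Actions column
    "Paused": "Resume",        # click Resume in the Actions column
    "Failed": "View Details",  # open the project to read the error message
}

# States in which no direct user action is suggested.
PASSIVE_STATES = {"Running", "Modifying", "Integrating",
                  "Completed", "Releasing", "Released"}

def next_action(state: str):
    """Return the suggested console action for a project state, or None."""
    if state in PASSIVE_STATES:
        return None
    return ACTION_BY_STATE.get(state)
```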

View basic information

The Basic Information section displays the basic information of the current data migration project.

ID: The unique ID of the data migration project.
Migration Type: The migration type chosen when the project was created.
Alert Level: The alert level of the data migration project. OMS Community Edition supports the following alert levels: No Protection, High Protection, Medium Protection, and Low Protection. For more information, see Alert settings.
Created By: The user who created the data migration project.
Created At: The time when the data migration project was created.
Concurrency for Full Migration: The value can be Smooth, Normal, or Fast. The amount of resources consumed by a full migration task varies with the specified concurrency.
Full Verification Concurrency: The value can be Smooth, Normal, or Fast. The resources consumed at the source and destination databases vary with the specified concurrency.
Connection Details: Click Connection Details to view information about the connection between the source and destination databases of the data migration project.
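
Because this document's API Reference lists a DescribeProject operation, the basic information above can also be fetched programmatically. The sketch below assumes a JSON-over-HTTP shape; the endpoint path, payload fields, and base URL are illustrative assumptions, so check the API Reference for the actual contract:

```python
import json
import urllib.request

def build_payload(project_id: str) -> bytes:
    # Assumed request body shape: {"id": "<project id>"}.
    return json.dumps({"id": project_id}).encode("utf-8")

def describe_project(base_url: str, project_id: str) -> dict:
    # "DescribeProject" is an operation name from the OMS API Reference;
    # the URL path below is a guess for illustration.
    req = urllib.request.Request(
        f"{base_url}/DescribeProject",
        data=build_payload(project_id),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call
        return json.load(resp)
```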

You can perform the following operations:

  • View migration objects

    Click View Objects in the upper-right corner. The migration objects of the current data migration project are displayed. You can also modify the migration objects of an ongoing data migration project. For more information, see View and modify migration objects.

  • View the component monitoring metrics

    Click View Component Monitoring in the upper-right corner to view the information about the Store, Incr-Sync, Full-Import, and Full-Verification components. You can perform the following operations on the components:

    • Start a component: Click Start in the Actions column of the component that you want to start. In the dialog box that appears, click OK.

    • Pause a component: Click Pause in the Actions column of the component that you want to pause. In the dialog box that appears, click OK.

    • Update a component: Click Update in the Actions column of the component that you want to update. On the Update Configuration page, modify the configurations and then click Update.

      Notice

      The system restarts after you update the component. Proceed with caution.

    • View logs: Click View Logs in the Actions column of the component. The View Logs page displays the latest logs. You can search for, download, and copy the logs.

  • View or modify parameter configurations

    • For a data migration project in the Running state, click the More icon in the upper-right corner and then select Settings from the drop-down list to view the parameters of the data migration project when it was created.

    • For a data migration project in the Not Started, Paused, or Failed state, click the More icon in the upper-right corner and then select Modify Parameter Configurations from the drop-down list. In the Modify Parameter Configurations dialog box, modify the parameters, and click OK.

      The parameters that can be modified vary with the type of the data migration project and the stage of the task.

  • Download object settings

    OMS Community Edition allows you to download the settings of data migration projects and import migration project settings in batches. For more information, see Download and import the settings of migration objects.

  • Modify the alert level

    OMS Community Edition allows you to modify the alert level of a data migration project. For more information, see Alert settings.

  • Change data sources

    OMS Community Edition allows you to change a data source of a data migration project. However, you can use this feature only in the following scenarios. Otherwise, the data migration project will fail and cannot be recovered.

    • The data source has experienced a primary/standby switchover and you need to replace the IP address of the original primary database with that of the new primary database.

    • The IP address or port number of the data source is changed, but the data source remains unchanged.

    • The username or password for logging on to the data source is changed.

    Perform the following steps to change the data source:

    1. Go to the details page of the data migration project.

    2. Click More in the upper-right corner and select Modify Data Source.

    3. In the Modify Data Source dialog box, select the new source or destination database as needed.

      Notice

      The type of the new data source must be the same as that of the current data source.

    4. Click OK.
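
The Notice above boils down to one precondition: the replacement data source must have the same type as the one it replaces, even though its address, port, or credentials may differ. A minimal sketch (the DataSource shape is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    type: str   # e.g. "MySQL", "OceanBase-CE" -- hypothetical labels
    host: str
    port: int

def can_replace(current: DataSource, new: DataSource) -> bool:
    # Only same-type replacements are allowed; host, port, and
    # credentials may change (e.g. after a primary/standby switchover).
    return current.type == new.type
```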

View migration details

The Migration Details section displays the status, progress, start time, completion time, and total time spent of all subtasks of the current project.

Schema migration

The definitions of data objects, such as tables, indexes, constraints, comments, and views, are migrated from the source database to the destination database. Temporary tables are automatically filtered out. If the source database is not OceanBase Database Community Edition, OMS Community Edition converts and constructs the SQL statements based on the syntax and standards of the destination OceanBase Database tenant type, and then replicates the definitions to the destination database.

When you advance to the forward switchover stage in a data migration project, the hidden columns and unique indexes are automatically dropped based on the type of the data migration project.

You can view the overall status, start time, completion time, total time consumed, and table and view migration progress for a schema migration project on the Schema Migration page. You can also perform the following operations on an object:

  • View Creation Syntax: On the Database or Table tab, click View next to the target object to view the creation syntax of a database, table, or index.

    Compatible DDL syntax executed on the OBServer node is displayed. Incompatible syntax is converted before it is displayed.

  • Modify Creation Syntax and Try Again: View the error information, check and modify the definition of the conversion result of a failed DDL statement, and then migrate the data to the destination again.

  • Retry/Retry All Failed Objects: You can retry failed schema migration tasks one by one or retry all failed tasks at a time.

  • Skip/Batch Skip: You can skip failed schema migration tasks one by one or skip multiple failed tasks at a time. To skip multiple objects at a time, click Batch Skip in the upper-right corner. If you skip an object, its index is also skipped.

  • Remove/Batch Remove: You can remove failed schema migration tasks one by one or remove multiple failed tasks at a time. To remove multiple failed tasks at a time, click Batch Remove in the upper-right corner. If you remove an object, its index is also removed.

  • View Details: The DDL statements executed on the OBServer node and the execution error information of a failed schema migration task are displayed.

Full migration

The existing data is migrated from tables in the source database to the corresponding tables in the destination database. On the Full Migration page, you can filter objects by source and destination databases, or select View Objects with Errors to filter out objects that hinder the overall migration progress. You can also view related information on the Table Objects, Table Indexes, and Full Migration Performance tabs. The status of a full migration task changes to Completed only after the table objects and table indexes are migrated.

  • On the Table Objects tab, you can view the names, source and destination databases, estimated data volume, migrated data volume, and status of tables.

  • On the Table Indexes tab, you can view the table objects, source and destination databases, creation time, end time, time consumed, and status. You can also view the index creation syntax and remove unwanted indexes.

  • On the Full Migration Performance tab, you can view the graphs of performance data such as the RPS and migration traffic of the source database and destination database, average read time and average sharding time of the source database, average write time of the destination database, and performance benchmarks. Such information can help you identify performance issues in a timely manner.

You can combine full migration with incremental synchronization to ensure data consistency between the source and destination databases. If any objects fail to be migrated during full migration, the causes of the failure are displayed.

Notice

  • If you did not select Schema Migration for Migration Type, the fields in the source database that match those in the destination database are migrated during full migration. OMS Community Edition does not check whether the table structures are consistent.

  • After full migration is completed and the subsequent step has started, you cannot choose OPS & Monitoring > Components > Full-Verification and click Rerun in the Actions column to rerun the target Full-Verification component.

Incremental synchronization

After incremental synchronization starts, the data migration service synchronizes data changes (inserts, updates, and deletes) in the source database to the corresponding tables in the destination database. While services continuously write data to the source database, OMS Community Edition starts the incremental data pull module before full migration begins: the module pulls incremental data from the source instance, parses and encapsulates it, and stores it in OMS Community Edition.

After the full data migration task is completed, OMS Community Edition starts the incremental data replay module, which pulls incremental data from the incremental data pull module, filters, maps, and converts it, and then synchronizes it to the destination instance. If an Incr-Sync exception occurs after a DDL statement is executed on the source database and the data migration project fails, a page appears that displays the DDL statement that caused the failure, along with a Skip button. You can click Skip and confirm the operation.

Notice

This operation may lead to data structure inconsistency between the source and destination databases. Proceed with caution.
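
The replay path described in this section, pull, then filter, map, and convert, can be sketched as a pipeline over change records. All record fields and helper names below are hypothetical; OMS implements this internally:

```python
def replay(records, object_filter, name_map, convert):
    """Filter, map, and convert incremental change records (illustrative)."""
    out = []
    for rec in records:
        if rec["table"] not in object_filter:   # filtering
            continue
        # mapping: rename the table per the configured object mapping
        rec = dict(rec, table=name_map.get(rec["table"], rec["table"]))
        out.append(convert(rec))                # conversion
    return out
```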

For a Running data migration project, you can view its latency, current timestamp, and incremental synchronization performance in the incremental synchronization section. The latency is displayed in the following format: X seconds (updated Y seconds ago). Normally, Y is less than 20.
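
The latency display format given above can be checked mechanically, flagging a reading whose "updated Y seconds ago" part has drifted past the normal bound of 20. The parsing helper is illustrative, not part of OMS:

```python
import re

def parse_latency(text: str):
    """Parse 'X seconds (updated Y seconds ago)' into (X, Y, stale?)."""
    m = re.fullmatch(r"(\d+) seconds \(updated (\d+) seconds ago\)", text)
    if not m:
        raise ValueError(f"unrecognized latency string: {text!r}")
    latency, age = int(m.group(1)), int(m.group(2))
    return latency, age, age >= 20  # stale if the reading itself is old
```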

For a Paused or Failed data migration project, you can enable the DDL/DML statistics feature to collect statistics on the database operations performed after the feature is enabled. You can also view specific information about the incremental synchronization objects and the incremental synchronization performance.

  • The Synchronization Object Statistics tab displays the statistics about table-level DML statements executed for each incremental synchronization object in the current project. The numbers displayed in the Change Sum, Delete, Insert, and Update fields in the section above the Synchronization Object Statistics tab are the sums of the corresponding columns on this tab.

    Incremental synchronization 1

  • The Incremental Synchronization Performance tab displays the following content:

    • Latency: the latency in synchronizing incremental data from the source database to the destination database, in seconds.

    • Migration traffic: the traffic throughput of incremental data synchronization from the source database to the destination database, in Kbit/s.

    • Average execution time: the average execution time of an SQL statement, in ms.

    • Average commit time: the average commit time of a transaction, in ms.

    • RPS: the number of records processed per second.

    When you create a data migration project, we recommend that you specify related information such as the alert level and alert frequency, to help you understand the project status. OMS Community Edition provides low-level protection by default. You can modify the alert level based on your business requirements. For more information, see Alert settings.

    • When the incremental synchronization latency exceeds the specified alert threshold, the incremental synchronization status changes from Running to Monitoring and the system triggers an alert.

    • When the incremental synchronization latency drops back to or below the specified alert threshold, the incremental synchronization status changes from Monitoring back to Running.
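The latency-driven status transition and the RPS metric above can be modeled as follows. This is a hedged sketch of the documented behavior, not OMS Community Edition source code; the function names are illustrative.

```python
# Assumed semantics of the Running <-> Monitoring transition: latency above
# the alert threshold moves the project to Monitoring; dropping back to or
# below the threshold restores Running.
def next_status(current: str, latency_s: float, threshold_s: float) -> str:
    if current == "Running" and latency_s > threshold_s:
        return "Monitoring"
    if current == "Monitoring" and latency_s <= threshold_s:
        return "Running"
    return current

def rps(records_processed: int, elapsed_s: float) -> float:
    """RPS as shown on the performance tab: records processed per second."""
    return records_processed / elapsed_s if elapsed_s > 0 else 0.0
```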

Full verification

After the full data migration and incremental data migration are completed, OMS Community Edition automatically initiates a full verification task to verify the data tables in the source and destination data sources.

Notice

  • If you did not select Schema Migration for Migration Type, the fields in the source database that match those in the destination database are verified during full verification. OMS Community Edition does not check whether the table structures are consistent.

  • During the full verification, if you perform the create, drop, alter, or rename operation on the source tables, the full verification may exit.
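As a rough illustration of the first point in the notice, a row-level comparison that checks only the columns present in both tables, without comparing the table structures themselves, might look like the following. This is a simplified sketch, not the actual verification component.

```python
# Simplified sketch of field-matched verification: only columns that exist
# in BOTH the source row and the destination row are compared; columns
# present on one side only are ignored (structures are not checked).
def verify_row(src_row: dict, dst_row: dict) -> list:
    """Return the names of shared columns whose values differ."""
    shared = src_row.keys() & dst_row.keys()
    return sorted(c for c in shared if src_row[c] != dst_row[c])
```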

You can also initiate custom data verification tasks in the incremental data synchronization process. On the Full Verification page, you can view the overall status, start time, end time, total consumed time, estimated total number of rows, number of migrated rows, real-time traffic, and RPS of the full verification task.

The Full Verification page contains the Verified Objects and Verification Performance tabs.

  • On the Verified Objects tab, you can view the verification progress and verification object list.

  • You can view the names, source and destination databases, full verification progress and results, and result summary of all migration objects.

  • You can filter migration objects by source or destination database.

  • You can select View Completed Objects Only to view the basic information of objects that have completed full verification, such as the object names.

  • You can choose Reverify > Restart Full Verification to run a full verification again for all migration objects.

  • Take note of the following items for tables with inconsistent verification results:

    If you need to reverify all data in the tables, choose Reverify > Reverify Abnormal Table.

    If you need to reverify only inconsistent data, choose Reverify > Verify Only Inconsistent Records.

    Notice

    Correction operations are not supported if the source database has no corresponding data.

  • On the Verification Performance tab, you can view the graphs of performance data such as the RPS and verification traffic of the source and destination databases and performance benchmarks. Such information can help you identify performance issues in a timely manner.

    OMS Community Edition allows you to skip full verification for a project that is being verified or has failed verification. On the Full Verification page, click Skip Full Verification in the upper-right corner. In the dialog box that appears, click OK.

    Notice

    If you skip full verification, you cannot resume the verification task for data comparison and correction. You can only clone the current project to initiate full verification again. Therefore, proceed with caution.

    After the full verification is completed, you can click Go To Next Stage to start a forward switchover. After you enter the switchover process, you cannot recheck the current verification task to compare or correct data.

Forward switchover

Notice

When you execute a data migration project in an active-active disaster recovery scenario, forward switchover is not supported.

Forward switchover is an abstracted, standardized version of the conventional system cutover process and does not involve the switchover of application connections. It comprises a series of tasks that OMS Community Edition performs for application switchover in a data migration project. Make sure that the entire forward switchover process is completed before you switch application connections over to the destination database.

Forward switchover is required for data migration. OMS Community Edition can ensure the completion of forward data migration in this process, and you can start the reverse incremental synchronization component based on your business needs. The forward switchover process involves the following operations:

  1. You must make sure that data migration is completed and wait until forward synchronization is completed.

  2. OMS Community Edition automatically supplements CHECK constraints, FOREIGN KEY constraints, and other objects that are ignored in the schema migration stage.

  3. OMS Community Edition automatically drops the additional hidden columns and unique indexes that the migration depends on.

    This operation is required only when you migrate data within OceanBase Database Community Edition. For more information, see Mechanisms for handling hidden columns.

  4. You must migrate triggers, functions, and stored procedures in the source database that are not supported by OMS Community Edition to the destination database.

  5. You must disable triggers and FOREIGN KEY constraints in the source database. This operation is required only when the data migration project involves reverse incremental synchronization.

The forward switchover process contains the following steps:

forward-switchover

  1. Start forward switchover.

    In this step, you can start forward switchover, but no operation is performed in the background. If you make sure that data migration is completed, you can click Start Forward Switchover to start the process.

    Notice

    Before you start forward switchover, make sure that data writing has stopped in the source database.

  2. Perform a switchover precheck.

    In this step, OMS Community Edition checks the following items:

    • Synchronization latency between the source and destination databases. If the synchronization latency is within 15 seconds, this check item is passed.

    • Write privilege of the account in the source database. If the data migration project involves reverse incremental synchronization, OMS Community Edition additionally checks whether the account configured in the source database has the privilege to write data, to ensure that data can be properly written during reverse incremental synchronization.

    • Privilege of the account in the destination database for incremental data read. If the data migration project involves reverse incremental synchronization, OMS Community Edition additionally checks whether the account configured in the destination database has the privilege to read data. This ensures that incremental data can be properly read from the destination database during reverse incremental synchronization.

    • Incremental logs in the destination database. If the data migration project involves reverse incremental synchronization, OMS Community Edition additionally checks whether the incremental logging configuration in the destination database meets the log extraction requirements of reverse incremental synchronization.

    If the switchover precheck is passed, OMS Community Edition automatically performs the next step. If the precheck fails, you can click Retry or Skip.

    Notice

    If you click Skip, data loss may occur in the destination database, or the reverse incremental synchronization process may fail. Proceed with caution.
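The precheck items in step 2 can be summarized as a simple pass/fail routine. This is an assumed structure for illustration only; the real precheck runs inside OMS Community Edition and its check names are hypothetical here.

```python
# Hedged sketch of the switchover precheck: latency must be within 15
# seconds; the remaining items apply only when the project involves
# reverse incremental synchronization.
def switchover_precheck(latency_s, reverse_sync, src_writable=None,
                        dst_readable=None, dst_incr_logs_ok=None):
    failures = []
    if latency_s > 15:                       # synchronization latency item
        failures.append("latency")
    if reverse_sync:
        if not src_writable:                 # source account can write
            failures.append("source write privilege")
        if not dst_readable:                 # destination account can read
            failures.append("destination read privilege")
        if not dst_incr_logs_ok:             # destination incremental logging
            failures.append("destination incremental logs")
    return failures                          # empty list -> precheck passed
```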

  3. Start the destination store.

    Note

    This step is available only when the data migration project involves reverse incremental synchronization.

    If the precheck for forward switchover is passed, OMS Community Edition automatically starts incremental log pulling for the destination database. This way, OMS Community Edition obtains DML and DDL operations in the destination database and parses and saves related log data to prepare for reverse incremental synchronization. This step takes about 3 to 5 minutes.

  4. Confirm that data writing has stopped in the source database.

    In this step, OMS Community Edition checks whether business data is still being written to the source database. If you make sure that no new data is written to the source database, click OK to go to the next step.

  5. Confirm the data writing stop timestamp upon synchronization completion.

    In this step, OMS Community Edition checks whether the destination database is synchronized to the data writing stop timestamp in the source database. If not, OMS Community Edition continues to check the destination database until it is synchronized to the timestamp. This way, OMS Community Edition makes sure that all data in the destination database is updated.
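Step 5 amounts to polling the destination's applied position until it reaches the source's write-stop timestamp. A minimal sketch, with an assumed `get_applied_ts` callback standing in for whatever OMS Community Edition queries internally:

```python
# Illustrative wait loop: keep checking the destination's applied timestamp
# until it catches up to the source's write-stop timestamp.
def wait_for_checkpoint(stop_ts: int, get_applied_ts, max_polls: int = 1000):
    """get_applied_ts is a callable returning the destination's current
    applied timestamp; returns True once it reaches stop_ts."""
    for _ in range(max_polls):
        if get_applied_ts() >= stop_ts:
            return True
    return False  # gave up before the destination caught up
```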

  6. Stop forward synchronization.

    In this step, you can stop forward synchronization. After forward synchronization is stopped, any database changes in the source database will no longer be synchronized to the destination database. If forward synchronization fails to be stopped, you can click Retry or Skip.

    Notice

    You can click Skip only when you make sure that forward synchronization is completed in the background. Otherwise, data in the source database may be unexpectedly written to the destination database. Proceed with caution.

  7. Process database objects.

    In this step, you can process the objects that are ignored in data migration or not supported by OMS Community Edition. This ensures normal operations of your business after the switchover to the destination database.

    • Migrate database objects to the destination database: You must migrate triggers, functions, and stored procedures in the source database that are not supported by OMS Community Edition to the destination database. After you complete the migration, click Mark as Complete.

    • Disable triggers and FOREIGN KEY constraints in the source database: This operation is required only when the data migration project involves reverse incremental synchronization. It prevents data from being affected by triggers or FOREIGN KEY constraints, to avoid failures of reverse incremental synchronization. After you complete this operation, click Mark as Complete.

    • Supplement the objects ignored in schema migration to the destination database: OMS Community Edition automatically supplements the objects that are ignored in schema migration to the destination database, such as CHECK constraints and FOREIGN KEY constraints. The preceding objects are ignored during schema migration by default.

    • Drop hidden columns and unique indexes added by OMS Community Edition: This operation is required only when you migrate data within OceanBase Database Community Edition. OMS Community Edition automatically drops the hidden columns and unique indexes that are added to the destination database to ensure data consistency. This operation runs automatically, and the amount of time required depends on the amount of data in the destination database. You can click Skip to skip this operation, but you must manually perform it later. Proceed with caution. For more information, see Mechanisms for handling hidden columns.
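For the "disable triggers and FOREIGN KEY constraints" task, assuming a PostgreSQL source (as in the migration scenario this topic follows), `ALTER TABLE ... DISABLE TRIGGER ALL` disables both user triggers and the internal triggers that enforce FOREIGN KEY constraints. A small helper to generate such statements; the table list is hypothetical:

```python
# Hedged sketch: generate the statements a DBA might run on a PostgreSQL
# source before reverse incremental synchronization. DISABLE TRIGGER ALL
# also disables the internal triggers that enforce foreign keys.
def disable_trigger_sql(tables):
    """tables is a list of (schema, table) pairs; returns one ALTER TABLE
    statement per table."""
    return [f'ALTER TABLE "{schema}"."{table}" DISABLE TRIGGER ALL;'
            for schema, table in tables]
```

Note that disabling system (constraint) triggers requires superuser privileges in PostgreSQL, and the mirror-image `ENABLE TRIGGER ALL` statements should be run once reverse incremental synchronization is no longer needed.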

  8. Start reverse incremental synchronization.

    Note

    This step is available only when the data migration project involves reverse incremental synchronization.

    In this step, you can start incremental synchronization for the destination database to synchronize incremental DML or DDL operations from the destination database to the source database in real time. The configuration of incremental synchronization is the same as that specified when the project was created. For more information, see Incremental synchronization of DDL operations.

Reverse incremental migration

Notice

When you execute a data migration project in an active-active disaster recovery scenario, OMS Community Edition automatically starts reverse incremental synchronization before full verification based on the settings of incremental synchronization.

For a Running data migration project, you can view its latency, current timestamp, and reverse incremental migration performance in the reverse incremental migration section. The latency is displayed in the following format: X seconds (updated Y seconds ago). Normally, Y is less than 20.

For a Paused or Failed data migration project, you can enable the DDL/DML statistics feature to collect statistics on the database operations performed after this feature is enabled. You can also view the specific information about the objects and performance of reverse incremental synchronization.

  • The Synchronization Object Statistics tab displays the statistics about table-level DML statements executed for each incremental synchronization object in the current project. The numbers displayed in the Change Sum, Delete, Insert, and Update fields in the section above the Synchronization Object Statistics tab are the sums of the corresponding columns on this tab.

  • The Reverse Incremental Migration Performance tab displays the following content:

    • Latency: the latency in synchronizing incremental data from the destination database to the source database, in seconds.

    • Migration traffic: the traffic throughput of incremental data synchronization from the destination database to the source database, in Kbit/s.

    • Average execution time: the average execution time of an SQL statement, in ms.

    • Average commit time: the average commit time of a transaction, in ms.

    • RPS: the number of records processed per second.
