OceanBase Migration Service

V4.0.2 Enterprise Edition

Synchronize data from a MySQL database to a DataHub instance

Last Updated: 2026-04-14 07:36:47

This topic describes how to synchronize data from a MySQL database to a DataHub instance.

Prerequisites

  • You have enabled binlogs for the self-managed MySQL database. For more information, see Enable binlogs for the MySQL database.

  • You have created a database user for the source MySQL database and granted it the required privileges for the data synchronization tasks. For more information, see Create a database user.
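
Before you configure the project, you can run a quick sanity check on the source MySQL database. The statements below are a minimal sketch: the user name, host, and privilege list are illustrative placeholders, so follow Enable binlogs for the MySQL database and Create a database user for the authoritative requirements.

    -- Confirm that binary logging is enabled and uses the ROW format
    -- (row-based binlogs are typically required for incremental parsing).
    SHOW VARIABLES LIKE 'log_bin';
    SHOW VARIABLES LIKE 'binlog_format';

    -- Illustrative synchronization user; 'oms_sync'@'%' and the password are placeholders,
    -- and the exact privilege list must match the linked documentation.
    CREATE USER 'oms_sync'@'%' IDENTIFIED BY '<password>';
    GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'oms_sync'@'%';
    FLUSH PRIVILEGES;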

Limits

  • The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.

  • OceanBase Migration Service (OMS) supports data synchronization only for databases that use the UTF-8 or GBK character set.

  • Data source identifiers, user accounts, and tags must be globally unique in OMS.

DataHub has the following limits:

  • DataHub limits the size of a message based on the cloud environment, usually to 1 MB.

  • DataHub sends messages in batches, and each batch cannot exceed 4 MB in size. You can modify the batch.size parameter to control when a batch of messages is sent. By default, 20 messages are sent in one batch within one second.

  • OMS can synchronize only objects whose database and table names are ASCII strings without special characters. The special characters are: . | \ " ' ` ( ) = ; / & and line breaks (\n).
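
If you are unsure whether your object names satisfy these limits, you can run a conservative check against information_schema on the source MySQL database. The sketch below accepts only plain ASCII identifiers (letters, digits, and underscores), which is stricter than the rule above, and 'your_database' is a placeholder.

    -- Tables whose names are not plain ASCII identifiers (conservative check).
    SELECT TABLE_SCHEMA, TABLE_NAME
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'your_database'
      AND TABLE_NAME NOT REGEXP '^[A-Za-z0-9_]+$';

    -- The same check for column names.
    SELECT TABLE_NAME, COLUMN_NAME
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'your_database'
      AND COLUMN_NAME NOT REGEXP '^[A-Za-z0-9_]+$';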

Usage notes

  • If the clocks are not synchronized between the nodes or between the client and a server, the latency of incremental synchronization may be negative.

  • If incremental parsing is required for the MySQL database, you must specify the ID of the MySQL server.

  • When data transmission is resumed for a project, some data (transmitted within the last minute) may be duplicated in the DataHub instance. Therefore, data deduplication is required in downstream applications.

  • We recommend that you select no more than 15,000 database objects for a project.

    If a table contains LOB fields or contains more than 500 columns, we recommend that you create a dedicated project for the table and set the JVM parameters of related components as needed. For example, set the limitator.select.batch.max parameter for the full verification component, the sourceBatchSize parameter for the full import component, and the sourceBatchSize parameter for the incremental synchronization component.

    Execute the following statement to query tables that contain LOB fields: SELECT DISTINCT(TABLE_NAME) FROM ALL_TAB_COLUMNS WHERE DATA_TYPE IN ('BLOB', 'CLOB', 'NCLOB') AND OWNER = 'XXX';. Note that this statement queries the Oracle data dictionary; a MySQL equivalent against information_schema is sketched after this list.

  • When you synchronize incremental data from a MySQL database to a DataHub instance, the initial schema is synchronized to the DataHub schema. The following table lists the supported MySQL data types and the DataHub data types that they map to.

    MySQL data type    Mapped data type in DataHub
    BIT                STRING (Base64-encoded)
    CHAR               STRING
    BINARY             STRING (Base64-encoded)
    VARBINARY          STRING (Base64-encoded)
    INT                BIGINT
    TINYTEXT           STRING
    SMALLINT           BIGINT
    MEDIUMINT          BIGINT
    BIGINT             DECIMAL (used because the maximum unsigned BIGINT value exceeds the maximum value of a Java long)
    FLOAT              DECIMAL
    DOUBLE             DECIMAL
    DECIMAL            DECIMAL
    DATE               STRING
    TIME               STRING
    YEAR               BIGINT
    DATETIME           STRING
    TIMESTAMP          TIMESTAMP (accurate to milliseconds)
    VARCHAR            STRING
    TINYBLOB           STRING (Base64-encoded)
    BLOB               STRING (Base64-encoded)
    TEXT               STRING
    MEDIUMBLOB         STRING (Base64-encoded)
    MEDIUMTEXT         STRING
    LONGBLOB           STRING (Base64-encoded)
    LONGTEXT           STRING
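
To make the mapping concrete, here is an illustrative source table (the table and column names are hypothetical) annotated with the DataHub type that each column maps to according to the table above.

    -- Hypothetical source table; the comments show the mapped DataHub type.
    CREATE TABLE demo_orders (
      id         BIGINT PRIMARY KEY,   -- DECIMAL
      customer   VARCHAR(64),          -- STRING
      amount     DECIMAL(10, 2),       -- DECIMAL
      created_at DATETIME,             -- STRING
      updated_at TIMESTAMP,            -- TIMESTAMP (accurate to milliseconds)
      note       TEXT,                 -- STRING
      payload    BLOB                  -- STRING (Base64-encoded)
    );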
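The LOB query quoted in the usage notes above targets the Oracle data dictionary. For a MySQL source, a rough equivalent against information_schema is sketched below; it treats the BLOB and TEXT column families as LOB fields, and 'your_database' is a placeholder. The last statement shows the server ID that incremental parsing relies on; for binlog-based replication it typically must be a non-zero value.

    -- Tables in the source database that contain LOB-like columns (BLOB/TEXT families).
    SELECT DISTINCT TABLE_NAME
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'your_database'
      AND DATA_TYPE IN ('tinyblob', 'blob', 'mediumblob', 'longblob',
                        'tinytext', 'text', 'mediumtext', 'longtext');

    -- The server ID mentioned in the usage notes.
    SHOW VARIABLES LIKE 'server_id';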

Supplemental properties

If you manually create a topic, add the following properties to the DataHub schema before you start the synchronization project. If OMS creates the topic and synchronizes the schema automatically, OMS adds these properties for you.

Notice

The following table applies only to tuple topics.

Parameter           Type      Description
oms_timestamp       STRING    The time when the change was made.
oms_table_name      STRING    The name of the source table after mapping.
oms_database_name   STRING    The name of the source database after mapping.
oms_sequence        STRING    The sequence number of the change, which increases monotonically on a single server.
oms_record_type     STRING    The change type. Valid values: UPDATE, INSERT, and DELETE.
oms_is_before       STRING    Indicates whether the record is the pre-update image when the change type is UPDATE. Y means it is the original data.
oms_is_after        STRING    Indicates whether the record is the post-update image when the change type is UPDATE. Y means it is the modified data.

Procedure

  1. Create a data synchronization project.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, click Data Synchronization.

    3. On the Data Synchronization page, click Create Synchronization Project in the upper-right corner.

  2. On the Select Source and Destination page, specify the following parameters.

    Parameter Description
    Synchronization Project Name We recommend a combination of letters and digits. The name cannot contain spaces and cannot exceed 64 characters in length.
    Labels Click the field and select a target tag from the drop-down list. You can click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data synchronization projects.
    Source If you have created a MySQL data source, select it from the drop-down list. Otherwise, click Add Data Source in the drop-down list to create one in the dialog box on the right side. For more information about parameters, see Create a MySQL data source.
    Destination If you have created a DataHub data source, select it from the drop-down list. Otherwise, click Add Data Source in the drop-down list to create one in the dialog box on the right side. For more information about parameters, see Create a DataHub data source.
  3. Click Next. On the Select Synchronization Type page, select the synchronization type for the current data synchronization project.

    The supported synchronization types include Schema Synchronization and Incremental Synchronization. Schema Synchronization creates a topic. Incremental Synchronization supports only the DML Synchronization option. The supported DML operations are Insert, Delete, and Update. Select the options based on your business needs. For more information, see DML filtering.

  4. Click Next. On the Select Synchronization Objects page, select the type and range of topics to be synchronized.

    Available topic types are Tuple and Blob. Tuple topics contain records that are similar to data records in a database, and each record contains multiple columns. A Blob topic accepts only a block of binary data as a record, and the data is Base64-encoded for transmission. For more information, see the DataHub documentation.

    Select the type of topics to be synchronized and perform the following steps:

    1. In the left-side pane, select the objects to be synchronized.

    2. Click >.

    3. Select a mapping method.

      Notice

      When you set the topic type to tuple without selecting Schema Synchronization, you can only synchronize a single table to a single topic.

      • To synchronize a single table, select the mapping method as needed in the Map Object to Topic dialog box and click OK.

        If you did not select Schema Synchronization when you set the synchronization type, you can select only Existing Topics here. If you selected Schema Synchronization, you can use only one mapping method to create or select a topic.

        For example, if you selected Schema Synchronization and you use both the Create Topic and Select Topic mapping methods, or rename the topic, the precheck returns an error because the options conflict.

        Parameter Description
        Create Topic Enter the name of the new topic in the text box. The topic name can contain letters, digits, and underscores (_) and must start with a letter. It must not exceed 128 characters in length.
        Select Topic OMS allows you to query DataHub topics. You can click Select Topic, and then find and select the topics to be synchronized from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears.
        Batch Generate Topics The format for generating topics in batches is: Topic_${Database Name}_${Table Name}.

        If you select Create Topic or Batch Generate Topics, after the schema migration succeeds, you can query the created topics on the DataHub side. By default, the number of data shards is 2 and the data expiration time is 7 days. These parameters cannot be modified. If the topics do not meet your business needs, you can create topics in the destination database as needed.

      • To synchronize multiple tables, click OK in the dialog box that appears.

        If you selected a tuple topic and multiple tables without selecting Schema Synchronization, after you select a topic and click OK in the Map Object to Topic dialog box, multiple tables are displayed under the topic in the right pane, but only one table can be synchronized. If you click Next, a prompt appears, indicating that only one-to-one mapping is supported between tuple topics and tables.

    4. Click OK.

    When you synchronize data from a MySQL database to a DataHub instance, OMS allows you to import objects from text data, set sharding columns for tables in the destination database, and remove a single object or all objects. Objects in the destination database are listed in the structure of Topic > Database > Table.

    Actions Steps
    Import Objects
    1. In the list on the right, click Import Objects in the upper-right corner.
    2. In the dialog box that appears, click OK.
      Notice
      This operation will overwrite previous selections. Proceed with caution.
    3. In the Import Synchronization Objects dialog box, import the objects to be synchronized.
      You can configure synchronization objects by importing a CSV file. For more information, see Download and import the settings of synchronization objects.
    4. Click Validate.
    5. After the validation succeeds, click OK.
    Change Topic When the topic type is set to Blob, you can change topics for objects in the destination database. For more information, see Change a topic.
    Parameter
    1. In the list on the right, move the pointer over the target object.
    2. Click Settings.
    3. In the Settings dialog box, click Shard Columns and select the target sharding columns in the drop-down list. You can select multiple fields as sharding columns. This parameter is optional.
      Unless otherwise specified, select the primary key columns as sharding columns. If the primary key values are not evenly distributed, select fields that are unique and evenly distributed as sharding columns to avoid potential performance issues. Sharding columns can be used for the following purposes:
      • Load balancing: If the destination table supports concurrent writes, messages are distributed across sending threads based on the values of the sharding columns.
      • Orderliness: OMS ensures that messages with the same sharding column values are received in order, that is, in the order in which the corresponding DML statements were executed.
    4. Click OK.
    Remove/Remove All During data mapping, OMS allows you to remove one or more selected objects to be synchronized to the destination.
    • Remove a single synchronization object
      In the list on the right of the selection section, move the pointer over the target object, and click Remove. The synchronization object is removed.
    • Remove all synchronization objects
      In the list on the right of the selection section, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
  5. Click Next. On the Synchronization Options page, specify the following parameters.

    Parameter Description
    Synchronization Settings > Incremental Synchronization Start Timestamp: Data generated after this timestamp is synchronized. The default value is the current system time. You can select a point in time or enter a timestamp.
    Notice
    You can select the current time or a point in time earlier than the current time. This parameter is closely related to the retention period of archived logs. Generally, you can start data synchronization from the current timestamp.
    Advanced Options > Serialization Method: The message format for synchronizing data to the DataHub instance. Valid values: Default, Canal, Dataworks (version 2.0 supported), SharePlex, and DefaultExtendColumnType. For more information, see Data formats.
    Notice
    This parameter is available only when the topic type is set to Blob on the Select Synchronization Type page.
    Advanced Options > Enable Intra-Transaction Sequence: Specifies whether to maintain the order of operations within a transaction. If this feature is enabled, OMS marks each DML statement in a transaction with a sequence number before delivering it downstream.
    Notice
    This parameter is valid only for the SharePlex format and lets you obtain the sequence numbers of the DML statements that form a transaction. For example, if a transaction contains 10 DML statements numbered 1 to 10, OMS delivers the data to the destination in that same order.
    Advanced Options > Partitioning Rule: The rule for distributing data from the source database across the shards of a DataHub topic. Valid values: Hash and Table.
    • Hash: OMS uses a hash algorithm to select the shard of the DataHub topic based on the value of the primary key or the sharding column.
    • Table: OMS delivers all data in a table to the same partition and uses the table name as the hash key.
    Advanced Options > Business System Identification (Optional): Identifies the source business system of the data. The identifier consists of 1 to 20 characters.
  6. Click Precheck.

    During the precheck, OMS checks whether the schema of the logical table is consistent with that of the physical table. OMS checks only the column names, column types, and whether columns are nullable; it does not check column lengths or default values. If an error is returned during the precheck:

    • You can identify and troubleshoot the issue and then perform the precheck again.

    • You can click Skip in the Actions column of the precheck item that reports the error. A dialog box appears, describing the impact of skipping this error. If you want to continue, click OK in the dialog box.

  7. Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data synchronization project. You can start the project later as needed.

    OMS allows you to modify the synchronization objects while the data synchronization project is running. For more information, see View and modify synchronization objects. After a data synchronization project is started, the synchronization is performed based on the selected synchronization type. For more information, see the "View synchronization details" section in the View details of a data synchronization project topic.

    If data access fails due to a network failure or the slow startup of processes, go to the project list or the project details page and click Restore.
