OceanBase Migration Service
V4.0.2 Enterprise Edition


    Synchronize data from OceanBase Database to a DataHub instance

    Last Updated: 2026-04-14 07:36:47
    What is on this page
    Prerequisites
    Limits
    Usage notes
    Supported DDL operations for synchronization
    Data type mappings
    Data type mappings between MySQL tenants of OceanBase Database and DataHub instances
    Data type mappings between Oracle tenants of OceanBase Database and DataHub instances
    Supplemental properties
    Procedure


    This topic describes how to synchronize data from a MySQL or Oracle tenant of OceanBase Database to a DataHub instance.

    Prerequisites

    • You have created a dedicated database user for data synchronization in the source OceanBase database and granted the required privileges to that user. For more information, see Create a database user.

    • You have created data sources for the source and destination. For more information, see Create a physical OceanBase data source and Create a DataHub data source.

    Limits

    • In the full synchronization scenario, tables without a primary key cannot be synchronized.

    • DDL synchronization applies only to Blob topics.

    • During data synchronization, OceanBase Migration Service (OMS) allows you to drop a table and then create a new one. In other words, you can execute DROP TABLE followed by CREATE TABLE. However, OMS does not support creating a new table by renaming an existing one; that is, you cannot execute RENAME TABLE a TO a_tmp.

    • OMS supports synchronization of data of the UTF8 and GBK character sets.

    • The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.

    • Data source identifiers, user accounts, and tags must be globally unique in OMS.

    • OMS supports only the synchronization of objects whose database name or table name is an ASCII string without special characters. The special characters are . | \ " ' ` ( ) = ; / & \n

    DataHub has the following limits:

    • DataHub limits the size of a single message based on the cloud environment, usually to 1 MB.

    • DataHub sends messages in batches, with each batch no larger than 4 MB. If a single message already meets the conditions for sending, you can modify the batch.size parameter. By default, at most 20 messages are sent per batch within one second.

    • For more information about the limits and naming conventions of DataHub, see Limits of DataHub.
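    The object-naming limits above (ASCII names only, no Chinese characters, none of the listed special characters) can be checked client-side before you configure a project. The following helper is a hypothetical sketch, not part of OMS:

```python
# Hypothetical helper (not an OMS API): checks whether a database or
# table name satisfies the documented constraints -- ASCII only (which
# also rules out Chinese characters) and none of the special
# characters . | \ " ' ` ( ) = ; / & or a newline.
SPECIAL_CHARS = set('.|\\"\'`()=;/&\n')

def is_syncable_name(name: str) -> bool:
    return name.isascii() and not any(ch in SPECIAL_CHARS for ch in name)

print(is_syncable_name("orders_2024"))  # True
print(is_syncable_name("orders;tmp"))   # False
```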

    Usage notes

    Notice

    The following table applies only to tuple topics.

    | Type | Description | Value range |
    |---|---|---|
    | BIGINT | An 8-byte signed integer. | -9223372036854775807 to 9223372036854775807 |
    | DOUBLE | An 8-byte double-precision floating-point number. | -1.0 × 10^308 to 1.0 × 10^308 |
    | BOOLEAN | The Boolean type. | True/False, true/false, 0/1 |
    | TIMESTAMP | The timestamp type. It is accurate to microseconds. | - |
    | STRING | A string that supports only UTF-8 encoding. | A single STRING column supports a maximum of 2 MB. |
    | INTEGER | A 4-byte integer. | -2147483648 to 2147483647 |
    | FLOAT | A 4-byte single-precision floating-point number. | -3.40292347 × 10^38 to 3.40292347 × 10^38 |
    | DECIMAL | The decimal type. | -10^38 + 1 to 10^38 - 1 |
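    As a rough illustration of these ranges, a pre-write validity check might look as follows. The helper below is hypothetical and not part of the DataHub SDK:

```python
# Hypothetical client-side check against the tuple-topic ranges above.
TUPLE_LIMITS = {
    "BIGINT":  (-9223372036854775807, 9223372036854775807),
    "INTEGER": (-2147483648, 2147483647),
}
MAX_STRING_BYTES = 2 * 1024 * 1024  # a STRING column holds at most 2 MB

def fits_tuple_type(value, datahub_type: str) -> bool:
    if datahub_type in TUPLE_LIMITS:
        low, high = TUPLE_LIMITS[datahub_type]
        return low <= value <= high
    if datahub_type == "STRING":
        return len(value.encode("utf-8")) <= MAX_STRING_BYTES
    if datahub_type == "BOOLEAN":
        return value in (True, False, 0, 1)
    raise ValueError(f"unhandled type: {datahub_type}")

# A value that fits INTEGER's 4-byte range passes; 2**31 does not.
print(fits_tuple_type(2**31, "INTEGER"))  # False
```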

    Supported DDL operations for synchronization

    Notice

    DDL synchronization applies only to Blob topics.

    • ALTER TABLE

    • CREATE INDEX

    • DROP INDEX

    • TRUNCATE TABLE

      In delayed deletion, two TRUNCATE TABLE DDL statements may exist in the same transaction. In this case, downstream applications must consume these DDL statements in idempotent mode.
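      Because delayed deletion can surface the same TRUNCATE TABLE statement twice within one transaction, downstream consumers should apply DDL idempotently. A minimal sketch, assuming a simplified (txn_id, ddl) record shape that is not the actual OMS wire format:

```python
# Sketch of idempotent DDL consumption: a duplicate DDL statement
# within the same transaction is applied only once.
def apply_ddl_idempotently(records, execute):
    seen = set()
    for txn_id, ddl in records:
        key = (txn_id, ddl)
        if key in seen:  # duplicate within the same transaction
            continue
        seen.add(key)
        execute(ddl)

applied = []
apply_ddl_idempotently(
    [("tx1", "TRUNCATE TABLE t"), ("tx1", "TRUNCATE TABLE t")],
    applied.append,
)
# applied now holds the TRUNCATE statement exactly once
```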

    Data type mappings

    At present, a project that synchronizes data to a DataHub instance supports only the following data types: INTEGER, BIGINT, TIMESTAMP, FLOAT, DOUBLE, DECIMAL, STRING, and BOOLEAN.

    • If you create a topic of another type when you set topic mapping, data synchronization will fail.

    • The following tables describe the default mapping rules, which are the recommended choices. If you change a mapping, an error may occur.

    Data type mappings between MySQL tenants of OceanBase Database and DataHub instances

    | MySQL tenant of OceanBase Database | Default mapped-to data type in DataHub |
    |---|---|
    | BIT | STRING (Base64-encoded) |
    | CHAR | STRING |
    | BINARY | STRING (Base64-encoded) |
    | VARBINARY | STRING (Base64-encoded) |
    | INT | BIGINT |
    | TINYTEXT | STRING |
    | SMALLINT | BIGINT |
    | MEDIUMINT | BIGINT |
    | BIGINT | DECIMAL (used because the maximum unsigned value exceeds the maximum LONG value in Java) |
    | FLOAT | DECIMAL |
    | DOUBLE | DECIMAL |
    | DECIMAL | DECIMAL |
    | DATE | STRING |
    | TIME | STRING |
    | YEAR | BIGINT |
    | DATETIME | STRING |
    | TIMESTAMP | TIMESTAMP (accurate to milliseconds) |
    | VARCHAR | STRING |
    | TINYBLOB | STRING (Base64-encoded) |
    | BLOB | STRING (Base64-encoded) |
    | TEXT | STRING |
    | MEDIUMBLOB | STRING (Base64-encoded) |
    | MEDIUMTEXT | STRING |
    | LONGBLOB | STRING (Base64-encoded) |
    | LONGTEXT | STRING |
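    The default mappings in the table above can be captured as a simple lookup, for example when validating a manually created topic schema. The dict below is an illustration, not an OMS artifact:

```python
# The default MySQL-tenant -> DataHub mappings, as a lookup table.
MYSQL_TO_DATAHUB = {
    "BIT": "STRING", "CHAR": "STRING", "BINARY": "STRING",
    "VARBINARY": "STRING", "INT": "BIGINT", "TINYTEXT": "STRING",
    "SMALLINT": "BIGINT", "MEDIUMINT": "BIGINT", "BIGINT": "DECIMAL",
    "FLOAT": "DECIMAL", "DOUBLE": "DECIMAL", "DECIMAL": "DECIMAL",
    "DATE": "STRING", "TIME": "STRING", "YEAR": "BIGINT",
    "DATETIME": "STRING", "TIMESTAMP": "TIMESTAMP", "VARCHAR": "STRING",
    "TINYBLOB": "STRING", "BLOB": "STRING", "TEXT": "STRING",
    "MEDIUMBLOB": "STRING", "MEDIUMTEXT": "STRING",
    "LONGBLOB": "STRING", "LONGTEXT": "STRING",
}

# Binary source types whose STRING payload is Base64-encoded.
BASE64_ENCODED = {"BIT", "BINARY", "VARBINARY", "TINYBLOB", "BLOB",
                  "MEDIUMBLOB", "LONGBLOB"}
```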

    Data type mappings between Oracle tenants of OceanBase Database and DataHub instances

    | Oracle tenant of OceanBase Database | Default mapped-to data type in DataHub |
    |---|---|
    | CHAR | STRING |
    | NCHAR | STRING |
    | VARCHAR2 | STRING |
    | NVARCHAR2 | STRING |
    | CLOB | STRING |
    | BLOB | STRING (Base64-encoded) |
    | NUMBER | DECIMAL |
    | BINARY_FLOAT | DECIMAL |
    | BINARY_DOUBLE | DECIMAL |
    | DATE | STRING |
    | TIMESTAMP | STRING |
    | TIMESTAMP WITH TIME ZONE | STRING |
    | TIMESTAMP WITH LOCAL TIME ZONE | STRING |
    | INTERVAL YEAR TO MONTH | STRING |
    | INTERVAL DAY TO SECOND | STRING |
    | RAW | STRING (Base64-encoded) |

    Supplemental properties

    If you manually create a topic, add the following properties to the DataHub schema before you start a data synchronization project. If OMS automatically creates a topic and synchronizes the schema, OMS automatically adds the following properties.

    Notice

    The following table applies only to tuple topics.

    | Parameter | Type | Description |
    |---|---|---|
    | oms_timestamp | STRING | The time when the change was made. |
    | oms_table_name | STRING | The new table name of the source table. |
    | oms_database_name | STRING | The new database name of the source database. |
    | oms_sequence | STRING | The timestamp at which data is synchronized to the process memory. The value consists of a time component and five incremental digits. A clock rollback will result in data inconsistency. |
    | oms_record_type | STRING | The change type. Valid values: UPDATE, INSERT, and DELETE. |
    | oms_is_before | STRING | Indicates whether the record is the original data when the change type is UPDATE. Y indicates the original data. |
    | oms_is_after | STRING | Indicates whether the record is the modified data when the change type is UPDATE. Y indicates the modified data. |
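    With these marker columns, a downstream consumer can pair the before and after images of an UPDATE. The record dicts below are illustrative; real payloads depend on the chosen serialization format:

```python
def split_update_images(records):
    """Separate the before and after images of UPDATE records using
    the oms_is_before / oms_is_after marker columns."""
    before = [r for r in records
              if r["oms_record_type"] == "UPDATE" and r.get("oms_is_before") == "Y"]
    after = [r for r in records
             if r["oms_record_type"] == "UPDATE" and r.get("oms_is_after") == "Y"]
    return before, after

records = [
    {"oms_record_type": "UPDATE", "oms_is_before": "Y", "id": 1, "qty": 5},
    {"oms_record_type": "UPDATE", "oms_is_after": "Y", "id": 1, "qty": 7},
]
old_rows, new_rows = split_update_images(records)
# old_rows carries the original image (qty=5); new_rows the modified one (qty=7)
```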

    Procedure

    1. Create a data synchronization project.

      1. Log on to the OMS console.

      2. In the left-side navigation pane, click Data Synchronization.

      3. On the Data Synchronization page, click Create Synchronization Project in the upper-right corner.

    2. On the Select Source and Destination page, specify the following parameters.

      • Synchronization Project Name: We recommend a combination of digits and letters. The name must not contain spaces and cannot exceed 64 characters in length.
      • Labels: Click the field and select a tag from the drop-down list. You can click Manage Tags to create, modify, and delete tags. For more information, see Manage data synchronization projects by using tags.
      • Source: If you have created a physical OceanBase data source, select it from the drop-down list. Otherwise, click Add Data Source in the drop-down list to create one in the dialog box on the right side. For more information, see Create a physical data source of OceanBase Database.
        Notice: The source database cannot be an instance of OceanBase Database V4.0.0.
      • Destination: If you have created a DataHub data source, select it from the drop-down list. Otherwise, click Add Data Source in the drop-down list to create one in the dialog box on the right side. For more information about parameters, see Create a DataHub data source.
    3. Click Next. On the Select Synchronization Type page, select the synchronization type for the current data synchronization project.

      Valid values of Synchronization Type are Schema Synchronization, Full Synchronization, and Incremental Synchronization. Schema synchronization creates a topic. Options for Incremental Synchronization are DML Synchronization and DDL Synchronization.

      • Options for DML Synchronization are Insert, Delete, and Update, which are all selected by default. For more information, see DML filtering.

      • DDL Synchronization can be selected only for Blob topics. For more information, see Synchronize DDL operations.

    4. (Optional) Click Next.

      If the source database is an OceanBase database, you must configure the obconfig_url parameter, username, and password for incremental synchronization.

      If you have selected Incremental Synchronization without configuring the required parameters for the source database, the More About Data Sources dialog box appears to prompt you to configure the parameters. For more information, see Create a physical OceanBase data source.

      After you configure the parameters, click Test Connectivity. After the test succeeds, click Save.

    5. Click Next. On the Select Synchronization Objects page, select the type and range of topics to be synchronized.

      Available topic types are Tuple and Blob. Tuple topics contain records that are similar to data records in databases. Each record contains multiple columns. You can only write a block of binary data as a record to a Blob topic. The data are Base64-encoded for transmission. For more information, visit the documentation center of DataHub.

      Select the type of topics to be synchronized and perform the following steps:

      1. In the left-side pane, select the objects to be synchronized.

      2. Click >.

      3. Select a mapping method.

        • To synchronize a single table, select the mapping method as needed in the Map Object to Topic dialog box and click OK.

          If you do not select Schema Synchronization when you set the synchronization type and configuration, you can select only Existing Topics here. If you have selected Schema Synchronization when you set the synchronization type and configuration, you can select only one mapping method to create or select a topic.

          For example, if you selected Schema Synchronization, when you use both the Create Topic and Select Topic mapping methods or rename the topic, a precheck error will be returned due to option conflicts.

          • Create Topic: Enter the name of the new topic in the text box. The topic name can contain letters, digits, and underscores (_), must start with a letter, and must not exceed 128 characters in length.
          • Select Topic: OMS allows you to query DataHub topics. Click Select Topic, then find and select the topics to be synchronized from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears.
          • Batch Generate Topics: Topics are generated in batches in the format Topic_${Database Name}_${Table Name}.

          If you select Create Topic or Batch Generate Topics, after the schema migration succeeds, you can query the created topics on the DataHub side. By default, the number of data shards is 2 and the data expiration time is 7 days. These parameters cannot be modified. If the topics do not meet your business needs, you can create topics in the destination database as needed.

        • To synchronize multiple tables, click OK in the dialog box that appears.

          If you have selected a tuple topic and multiple tables without selecting Schema Synchronization, after you select a topic and click OK in the Map Object to Topic dialog box, multiple tables are displayed under the topic in the right pane, but only one table can be synchronized. Click Next. A prompt appears, indicating that only one-to-one mapping is supported between tuple topics and tables.

      4. Click OK.

      OMS allows you to import objects from text files, set row filters, sharding columns, and column filters for the target object, and remove a single object or all objects. Objects in the destination database are listed in the structure of Topic > Database > Table.

      • Import Objects
        1. In the list on the right, click Import Objects in the upper-right corner.
        2. In the dialog box that appears, click OK.
           Notice: This operation will overwrite previous selections. Proceed with caution.
        3. In the Import Synchronization Objects dialog box, import the objects to be synchronized.
           You can import CSV files to rename databases or tables and to set row filtering conditions. For more information, see Download and import the settings of synchronization objects.
        4. Click Validate.
        5. After the validation succeeds, click OK.
      • Change Topic
        When the topic type is set to Blob, you can change topics for objects in the destination database. For more information, see Change a topic.
      • Settings
        OMS allows you to configure row-based filtering, select sharding columns, and select the columns to be synchronized.
        1. In the list on the right, move the pointer over the target object.
        2. Click Settings.
        3. In the Settings dialog box, perform the following operations as needed:
           • In the Row Filters section, specify a standard SQL WHERE clause to filter data by row. For more information, see Use SQL conditions to filter data.
           • Select the sharding columns that you want to use from the Sharding Columns drop-down list. You can select multiple fields as sharding columns. This parameter is optional.
             Unless otherwise specified, select the primary keys as sharding columns. If the primary keys are not load-balanced, select unique, load-balanced fields as sharding columns to avoid potential performance issues. Sharding columns serve two purposes:
             • Load balancing: if the destination table supports concurrent writes, sending threads can be assigned based on the sharding columns.
             • Ordering: OMS ensures that messages with the same sharding-column values are received in order, preserving the sequence in which DML statements are executed on those rows.
        4. In the Select Columns section, select the columns to be synchronized. For more information, see Column filtering.
        5. Click OK.
      • Remove/Remove All
        During data mapping, OMS allows you to remove one or more of the selected synchronization objects.
        • To remove a single synchronization object: in the list on the right of the selection section, move the pointer over the target object and click Remove.
        • To remove all synchronization objects: in the list on the right of the selection section, click Remove All in the upper-right corner, then click OK in the dialog box that appears.
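      As noted in this step, records in a Blob topic are transmitted as Base64-encoded blocks of binary data, so a consumer must decode each record back to raw bytes. The payload below is illustrative, not a real OMS message:

```python
import base64

# A Blob-topic record arrives Base64-encoded; decoding recovers the
# original binary block.
encoded = base64.b64encode(b'{"op": "INSERT", "table": "t1"}')

raw = base64.b64decode(encoded)
print(raw)  # b'{"op": "INSERT", "table": "t1"}'
```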
    6. Click Next. On the Synchronization Options page, specify the following parameters.

      • Incremental Synchronization Start Timestamp
        • If you selected Full Synchronization when you set the synchronization type, the value defaults to the project start time and cannot be modified.
        • If you did not select Full Synchronization, specify a point in time after which the data will be synchronized. The default value is the current system time. You can select a point in time or enter a timestamp.
          Notice: You can select the current time or a point in time earlier than the current time. This parameter is closely related to the retention period of archived logs. Generally, you can start data synchronization from the current timestamp.
      • Serialization Method: The message format for synchronizing data to a DataHub instance. Valid values: Default, Canal, Dataworks (version 2.0 supported), SharePlex, DefaultExtendColumnType, and Debezium. For more information, see Data formats.
        Notice:
        • This parameter is available only when the topic type is set to Blob on the Select Synchronization Type page.
        • At present, MySQL tenants of OceanBase Database support only Debezium.
      • Enable Intra-Transaction Sequence: Specifies whether to maintain order within a transaction. If this feature is enabled, OMS marks each statement of a transaction with a sequence number before sending it to the downstream node.
        Notice: This parameter is valid only for the SharePlex format and lets you obtain the sequence numbers of the DML statements that form a transaction. For example, if a transaction contains 10 DML statements numbered 1 to 10, OMS delivers the data to the destination database in that order.
      • Partitioning Rule: The rule for synchronizing data from the source database to a DataHub topic. Valid values: Hash and Table. We recommend that you select Table to ensure that downstream applications consume DDL and DML messages consistently.
        • Hash: OMS uses a hash algorithm to select the shard of a DataHub topic based on the value of the primary key or sharding column.
        • Table: OMS delivers all data in a table to the same partition, using the table name as the hash key.
          Notice: If you select DDL Synchronization on the Select Synchronization Type page, the partitioning rule can be set only to Table.
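      The difference between the Hash and Table partitioning rules can be sketched as follows. The hashing scheme here is illustrative and is not OMS's actual shard-selection algorithm:

```python
import hashlib

def pick_shard(rule: str, table: str, key, shard_count: int) -> int:
    """Illustrative shard selection. 'Hash' spreads the rows of one
    table across shards by key value; 'Table' routes every row of a
    table to the same shard, keeping that table's DDL and DML in a
    single ordered stream."""
    seed = key if rule == "Hash" else table
    digest = hashlib.md5(str(seed).encode()).hexdigest()
    return int(digest, 16) % shard_count

# With the Table rule, all rows of t1 land on one shard:
table_shards = {pick_shard("Table", "t1", k, 4) for k in range(100)}
# len(table_shards) == 1
```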
    7. Click Precheck.

      During the precheck, OMS checks the column names, the column types, and whether columns are nullable. OMS does not check value lengths or default values. If an error is returned during the precheck:

      • You can identify and troubleshoot the issue and then perform the precheck again.

      • You can click Skip in the Actions column of the precheck item with the error. A dialog box will be displayed, prompting the impact caused if you skip this error. If you want to continue, click OK in the dialog box.

    8. Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data synchronization project. You can start the project later as needed.

      OMS allows you to modify the synchronization objects when the data synchronization project is running. For more information, see View and modify synchronization objects. After a data synchronization project is started, the synchronization objects will be executed based on the selected synchronization type. For more information, see the "View synchronization details" section in the View details of a data synchronization project topic.

      If data access fails due to a network failure or the slow startup of processes, go to the project list or the project details page and click Restore.

    Previous topic: Synchronize data from an OceanBase database to a RocketMQ instance

    Next topic: Synchronize data from an ODP logical table to a physical table in a MySQL tenant of OceanBase Database