
OceanBase Migration Service

V3.4.0, Enterprise Edition


Create a project to synchronize data from an OceanBase database to a DataHub instance

Last Updated: 2026-04-14

What is on this page:

  • Limits
  • Supported DDL operations for synchronization
  • Data type mappings
  • Data type mappings between MySQL tenants of OceanBase Database and DataHub instances
  • Data type mapping between an Oracle tenant of OceanBase Database and a DataHub instance
  • Supplemental properties
  • Procedure

This topic describes how to synchronize data from a MySQL or Oracle tenant of OceanBase Database to a DataHub instance.

Limits

  • In the full synchronization scenario, tables without a primary key cannot be synchronized.

  • The DDL operations for synchronization apply only to Blob topics.

  • During data synchronization, OMS allows you to delete a table and then create a new one. For example, you can perform the drop table a operation and then the create table a_tmp operation without affecting the project. However, OMS does not allow you to create a table by renaming an existing table: after you perform the rename table a to a_tmp operation, the project cannot proceed.

  • When data transfer is resumed on a link, some data within the last minute may be duplicated in the DataHub instance, and deduplication is required in downstream systems.

  • OMS supports synchronization of data of the UTF8 and GBK character sets.

  • The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.
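Because a resumed link can re-deliver up to a minute of data (see the limits above), downstream consumers need to deduplicate. A minimal Python sketch of one possible approach, assuming each record carries the oms_database_name, oms_table_name, and oms_sequence properties that OMS attaches to tuple-topic records (described under Supplemental properties below); dedupe is a hypothetical helper, not part of OMS:

```python
def dedupe(records, seen=None):
    """Drop records whose (database, table, sequence) key was already seen.

    `seen` can be carried across calls so that re-delivered records from a
    resumed link are filtered out on the consumer side.
    """
    if seen is None:
        seen = set()
    out = []
    for rec in records:
        key = (rec["oms_database_name"], rec["oms_table_name"], rec["oms_sequence"])
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out
```

In practice the `seen` set would need to be bounded (for example, expiring keys older than the one-minute re-delivery window) rather than growing without limit.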

DataHub has the following limits:

  • DataHub limits the size of a message based on the cloud environment, usually to 1 MB.

  • DataHub sends messages in batches, and each batch can be at most 4 MB in size. If you need a single message to be sent as soon as it meets the sending conditions, you can modify the batch.size parameter at the connector sink. By default, 20 messages are sent at a time within one second.

  • For more information about the limits and naming conventions of DataHub, see Limits of DataHub.
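The per-message and per-batch size limits above can be enforced on the client side before records reach DataHub. A hedged Python sketch (split_into_batches is a hypothetical helper; the 1 MB and 4 MB figures are the typical limits quoted above and may differ per cloud environment):

```python
import json

MAX_MESSAGE_BYTES = 1 * 1024 * 1024   # per-message limit (commonly 1 MB)
MAX_BATCH_BYTES = 4 * 1024 * 1024     # per-batch limit (4 MB)

def split_into_batches(records):
    """Group serialized records into batches that respect both limits.

    Raises ValueError for any single record that already exceeds the
    per-message limit, since no batch could carry it.
    """
    batches, current, current_size = [], [], 0
    for rec in records:
        payload = json.dumps(rec).encode("utf-8")
        if len(payload) > MAX_MESSAGE_BYTES:
            raise ValueError("record exceeds the per-message size limit")
        if current and current_size + len(payload) > MAX_BATCH_BYTES:
            batches.append(current)
            current, current_size = [], 0
        current.append(payload)
        current_size += len(payload)
    if current:
        batches.append(current)
    return batches
```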

When you synchronize incremental data from an OceanBase database to a DataHub instance, the MySQL schema is synchronized to the DataHub schema and then initialized. The following table lists data types supported by DataHub.

Notice

The following table applies only to tuple topics.

| Type | Description | Value range |
| --- | --- | --- |
| BIGINT | An 8-byte signed integer. | -9223372036854775807 to 9223372036854775807 |
| DOUBLE | An 8-byte double-precision floating-point number. | -1.0 × 10^308 to 1.0 × 10^308 |
| BOOLEAN | The Boolean type. | True/False, true/false, or 0/1 |
| TIMESTAMP | The timestamp type. It is accurate to microseconds. | N/A |
| STRING | A string that supports only UTF-8 encoding. | A single STRING column supports a maximum of 2 MB. |
| INTEGER | A 4-byte integer. | -2147483648 to 2147483647 |
| FLOAT | A 4-byte single-precision floating-point number. | -3.40292347 × 10^38 to 3.40292347 × 10^38 |
| DECIMAL | The numeric type. | -10^38 + 1 to 10^38 - 1 |

Supported DDL operations for synchronization

Notice

The DDL operations for synchronization apply only to Blob topics.

  • ALTER TABLE

  • CREATE INDEX

  • DROP INDEX

  • TRUNCATE

Data type mappings

At present, a project that synchronizes data to a DataHub instance supports only the following data types: INTEGER, BIGINT, TIMESTAMP, FLOAT, DOUBLE, DECIMAL, STRING, and BOOLEAN.

  • If you create a topic of another type when you set topic mapping, the data synchronization will fail.

  • The following tables describe the default mapping rules, which are the most appropriate choices. If you change a mapping, an error may occur.

Data type mappings between MySQL tenants of OceanBase Database and DataHub instances

| MySQL tenant of OceanBase Database | Default mapped-to data type in DataHub |
| --- | --- |
| BIT | STRING (Base64-encoded) |
| CHAR | STRING |
| BINARY | STRING (Base64-encoded) |
| VARBINARY | STRING (Base64-encoded) |
| INT | BIGINT |
| TINYINT | BIGINT |
| SMALLINT | BIGINT |
| MEDIUMINT | BIGINT |
| BIGINT | DECIMAL (used because the maximum unsigned value exceeds the maximum LONG value in Java) |
| FLOAT | DECIMAL |
| DOUBLE | DECIMAL |
| DECIMAL | DECIMAL |
| DATE | STRING |
| TIME | STRING |
| YEAR | BIGINT |
| DATETIME | STRING |
| TIMESTAMP | TIMESTAMP (accurate to milliseconds) |
| VARCHAR | STRING |
| TINYBLOB | STRING (Base64-encoded) |
| TINYTEXT | STRING |
| BLOB | STRING (Base64-encoded) |
| TEXT | STRING |
| MEDIUMBLOB | STRING (Base64-encoded) |
| MEDIUMTEXT | STRING |
| LONGBLOB | STRING (Base64-encoded) |
| LONGTEXT | STRING |
| ENUM | STRING |
| SET | STRING |
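The default MySQL-tenant mappings above can be captured as a simple lookup, which is handy when validating a manually created tuple-topic schema before starting a project. A sketch (datahub_type and MYSQL_TO_DATAHUB are hypothetical names, not part of OMS; Base64 annotations are omitted, only the DataHub type name is kept):

```python
# Default MySQL-tenant -> DataHub type mappings, transcribed from the
# table above.
MYSQL_TO_DATAHUB = {
    "BIT": "STRING", "CHAR": "STRING", "BINARY": "STRING", "VARBINARY": "STRING",
    "INT": "BIGINT", "TINYINT": "BIGINT", "SMALLINT": "BIGINT", "MEDIUMINT": "BIGINT",
    "BIGINT": "DECIMAL", "FLOAT": "DECIMAL", "DOUBLE": "DECIMAL", "DECIMAL": "DECIMAL",
    "DATE": "STRING", "TIME": "STRING", "YEAR": "BIGINT", "DATETIME": "STRING",
    "TIMESTAMP": "TIMESTAMP", "VARCHAR": "STRING",
    "TINYBLOB": "STRING", "TINYTEXT": "STRING", "BLOB": "STRING", "TEXT": "STRING",
    "MEDIUMBLOB": "STRING", "MEDIUMTEXT": "STRING", "LONGBLOB": "STRING",
    "LONGTEXT": "STRING", "ENUM": "STRING", "SET": "STRING",
}

def datahub_type(mysql_type: str) -> str:
    """Return the default DataHub type for a MySQL column type."""
    return MYSQL_TO_DATAHUB[mysql_type.upper()]
```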

Data type mapping between an Oracle tenant of OceanBase Database and a DataHub instance

| Oracle tenant of OceanBase Database | Default mapped-to data type in DataHub |
| --- | --- |
| CHAR | STRING |
| NCHAR | STRING |
| VARCHAR2 | STRING |
| NVARCHAR2 | STRING |
| CLOB | STRING |
| NCLOB | STRING |
| BLOB | STRING (Base64-encoded) |
| NUMBER | DECIMAL |
| BINARY_FLOAT | DECIMAL |
| BINARY_DOUBLE | DECIMAL |
| DATE | STRING |
| TIMESTAMP | STRING |
| TIMESTAMP WITH TIME ZONE | STRING |
| TIMESTAMP WITH LOCAL TIME ZONE | STRING |
| INTERVAL YEAR TO MONTH | STRING |
| INTERVAL DAY TO SECOND | STRING |
| LONG | STRING (Base64-encoded) |
| RAW | STRING (Base64-encoded) |
| LONG RAW | STRING (Base64-encoded) |
| ROWID | STRING |
| UROWID | STRING |
| FLOAT | DECIMAL |

Supplemental properties

If you manually create a topic, add the following properties to the DataHub schema before you start a data synchronization project. If OMS automatically creates a topic and synchronizes the schema, OMS automatically adds the following properties.

Notice

The following table applies only to tuple topics.

| Property | Type | Description |
| --- | --- | --- |
| oms_timestamp | STRING | The time when the change was made. |
| oms_table_name | STRING | The name of the source table after mapping. |
| oms_database_name | STRING | The name of the source database after mapping. |
| oms_sequence | STRING | The timestamp at which data is synchronized to the process memory. The value consists of the time and five incremental digits. A clock rollback will result in data inconsistency. |
| oms_record_type | STRING | The change type. Valid values: UPDATE, INSERT, and DELETE. |
| oms_is_before | STRING | Indicates whether the data is the original data when the change type is UPDATE. Y indicates the original data. |
| oms_is_after | STRING | Indicates whether the data is the modified data when the change type is UPDATE. Y indicates the modified data. |
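These properties let a consumer replay OMS change records into a local image of a table. A minimal Python sketch (apply_change and its arguments are hypothetical; a real consumer would also need ordering and deduplication, and UPDATE records arrive as an original image plus a modified image per the flags above):

```python
def apply_change(state, props, payload, key):
    """Replay one change record into an in-memory table image.

    `state` maps primary-key value -> row dict, `props` holds the
    supplemental properties from the table above, `payload` is the row
    data, and `key` names its primary-key column. For UPDATE, only the
    modified image (oms_is_after == "Y") is applied; the original image
    (oms_is_before == "Y") is skipped.
    """
    rtype = props["oms_record_type"]
    if rtype == "INSERT":
        state[payload[key]] = payload
    elif rtype == "DELETE":
        state.pop(payload[key], None)
    elif rtype == "UPDATE":
        if props.get("oms_is_after") == "Y":
            state[payload[key]] = payload
    return state
```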

Procedure

  1. Create a data synchronization project.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, click Data Synchronization.

    3. On the Data Synchronization page, click Create Synchronization Project in the upper-right corner.

  2. On the Select Source and Destination page, configure the parameters.

    | Parameter | Description |
    | --- | --- |
    | Synchronization Project Name | We recommend a combination of Chinese characters, digits, and letters. The name must not contain spaces and cannot exceed 64 characters in length. |
    | Label | Click the field and select the target tag from the drop-down list. You can click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data synchronization projects. |
    | Source | If you have created a physical OceanBase data source, select it from the drop-down list. Otherwise, click Create Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create a physical OceanBase data source. |
    | Destination | If you have created a DataHub data source, select it from the drop-down list. Otherwise, click Create Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create a DataHub data source. |
  3. Click Next. On the Select Synchronization Type page, specify Synchronization Type and Configuration for the current data synchronization project.

    Options for Synchronization Type and Configuration are Schema Synchronization, Full Synchronization, and Incremental Synchronization. Schema synchronization creates a topic. Options for Incremental Synchronization are DML Synchronization and DDL Synchronization.

    • Options for DML Synchronization are Insert, Delete, and Update. By default, all of them are selected.

    • DDL Synchronization can be selected only for Blob topics.

  4. (Optional) Click Next.

    If the source database is an OceanBase database, you must configure the obconfig_url parameter, username, and password for incremental synchronization.

    If you have selected Incremental Synchronization without configuring the required parameters for the source database, the More About Data Sources dialog box appears to prompt you to configure the parameters. For more information, see Create a physical OceanBase data source.

    After you configure the parameters, click Test Connectivity. After the test succeeds, click Save.

  5. Click Next. On the Select Synchronization Objects page, select the type and range of topics to be synchronized.

    Available topic types are Tuple and Blob. Tuple topics contain records that are similar to data records in databases. Each record contains multiple columns. You can only write a block of binary data as a record to a Blob topic. The data are Base64 encoded for transmission. For more information, visit the documentation center of DataHub.
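The Base64 handling for Blob topics described above can be sketched in Python (encode_blob_record and decode_blob_record are hypothetical helper names, not OMS APIs):

```python
import base64

def encode_blob_record(raw: bytes) -> str:
    """Base64-encode one binary record for a Blob topic, since Blob data
    is Base64-encoded for transmission."""
    return base64.b64encode(raw).decode("ascii")

def decode_blob_record(encoded: str) -> bytes:
    """Reverse the encoding on the consumer side."""
    return base64.b64decode(encoded)
```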

    Select the type of topics to be synchronized and perform the following steps:

    1. In the left-side pane, select the objects to be synchronized.

    2. Click >.

    3. Select a mapping method.

      • To synchronize a single table, select the mapping method as needed in the Map Object to Topic dialog box and click OK.

        If you do not select Schema Synchronization when you set the synchronization type and configuration, you can select only Existing Topics here. If you have selected Schema Synchronization when you set the synchronization type and configuration, you can select only one mapping method to create or select a topic.

        For example, if you have selected Schema Synchronization, when you use both the Create Topic and Select Topic mapping methods or rename the topic, a precheck error will be returned due to option conflicts.

        | Parameter | Description |
        | --- | --- |
        | Create Topic | Enter the name of the new topic in the text box. The topic name can contain letters, digits, and underscores (_), must start with a letter, and must not exceed 128 characters in length. |
        | Select Topic | OMS allows you to query DataHub topics. Click Select Topic, and then find and select the topics to be synchronized from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears. |
        | Batch Generate Topics | Topics are generated in batches in the format Topic_${Database Name}_${Table Name}. |

        If you select Create Topic or Batch Generate Topics, you can query the newly created topics in the DataHub instance after schema synchronization is completed. By default, each DataHub topic has two partitions and the data expiration period is 7 days, which cannot be modified.

      • To synchronize multiple tables, click OK in the dialog box that appears.

        If you have selected a tuple topic and multiple tables without selecting Schema Synchronization, after you select a topic and click OK in the Map Object to Topic dialog box, multiple tables are displayed under the topic in the right pane, but only one table can be synchronized. Click Next. A prompt appears, indicating that only one-to-one mapping is supported between tuple topics and tables.

    4. Click OK.

    When you synchronize data from an OceanBase database to a DataHub instance, OMS allows you to import objects from text and perform the following operations on the objects in the destination database: set row filtering conditions, sharding columns, and column filtering conditions, and remove a single object or all objects. Objects in the destination database are listed in the structure of Topic > Database > Table.

    Operation Steps
    Import Objects
    1. In the list on the right, click Import Objects in the upper-right corner.
    2. In the dialog box that appears, click OK.
      Notice
      This operation will overwrite previous selections. Proceed with caution.
    3. In the Import Synchronization Objects dialog box, import the objects to be synchronized.
      You can import CSV files to rename databases/tables and set row filtering conditions. For more information, see Download and import the settings of synchronization objects.
    4. Click Validate.
    5. After the validation succeeds, click OK.
    Change Topic
    When the topic type is set to Blob, you can change topics for objects in the destination database.
    1. In the list on the right, move the pointer over the object that you want to change.
    2. Click Change Topic.
    3. In the Map Object to Topic dialog box, change the topics to be synchronized.
    4. Click OK.
      Notice
      The selected topics and tables will be merged into the selected topic. Proceed with caution.
    Settings
    In OMS, you can use a WHERE clause to filter rows, and select sharding columns and the columns to synchronize.
    1. In the list on the right, move the pointer over the object that you want to change.
    2. Click Settings.
    3. In the Settings dialog box, you can perform the following operations:
      • In the Row Filters section, specify a standard SQL WHERE clause to filter data by row. The setting takes effect for both full synchronization and incremental synchronization.
        Only the data meeting the WHERE condition is synchronized to the destination data source, thereby filtering data by row.
        If the statement contains a reserved SQL keyword, escape the keyword with backticks (`).
      • Select the sharding columns that you want to use from the Sharding Columns drop-down list. You can select multiple fields as sharding columns. This parameter is optional.
        In general, select the primary key columns as sharding columns. If load is not evenly distributed across the primary key values, select fields that uniquely identify rows and whose load is evenly distributed as sharding columns.
        Ensure that the selected sharding columns are correct. An incorrect sharding column can cause data synchronization to fail. Sharding columns serve the following purposes:
        • Load balancing: If the destination table supports concurrent writes, the sharding columns determine which sending thread handles each message.
        • Orderliness: OMS ensures that messages with the same sharding column values are received in order; that is, DML statements on such rows are delivered in the order in which they were executed.
      • In the Select Columns section, select the columns to be synchronized. If you select all or no columns, OMS synchronizes all columns.
    4. Click OK.
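    As a sketch of the row-filter rule above, the following hypothetical Python helper escapes reserved keywords with backticks when building a WHERE condition (the keyword list and helper names are illustrative, not part of OMS):

```python
# Small sample of reserved SQL keywords; a real list would be much longer.
RESERVED = {"order", "group", "select", "table", "key"}

def escape_identifier(name: str) -> str:
    """Wrap the identifier in backticks if it is a reserved SQL keyword."""
    return f"`{name}`" if name.lower() in RESERVED else name

def row_filter(column: str, op: str, value: str) -> str:
    """Build a one-condition row filter (the body of a WHERE clause)."""
    return f"{escape_identifier(column)} {op} {value}"

print(row_filter("status", "=", "'PAID'"))  # status = 'PAID'
print(row_filter("order", ">", "100"))      # `order` > 100
```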
    Remove/Remove All
    OMS allows you to remove one or more objects from the destination database during data mapping.
    • Remove a single synchronization object
      In the list on the right of the selection section, hover the pointer over the target object, and click Remove. The synchronization object is removed.
    • Remove all synchronization objects
      In the list on the right of the selection section, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
  6. Click Next. On the Synchronization Options page, specify the following parameters.

    Parameter Description
    Incremental Synchronization Start Timestamp
    • If you selected Full Synchronization when you set the synchronization type and configuration, the value here defaults to the project start time and cannot be modified.
    • If you did not select Full Synchronization, specify a point in time after which data is to be synchronized. The default value is the current system time. You can select a point in time or enter a timestamp.
      Notice
      You can select the current time or a point in time earlier than the current time.
      This parameter is closely related to the retention period of archived logs. Generally, you can start data synchronization from the current timestamp.
    Serialization Method
    The message format for synchronizing data to a DataHub instance. Valid values: Default, Canal, Dataworks (version 2.0 supported), SharePlex, and DefaultExtendColumnType. For more information, see Data formats.
    Notice
    This parameter is available only when the topic type is set to Blob on the Select Synchronization Type page.
    Enable Intra-Transaction Sequence
    Specifies whether to maintain order within a transaction. If this feature is enabled, OMS marks sequence numbers in a transaction before it is sent to a downstream node.
    Notice
    This parameter is valid only for the SharePlex format and is intended for you to obtain the sequence numbers of the DML statements that form a transaction.
    For example, if a transaction contains 10 DML statements numbered from 1 to 10, OMS will deliver these statements to the destination database in the same order.
    If this option is enabled, the system performance may be affected. Choose whether to enable it based on the business characteristics.
    Partitioning Rules
    The rule for synchronizing data from the source database to a DataHub topic. Valid values: Hash and Table. We recommend that you select Table to ensure consistency between DDL and DML consumption by downstream applications.
    • Hash: OMS uses a hash algorithm to select the shard of a DataHub topic based on the value of the primary key or sharding column.
    • Table: OMS delivers all data in a table to the same partition, using the table name as the hash key.
      Notice
      If you select DDL Synchronization on the Select Synchronization Type page, the partitioning rule can be set only to Table.
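    The difference between the two partitioning rules can be illustrated with a small Python sketch; this is a conceptual model (the shard count and hash function are assumptions, not OMS internals):

```python
import zlib

NUM_SHARDS = 4  # assumed shard count for the example topic

def shard_by_hash(sharding_key: str) -> int:
    """Hash rule: the shard is chosen from the primary-key/sharding-column
    value, so rows with the same key always land in the same shard."""
    return zlib.crc32(sharding_key.encode()) % NUM_SHARDS

def shard_by_table(table_name: str) -> int:
    """Table rule: all rows (and DDL) of a table share one shard keyed by
    the table name, keeping DDL and DML consumption consistent downstream."""
    return zlib.crc32(table_name.encode()) % NUM_SHARDS

# Two rows of the same table may hash to different shards...
print(shard_by_hash("pk=1"), shard_by_hash("pk=2"))
# ...but under the Table rule the whole table always maps to one shard.
print(shard_by_table("orders"), shard_by_table("orders"))
```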
  7. Click Precheck.

    During the precheck, OMS checks column names and column types, and checks whether values are null. It does not check value lengths or default values. If an error is returned:

    • You can troubleshoot the error and run the precheck again.

    • You can also click Skip in the Actions column of the precheck item that returns the error. Then, a dialog box appears, indicating the impact that may be caused if you choose to skip this check item. If you want to continue, click OK in the dialog box.

  8. Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data synchronization project. You can start the project later as needed.

    OMS allows you to modify synchronization objects when a data synchronization project is running. For more information, see View and modify synchronization objects. After a data synchronization project is started, the synchronization objects will be executed based on the selected synchronization type. For more information, see the "View synchronization details" section in the View details of a data synchronization project topic.

    If data access fails due to a network failure or the slow startup of processes, go to the project list or the project details page and click Restore.
