OceanBase Migration Service

V4.2.3 Community Edition

  • OMS Documentation
  • OMS Community Edition Introduction
    • What is OMS Community Edition?
    • Terms
    • OMS Community Edition HA
    • Architecture
      • Overview
      • Hierarchical functional system
      • Basic components
    • Limitations
  • Quick Start
    • Data migration process
    • Data synchronization process
  • Deploy OMS Community Edition
    • Deployment modes
    • System and network requirements
    • Memory and disk requirements
    • Prepare the environment
    • Deploy OMS Community Edition on a single node
    • Deploy OMS Community Edition on multiple nodes in a single region
    • Deploy OMS Community Edition on multiple nodes in multiple regions
    • Integrate the OIDC protocol into OMS Community Edition to implement SSO
    • Scale out OMS Community Edition
    • Check the deployment
    • Deploy a time-series database (Optional)
  • OMS Community Edition console
    • Log on to the console of OMS Community Edition
    • Overview
    • User center
      • Configure user information
      • Change your logon password
      • Log off
  • Data migration
    • Overview
    • Migrate data from a MySQL database to OceanBase Database Community Edition
    • Migrate data from OceanBase Database Community Edition to a MySQL database
    • Migrate data from HBase to OBKV
    • Migrate data between instances of OceanBase Database Community Edition
    • Migrate data in active-active disaster recovery scenarios
    • Migrate data from a TiDB database to OceanBase Database Community Edition
    • Migrate data from a PostgreSQL database to OceanBase Database Community Edition
    • Manage data migration projects
      • View the details of a data migration project
      • Change the name of a data migration project
      • View and modify migration objects
      • Manage computing platforms
      • Use tags to manage data migration projects
      • Perform batch operations on data migration projects
      • Download and import settings of migration objects
      • Start and pause a data migration project
      • Release and delete a data migration project
    • Features
      • DML filtering
      • DDL synchronization
      • Configure matching rules for migration objects
      • Wildcard rules
      • Rename a database table
      • Use SQL conditions to filter data
      • Create and update a heartbeat table
      • Schema migration mechanisms
      • Schema migration operations
      • Set an incremental synchronization timestamp
    • Supported DDL operations in incremental migration and limits
      • DDL synchronization from MySQL database to OceanBase Community Edition
        • Overview of DDL synchronization from a MySQL database to a MySQL tenant of OceanBase Database
        • CREATE TABLE
          • Create a table
          • Create a column
          • Create an index or a constraint
          • Create partitions
        • Data type conversion
        • ALTER TABLE
          • Modify a table
          • Operations on columns
          • Operations on constraints and indexes
          • Operations on partitions
        • TRUNCATE TABLE
        • RENAME TABLE
        • DROP TABLE
        • CREATE INDEX
        • DROP INDEX
        • DDL incompatibilities between MySQL database and OceanBase Community Edition
          • Overview
          • Incompatibilities of the CREATE TABLE statement
            • Incompatibilities of CREATE TABLE
            • Column types that are supported to create indexes or constraints
          • Incompatibilities of the ALTER TABLE statement
            • Incompatibilities of ALTER TABLE
            • Change the type of a constrained column
            • Change the type of an unconstrained column
            • Change the length of a constrained column
            • Change the length of an unconstrained column
            • Delete a constrained column
          • Incompatibilities of DROP INDEX operations
      • Supported DDL operations in incremental migration from OceanBase Community Edition to a MySQL database and limits
      • Supported DDL operations in incremental migration between MySQL tenants of OceanBase Database
  • Data synchronization
    • Data synchronization overview
    • Create a project to synchronize data from OceanBase Database Community Edition to a Kafka instance
    • Create a project to synchronize data from OceanBase Database Community Edition to a RocketMQ instance
    • Manage data synchronization projects
      • View details of a data synchronization project
      • Change the name of a data synchronization project
      • View and modify synchronization objects
      • Use tags to manage data synchronization projects
      • Perform batch operations on data synchronization projects
      • Download and import the settings of synchronization objects
      • Start and pause a data synchronization project
      • Release and delete a data synchronization project
    • Features
      • DML filtering
      • DDL synchronization
      • Rename a topic
      • Use SQL conditions to filter data
      • Column filtering
      • Data formats
  • Create and manage data sources
    • Create data sources
      • Create an OceanBase-CE data source
      • Create a MySQL data source
      • Create a TiDB data source
      • Create a Kafka data source
      • Create a RocketMQ data source
      • Create a PostgreSQL data source
      • Create an HBase data source
    • Manage data sources
      • View data source information
      • Copy a data source
      • Edit a data source
      • Delete a data source
    • Create a database user
    • User privileges
    • Enable binlogs for the MySQL database
  • OPS & Monitoring
    • O&M overview
    • Go to the overview page
    • Server
      • View server information
      • Update the quota
      • View server logs
    • Components
      • Store
        • Create a store
        • View details of a store
        • Update the configurations of a store
        • Start and pause a store
        • Delete a store
      • Incr-Sync
        • View details of an Incr-Sync component
        • Start and pause an Incr-Sync component
        • Migrate an Incr-Sync component
        • Update the configurations of an Incr-Sync component
        • Batch O&M
        • Delete an Incr-Sync component
      • Full-Import
        • View details of a Full-Import component
        • Pause a Full-Import component
        • Rerun and resume a Full-Import component
        • Update the configurations of a Full-Import component
        • Delete a Full-Import component
      • Full-Verification
        • View details of a Full-Verification component
        • Pause a Full-Verification component
        • Rerun and resume a Full-Verification component
        • Update the configurations of a Full-Verification component
        • Delete a Full-Verification component
    • O&M Task
      • View O&M tasks
      • Skip a task or subtask
      • Retry a task or subtask
  • System management
    • Permission Management
      • Overview
      • Manage users
      • Manage departments
    • Alert center
      • View project alerts
      • View system alerts
      • Manage alert settings
    • Associate with OCP
    • System parameters
      • Modify system parameters
      • Modify HA configurations
  • OMS Community Edition O&M
    • Manage OMS services
    • OMS logs
    • Component O&M
      • O&M operations for the Supervisor component
      • CLI-based O&M for the Connector component
      • O&M operations for the Store component
    • Component tuning
      • Incr-Sync or Full-Import tuning
    • Component parameters
      • Coordinator
      • Condition
      • Source Plugin
        • Overview
        • StoreSource
        • DataFlowSource
        • LogProxySource
        • KafkaSource (TiDB)
        • HBaseSource
      • Sink Plugin
        • Overview
        • JDBC-Sink
        • KafkaSink
        • DatahubSink
        • RocketMQSink
        • HBaseSink
      • Store parameters
        • Parameters of a MySQL store
        • Parameters of an OceanBase store
      • Parameters of the CM component
      • Parameters of the Supervisor component
      • Full-Verification parameters
    • Set throttling
  • Reference Guide
    • API Reference
      • Overview
      • CreateProject
      • StartProject
      • StopProject
      • ResumeProject
      • ReleaseProject
      • DeleteProject
      • ListProjects
      • DescribeProject
      • DescribeProjectSteps
      • DescribeProjectStepMetric
      • DescribeProjectProgress
      • DescribeProjectComponents
      • ListProjectFullVerifyResult
      • StartProjectsByLabel
      • StopProjectsByLabel
      • CreateMysqlDataSource
      • CreateOceanBaseDataSource
      • ListDataSource
      • CreateLabel
      • ListAllLabels
      • ListFullVerifyInconsistenciesResult
      • ListFullVerifyCorrectionsResult
      • UpdateStore
      • UpdateFullImport
      • UpdateIncrSync
      • UpdateFullVerification
    • OMS error codes
    • Alert Reference
      • oms_host_down
      • oms_host_down_migrate_resource
      • oms_host_threshold
      • oms_migration_failed
      • oms_migration_delay
      • oms_sync_failed
      • oms_sync_status_inconsistent
      • oms_sync_delay
    • Telemetry parameters
  • Upgrade Guide
    • Overview
    • Upgrade OMS Community Edition in single-node deployment mode
    • Upgrade OMS Community Edition in multi-node deployment mode
    • FAQ
  • FAQ
    • General O&M
      • How do I modify the resource quotas of an OMS container?
      • Clear files in the Store component
      • How do I troubleshoot the OMS server down issue?
      • Deploy InfluxDB for OMS
      • Increase the disk space of the OMS host
    • Project diagnostics
      • What do I do when a store does not have data of the timestamp requested by the downstream?
      • What do I do when OceanBase Store failed to access an OceanBase cluster through RPC?
    • OPS & monitoring
      • What are the alert rules?
    • Data synchronization
      • FAQ about synchronization to a message queue
        • What are the strategies for ensuring the message order in incremental data synchronization to Kafka
    • Data migration
      • Full migration
        • How do I query the ID of a checker?
        • How do I query log files of the Checker component of OMS?
        • How do I query the verification result files of the Checker component of OMS?
        • What do I do if the destination table does not exist?
        • What can I do when the full migration failed due to LOB fields?
        • What do I do if garbled characters cannot be written into OceanBase Database V3.1.2?
      • Incremental synchronization
        • How do I skip DDL statements?
        • How do I update whitelists and blacklists?
        • What are the application scope and limits of ETL?
    • Installation and deployment
      • How do I upgrade Store?
      • How do I upgrade CDC?
      • What do I do when the "Failed to fetch" error is reported?
      • Change port numbers for components
      • Switch to the standby database

Create a project to synchronize data from OceanBase Database Community Edition to a Kafka instance

Last Updated: 2024-04-18 03:40:56

What is on this page
  • Prerequisites
  • Limitations
  • Considerations
  • Supported DDL operations
  • Procedure

Kafka is a widely used, high-performance distributed event streaming platform. OceanBase Migration Service (OMS) Community Edition supports real-time data synchronization between OceanBase Database Community Edition and a self-managed Kafka instance, which extends your message processing capabilities. This makes the data transmission feature useful in business scenarios such as real-time data warehouse building, data querying, and report distribution.

By synchronizing data to message queue products, OMS Community Edition lets you extend your business into big data scenarios such as data aggregation and monitoring, streaming data processing, and online/offline analysis. For more information about the data formats used for OceanBase Database Community Edition, see Data formats.
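
The destination side of such a project is consumed like any other Kafka topic. As a rough illustration (not part of the OMS procedure), the following sketch uses the third-party kafka-python package, a placeholder broker address, and a hypothetical topic name, and assumes a JSON-based serialization method such as Default; the exact fields in each message depend on the format you choose (see Data formats).

    # Minimal consumer sketch for a topic written by an OMS data synchronization project.
    # Assumptions: the kafka-python package is installed, "kafka-host:9092" and the topic
    # name are placeholders, and the chosen serialization method produces JSON payloads.
    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "Topic_mydb_mytable",                    # hypothetical topic name
        bootstrap_servers=["kafka-host:9092"],   # placeholder broker address
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        record = message.value
        # Inspect the payload; the exact structure depends on the serialization method.
        print(message.topic, message.partition, message.offset, record)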

Prerequisites

You have created a dedicated database user for data synchronization in the source OceanBase Database Community Edition and granted the required privileges to the user. For more information, see Create a database user.
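
Before you configure the project, it can help to verify that this dedicated user can actually connect to the source tenant and that its grants look right. The check below is only an illustrative sketch: it assumes a MySQL-mode tenant and the third-party PyMySQL package, and the host, port, user name, and password are placeholders; the exact privileges to grant are described in Create a database user.

    # Illustrative connectivity and privilege check for the dedicated synchronization user.
    import pymysql

    conn = pymysql.connect(
        host="obproxy-host",       # placeholder ODP (OBProxy) or observer address
        port=2883,                 # placeholder port
        user="sync_user@tenant",   # placeholder user created for OMS
        password="******",
        charset="utf8mb4",
    )
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT version()")
            print("connected:", cursor.fetchone())
            cursor.execute("SHOW GRANTS")        # review the privileges granted to the user
            for row in cursor.fetchall():
                print(row)
    finally:
        conn.close()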

Limitations

  • Only physical tables can be synchronized.

  • OMS Community Edition supports Kafka V0.9, V1.0, and V2.x.

    Notice

    When the version of the Kafka instance is 0.9, schema synchronization is not supported.

  • During data synchronization, if you rename a source table to be synchronized and the new name is beyond the synchronization scope, the data of the source table will not be synchronized to the destination Kafka instance.

  • The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.

  • The data source identifiers and user accounts must be globally unique in OMS Community Edition.

  • OMS Community Edition can synchronize only objects whose database names, table names, and column names are ASCII-encoded and contain no special characters, namely line breaks and | " ' ` ( ) = ; / &. A name-checking sketch follows this list.

  • The source cannot be a standby OceanBase database.
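
Because the naming restrictions above are easy to miss in a large schema, a quick pre-flight scan can save a failed precheck later. The sketch below is an illustration only: it assumes a MySQL-mode tenant, the third-party PyMySQL package, and placeholder connection details and database name, and it flags table or column names that contain non-ASCII characters (including Chinese characters) or any of the special characters listed above.

    # Flag table and column names that violate the naming limitations (illustrative sketch).
    import pymysql

    SPECIAL_CHARACTERS = set('|"\'`()=;/&\n')   # special characters listed in the limitations

    def is_bad(name):
        return (not name.isascii()) or any(ch in SPECIAL_CHARACTERS for ch in name)

    conn = pymysql.connect(host="obproxy-host", port=2883, user="sync_user@tenant",
                           password="******", charset="utf8mb4")
    try:
        with conn.cursor() as cursor:
            cursor.execute(
                "SELECT table_name, column_name FROM information_schema.columns "
                "WHERE table_schema = %s",
                ("mydb",),                       # placeholder database name
            )
            for table_name, column_name in cursor.fetchall():
                if is_bad(table_name) or is_bad(column_name):
                    print("check this object:", table_name, column_name)
    finally:
        conn.close()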

Considerations

  • In a data synchronization project where the source is OceanBase Database Community Edition and DDL synchronization is enabled, if a RENAME operation is performed on a table in the source, we recommend that you restart the project to avoid data loss during incremental synchronization.

  • When an updated row contains a LOB column:

    • If the LOB column is updated, do not use the value stored in the LOB column before the UPDATE or DELETE operation.

      The following data types are stored in LOB columns: JSON, GIS, XML, user-defined type (UDT), and TEXT such as LONGTEXT and MEDIUMTEXT.

    • If the LOB column is not updated, the value stored in the LOB column before and after the UPDATE or DELETE operation is NULL.

  • If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization.

    For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.

  • When data transfer is resumed for a project, some data (from roughly the last minute) may be duplicated in the Kafka instance, so deduplication is required in downstream systems; a deduplication sketch follows this list.

  • During data synchronization from OceanBase Database Community Edition to a Kafka instance, if a statement that creates a unique index fails to execute in the source, the Kafka instance still consumes both the creation and the deletion DDL statements. If the downstream execution of the unique index creation DDL fails, you can ignore the exception.

    Notice

    Liboblog V2.2.x does not guarantee the order of DDL or DML statements and may cause data quality issues.
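
Because a resumed project can re-deliver roughly the last minute of data, downstream consumers should be idempotent, as noted above. One possible approach is sketched below; it assumes each record can be reduced to a stable key (here a hypothetical combination of table name, primary key values, and commit timestamp, which is an assumption about your payload rather than a documented OMS field) and keeps a bounded set of recently seen keys so that repeats are skipped.

    # Consumer-side deduplication sketch for records that may be re-delivered after a resume.
    from collections import OrderedDict

    class RecentKeys:
        """Bounded collection of recently seen record keys (oldest keys are evicted first)."""

        def __init__(self, capacity=100_000):
            self.capacity = capacity
            self._keys = OrderedDict()

        def seen(self, key):
            if key in self._keys:
                self._keys.move_to_end(key)
                return True
            self._keys[key] = None
            if len(self._keys) > self.capacity:
                self._keys.popitem(last=False)   # evict the oldest key
            return False

    def apply_to_downstream(record):
        pass   # placeholder for your own sink logic

    recent = RecentKeys()

    def handle(record):
        # Hypothetical key; adapt it to whatever uniquely identifies a change in your payload.
        key = (record.get("table"), str(record.get("primary_key")), record.get("commit_ts"))
        if recent.seen(key):
            return                       # duplicate delivered after a resume; skip it
        apply_to_downstream(record)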

Supported DDL operations

  • CREATE TABLE

    Notice

    The created table must be a synchronization object. To execute the CREATE TABLE statement on a synchronized table, execute the DROP TABLE statement on this table first.

  • ALTER TABLE

  • DROP TABLE

  • TRUNCATE TABLE

    With delayed deletion, the same transaction can contain two identical TRUNCATE TABLE DDL statements, so downstream consumers must handle them idempotently; see the sketch after this list.

  • ALTER TABLE…TRUNCATE PARTITION

  • CREATE INDEX

  • DROP INDEX

  • COMMENT ON TABLE

  • RENAME TABLE

    Notice

    The renamed table must be a synchronization object.
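
As noted for TRUNCATE TABLE above, delayed deletion can place two identical TRUNCATE TABLE statements in the same transaction, so DDL handling on the consumer side should tolerate exact repeats. A minimal way to do that is sketched below; it assumes only that the DDL text and the table name are available from the message, and the downstream execution function is a placeholder.

    # Skip an immediately repeated, identical DDL statement for the same table (sketch).
    last_applied_ddl = {}   # table name -> text of the last DDL applied to that table

    def execute_downstream_ddl(table, ddl_text):
        pass   # placeholder for your own DDL execution logic

    def apply_ddl(table, ddl_text):
        if last_applied_ddl.get(table) == ddl_text:
            return   # identical DDL repeated, e.g. the duplicated TRUNCATE TABLE; ignore it
        execute_downstream_ddl(table, ddl_text)
        last_applied_ddl[table] = ddl_text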

Procedure

  1. Create a data synchronization project.

    1. Log on to the console of OMS Community Edition.

    2. In the left-side navigation pane, click Data Synchronization.

    3. On the Data Synchronization page, click Create Synchronization Project in the upper-right corner.

  2. On the Select Source and Destination page, configure the parameters.

    • Synchronization Project Name: We recommend that you use a combination of digits and letters. The name must not contain any spaces and cannot exceed 64 characters in length.
    • Tag (Optional): Click the field and select a target tag from the drop-down list. You can also click Manage Tags to create, modify, and delete tags. For more information, see Manage data synchronization projects by using tags.
    • Source: If you have created an OceanBase-CE data source, select it from the drop-down list. Otherwise, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information about the parameters, see Create a data source of OceanBase Database Community Edition.
    • Destination: If you have created a Kafka data source, select it from the drop-down list. Otherwise, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information, see Create a Kafka data source.
  3. Click Next. On the Select Synchronization Type page, select the synchronization type for the current data synchronization project.

    Valid values: Schema Synchronization, Full Synchronization, and Incremental Synchronization. Full Synchronization supports the synchronization of tables without primary keys. Incremental Synchronization supports DML Synchronization and DDL Synchronization.

  4. (Optional) Click Next.

    To synchronize data from OceanBase Database Community Edition, you must specify OCP (Optional), Username, and Password for schema migration and incremental synchronization.

    If you have selected Schema Synchronization and Incremental Synchronization without configuring the required parameters for the source data source, the More About Data Sources dialog box appears and prompts you to configure them. For more information about the parameters, see Create a data source of OceanBase Database Community Edition.

    After you configure the parameters, click Test Connectivity. After the test succeeds, click Save.

  5. Click Next. On the Select Synchronization Objects page, select a synchronization scope.

    When you synchronize data from OceanBase Database Community Edition to a Kafka instance, you can synchronize data from multiple tables to multiple topics.

    1. In the left-side pane, select the objects to be synchronized.

    2. Click >.

    3. In the Map Object to Topic dialog box, select a mapping method.

      If you did not select Schema Synchronization when you set the synchronization type, you can select only Existing Topics here. If you have selected Schema Synchronization when you set the synchronization type, you can select only one mapping method to create or select a topic.

      For example, if you have selected Schema Synchronization, when you use both the Create Topic and Select Topic mapping methods or rename the topic, a precheck error will be returned due to option conflicts.

      • Create Topic: Enter the name of the new topic in the text box. The topic name must be 3 to 64 characters in length and can contain only letters, digits, hyphens (-), underscores (_), and periods (.).
      • Select Topic: OMS Community Edition allows you to query Kafka topics. Click Select Topic, and then find and select the topics to be synchronized from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears.
      • Batch Generate Topics: Topics are generated in batches in the format Topic_${Database Name}_${Table Name}.

      If you select Create Topic or Batch Generate Topics, you can query the created topics in the Kafka instance after the schema migration succeeds. By default, each topic has 3 partitions and 1 partition replica, and these values cannot be modified. If the generated topics do not meet your business needs, you can create topics on the Kafka side as needed; see the sketch after this procedure.

    4. Click OK.

    When you synchronize data from OceanBase Database Community Edition to a Kafka instance, OMS Community Edition allows you to import objects from text and perform the following operations on the objects in the destination database: change topics, set row filtering conditions, and remove a single object or all objects. Objects in the destination database are listed in the structure of Topic > Database > Table.

    The following operations are supported.
    • Import Objects:
      1. In the list on the right, click Import Objects in the upper-right corner.
      2. In the dialog box that appears, click OK.
         Notice: This operation overwrites your previous selections. Proceed with caution.
      3. In the Import Synchronization Objects dialog box, import the objects to be synchronized.
         You can import CSV files to rename databases or tables and set row filtering conditions. For more information, see Download and import the settings of synchronization objects.
      4. Click Validate.
      5. After the validation succeeds, click OK.
    • Change Topic: OMS Community Edition allows you to change topics for objects in the destination. For more information, see Change topics.
    • Settings: OMS Community Edition allows you to configure row-based filtering, select sharding columns, and select the columns to be synchronized.
      1. In the list on the right, move the pointer over the object that you want to set.
      2. Click Settings.
      3. In the Settings dialog box, you can perform the following operations:
         • In the Row Filters section, specify a standard SQL WHERE clause to filter data by row. For more information, see Use SQL conditions to filter data.
         • Select the sharding columns that you want to use from the Sharding Columns drop-down list. You can select multiple fields as sharding columns. This parameter is optional.
           Unless otherwise specified, select the primary keys as sharding columns. If the primary keys are not load-balanced, select load-balanced fields with unique identifiers as sharding columns to avoid potential performance issues. Sharding columns serve the following purposes:
           • Load balancing: If the destination table supports concurrent writes, the threads used for sending messages are selected based on the sharding columns.
           • Ordering: OMS Community Edition ensures that messages with the same sharding column values are received in order, preserving the execution sequence of the corresponding DML statements.
         • In the Select Columns section, select the columns to be synchronized. For more information, see Column filtering.
      4. Click OK.
    • Remove/Remove All: OMS Community Edition allows you to remove a single object or all objects that are mapped to the destination during data mapping.
      • Remove a single synchronization object: In the list on the right of the selection area, move the pointer over the target object and click Remove.
      • Remove all synchronization objects: In the list on the right of the selection area, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
  6. Click Next. On the Synchronization Options page, specify the following parameters.

    • Incremental Synchronization Start Timestamp:
      • If you have selected Full Synchronization as a synchronization type, this parameter defaults to the project startup time and cannot be modified.
      • If you have not selected Full Synchronization, set this parameter to a point in time, which is the current system time by default. You can select a point in time or enter a timestamp.
        Notice: You can select the current time or an earlier point in time. This parameter is closely related to the retention period of archive logs. Generally, you can start data synchronization from the current timestamp.
    • Serialization Method: The message format for synchronizing data to the Kafka instance. Valid values: Default, Canal, Dataworks (V2.0), SharePlex, DefaultExtendColumnType, Debezium, DebeziumFlatten, Maxwell, and DebeziumSmt. For more information, see Data formats.
      Notice:
      • Only MySQL tenants of OceanBase Database support Debezium, DebeziumFlatten, and DebeziumSmt.
      • If the message format is set to Dataworks, the DDL operations COMMENT ON TABLE and ALTER TABLE…TRUNCATE PARTITION cannot be synchronized.
    • Partitioning Rules: The rule for delivering data from an OceanBase database to a Kafka topic. OMS Community Edition supports Hash, Table, and One.
      • Hash: OMS Community Edition uses a hash algorithm to select the partition of a Kafka topic based on the hash value of the primary key or sharding column.
        Notice: The Hash rule supports only delivering data to all partitions.
      • Table: OMS Community Edition delivers all data in a table to the same partition and uses the table name as the hash key.
      • One: JSON messages are delivered to one partition of a topic to ensure ordering.
    • Business System Identification (Optional): Identifies the source business system of the data. The identifier must be 1 to 20 characters in length.
  7. Click Precheck.

    During the precheck, OMS Community Edition checks its connection to the destination Kafka instance. If an error is returned during the precheck:

    • You can identify and troubleshoot the problem, and then run the precheck again.

    • You can also click Skip in the Actions column of the failed precheck item. A dialog box appears, informing you of the impact of skipping the check. If you want to skip it, click OK.

  8. Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data synchronization project. You can start the project later as needed.

    OMS Community Edition allows you to modify the synchronization objects while a synchronization project is running. For more information, see View and modify synchronization objects. After a data synchronization project is started, it processes the synchronization objects based on the selected synchronization types. For more information, see the "View synchronization details" section of the View details of a data synchronization project topic.

    If the data synchronization project encounters a running exception due to a network failure or slow startup of processes, you can click Recover on the Synchronization Projects page or on the Details page of the synchronization project.
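
If the topics that schema migration creates (3 partitions and 1 replica, which cannot be modified) do not meet your needs, you can create the topics yourself before starting the project, using the batch-generation naming format Topic_${Database Name}_${Table Name} if you rely on Batch Generate Topics. The sketch below is only an illustration: it uses the third-party kafka-python admin client, and the broker address, database name, table names, partition count, and replication factor are placeholders to adapt to your environment.

    # Pre-create Kafka topics with custom partition and replica counts (illustrative sketch).
    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers=["kafka-host:9092"])   # placeholder broker

    tables = [("mydb", "orders"), ("mydb", "customers")]              # placeholder objects
    topics = [
        NewTopic(
            name="Topic_{}_{}".format(db, tbl),   # matches the batch-generation naming format
            num_partitions=6,                     # choose values that fit your workload
            replication_factor=2,
        )
        for db, tbl in tables
    ]
    admin.create_topics(new_topics=topics)
    admin.close()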

Previous topic: Data synchronization overview

Next topic: Create a project to synchronize data from OceanBase Database Community Edition to a RocketMQ instance