Kafka is a widely used, high-performance distributed event streaming platform. OceanBase Migration Service (OMS) Community Edition supports real-time data synchronization between OceanBase Database Community Edition and a self-managed Kafka instance, extending your message processing capabilities. OMS Community Edition is therefore widely used in business scenarios such as real-time data warehouse building, data queries, and report distribution.
By synchronizing data to message queue products, OMS Community Edition supports big data applications such as monitoring data aggregation, stream processing, and online/offline analysis. For more information about the data formats supported for OceanBase Database Community Edition, see Data formats.
Prerequisites
You have created a dedicated database user for data synchronization in the source OceanBase Database Community Edition and granted the required privileges to the user. For more information, see Create a database user.
Limitations
Only physical tables can be synchronized.
OMS Community Edition supports Kafka 0.9, 1.0, and 2.x.
Notice
When the version of the Kafka instance is 0.9, schema synchronization is not supported.
During data synchronization, if you rename a source table to be synchronized and the new name is beyond the synchronization scope, the data of the source table will not be synchronized to the target Kafka instance.
The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.
The data source identifiers and user accounts must be globally unique in OMS Community Edition.
OMS Community Edition supports the synchronization of only objects whose database name, table name, and column name are ASCII-encoded without special characters. The special characters are line breaks and | " ' ` ( ) = ; / &.
The source cannot be a standby OceanBase database.
Considerations
To ensure the performance of a data synchronization task, we recommend that you synchronize no more than 1,000 tables at a time.
In a data synchronization task where the source is OceanBase Database Community Edition and DDL synchronization is enabled, if a RENAME operation is performed on a table in the source, we recommend that you restart the task to avoid data loss during incremental synchronization.
If the source is an OceanBase Database Community Edition version in the range of V4.0.0 to V4.3.x, excluding V4.2.5 BP1, and you have selected Incremental Synchronization, you need to configure the STORED attribute for generated columns. For more information, see Generated column operations. Otherwise, information about generated columns is not saved in incremental logs, which may lead to exceptions during incremental data synchronization.
Take note of the following items when an updated row contains a LOB column:
- If the LOB column is updated, do not use the value stored in the LOB column before the UPDATE or DELETE operation. The following data types are stored in LOB columns: JSON, GIS, XML, user-defined type (UDT), and TEXT such as LONGTEXT and MEDIUMTEXT.
- If the LOB column is not updated, the value stored in the LOB column before and after the UPDATE or DELETE operation is NULL.
If the clocks between nodes, or between the client and the server, are out of synchronization, the latency measured during incremental synchronization may be inaccurate. For example, if the measuring clock runs behind the standard time, the calculated latency can be negative; if it runs ahead, the calculated latency can be inflated.
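For illustration, the following hypothetical Java helper computes the apparent latency of a record consumed from Kafka; the value inherits any clock skew between machines and can go negative even when the pipeline is healthy:

    import org.apache.kafka.clients.consumer.ConsumerRecord;

    public class LatencyProbe {
        // record.timestamp() is stamped by the producer (or broker), so the difference
        // reflects any clock skew between that machine and this one.
        static long apparentLatencyMs(ConsumerRecord<?, ?> record) {
            return System.currentTimeMillis() - record.timestamp();
        }
    }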
When data transfer is resumed for a task, some data (within the last minute) may be duplicated in the Kafka instance. Therefore, deduplication is required in downstream systems.
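Because duplicates are confined to a short window, a downstream consumer can deduplicate with a bounded cache of recently seen events. The following is a minimal sketch, assuming each event's key plus payload identifies it uniquely (replayed duplicates carry identical payloads); the bootstrap address, group ID, and topic name are placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class DedupConsumer {
        // Bounded LRU set of recently seen event identities.
        private static final int CACHE_SIZE = 100_000;
        private static final Set<String> seen = Collections.newSetFromMap(
                new LinkedHashMap<String, Boolean>() {
                    protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                        return size() > CACHE_SIZE;
                    }
                });

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "oms-downstream");          // placeholder
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singleton("Topic_mydb_mytable")); // placeholder
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        String id = rec.key() + "|" + rec.value(); // event identity (assumption)
                        if (!seen.add(id)) continue;               // skip a replayed duplicate
                        process(rec);
                    }
                }
            }
        }

        private static void process(ConsumerRecord<String, String> rec) {
            System.out.printf("%s@%d %s%n", rec.topic(), rec.offset(), rec.value());
        }
    }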
During data synchronization from OceanBase Database Community Edition to a Kafka instance, if a statement that creates a unique index fails to execute in the source, the Kafka instance still consumes both the index creation DDL statement and the corresponding deletion DDL statement. If the downstream execution of the unique index creation DDL statement fails, ignore the exception.
Notice
Liboblog V2.2.x does not guarantee the order of DDL or DML statements and may cause data quality issues.
If the binlog_row_image value is not FULL when the application starts, set it to FULL and then restart the application. Otherwise, OceanBase Database Community Edition will lack the log information required for data synchronization. The command for setting the value is as follows:

    set global binlog_row_image = 'FULL';
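If you manage this setting programmatically, a JDBC check over the MySQL protocol can verify and fix the value. This is a sketch with placeholder connection settings; the account needs the privilege to set global variables:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class BinlogRowImageCheck {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection settings for a MySQL-compatible OceanBase tenant.
            String url = "jdbc:mysql://127.0.0.1:2881/test";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SHOW VARIABLES LIKE 'binlog_row_image'")) {
                if (rs.next() && !"FULL".equalsIgnoreCase(rs.getString("Value"))) {
                    stmt.execute("SET GLOBAL binlog_row_image = 'FULL'");
                    System.out.println("binlog_row_image set to FULL; restart the application.");
                }
            }
        }
    }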
Supported DDL operations for synchronization
- CREATE TABLE
  Notice: The created table must be a synchronization object. In addition, you need to execute the DROP TABLE statement on a synchronized table, and then execute the CREATE TABLE statement on this table.
- ALTER TABLE
- DROP TABLE
- TRUNCATE TABLE
  In delayed deletion, the same transaction contains two identical TRUNCATE TABLE DDL statements. In this case, idempotence is implemented for downstream consumption.
- ALTER TABLE … TRUNCATE PARTITION
- CREATE INDEX
- DROP INDEX
- COMMENT ON TABLE
- RENAME TABLE
  Notice: The renamed table must be a synchronization object.
Procedure
To create a data synchronization task, perform the following steps:

1. Log in to the console of OMS Community Edition.
2. In the left-side navigation pane, click Data Synchronization.
3. On the Data Synchronization page, click Create Synchronization Task in the upper-right corner.
4. On the Select Source and Target page, configure the parameters.

   - Task Name: We recommend that you set it to a combination of digits and letters. It must not contain any spaces and cannot exceed 64 characters in length.
   - Tag (Optional): Click the field and select a tag from the drop-down list. You can also click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data synchronization tasks.
   - Source: If you have created an OceanBase-CE data source, select it from the drop-down list. Otherwise, click New Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information about the parameters, see Create an OceanBase-CE data source.
   - Target: If you have created a Kafka data source, select it from the drop-down list. Otherwise, click New Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create a Kafka data source.

5. Click Next. On the Select Synchronization Type page, specify the synchronization types for the current data synchronization task.

   The options for Synchronization Type are Schema Synchronization, Full Synchronization, and Incremental Synchronization. Full Synchronization supports the synchronization of tables without a primary key. Incremental Synchronization supports DML synchronization and DDL synchronization.

6. (Optional) Click Next.

   To synchronize data from OceanBase Database Community Edition, you need to specify OCP (Optional), DRC User Name, and Password for schema synchronization and incremental synchronization.

   If you have selected Schema Synchronization and Incremental Synchronization without configuring the required parameters for the source database, the More About Data Sources dialog box appears, prompting you to configure the parameters. For more information about the parameters, see Create an OceanBase-CE data source.

   After you configure the parameters, click Test Connectivity. After the test succeeds, click Save.
7. Click Next. On the Select Synchronization Objects page, select a synchronization scope.

   You can select Specify Objects or Match Rules to specify the synchronization objects. The following procedure describes how to specify synchronization objects by using the Specify Objects option. For information about how to specify synchronization objects by using the Match Rules option, see Configure matching rules for synchronization objects.

   When you synchronize data from OceanBase Database Community Edition to a Kafka instance, you can synchronize data from multiple tables to multiple topics.

   1. In the left-side pane, select the objects to be synchronized.
   2. Click >.
   3. In the Map Object to Topic dialog box, select a mapping method.

      If you did not select Schema Synchronization as the synchronization type, you can select only Existing Topics here. If you have selected Schema Synchronization when specifying the synchronization types, you can select only one mapping method to create or select a topic. For example, if you have selected Schema Synchronization and you use both the Create Topic and Select Topic mapping methods, or you rename the topic, a precheck error is returned due to option conflicts.

      - Create Topic: Enter the name of the new topic in the text box. The topic name can contain 3 to 64 characters and supports only letters, digits, hyphens (-), underscores (_), and periods (.).
      - Select Topic: OMS Community Edition allows you to query Kafka topics. Click Select Topic, and then find and select a topic for synchronization from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears.
      - Batch Generate Topics: The format for generating topics in batches is Topic_${Database Name}_${Table Name}.

      If you select Create Topic or Batch Generate Topics, you can query the created topics in the Kafka instance after the schema migration succeeds. By default, the number of partitions is 3 and the number of partition replicas is 1, and these values cannot be modified. If the topics do not meet your business needs, you can create topics in the target Kafka instance as needed, as shown in the sketch after this step.
   4. Click OK.
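If the default topic settings do not fit your workload, you can pre-create topics yourself. The following is a minimal sketch using the Kafka AdminClient; the bootstrap address, topic name, and partition/replica counts are placeholders to adapt to your environment:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                // Name follows the batch-generation convention Topic_${Database Name}_${Table Name};
                // pick partition and replica counts that fit your workload.
                NewTopic topic = new NewTopic("Topic_mydb_mytable", 6, (short) 3);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }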
   When you synchronize data from OceanBase Database Community Edition to a Kafka instance, OMS Community Edition allows you to import objects from text and perform the following operations on the objects in the target database: change topics, set row filtering conditions, and remove a single object or all objects. Objects in the target database are listed in the structure of Topic > Database > Table.

   - Import objects:
     1. In the list on the right, click Import Objects in the upper-right corner.
     2. In the dialog box that appears, click OK.
        Notice: This operation overwrites previous selections. Proceed with caution.
     3. In the Import Synchronization Objects dialog box, import the objects to be synchronized. You can import CSV files to rename databases/tables and set row filtering conditions. For more information, see Download and import the settings of synchronization objects.
     4. Click Validate.
     5. After the validation succeeds, click OK.
   - Change topics: OMS Community Edition allows you to change the topic for objects in the target database. For more information, see Change the topic.
   - Configure settings: OMS Community Edition allows you to configure row-based filtering, select sharding columns, and select columns to be synchronized.
     1. In the list on the right, move the pointer over the object that you want to set.
     2. Click Settings.
     3. In the Settings dialog box, perform the following operations:
        - In the Row Filters section, specify a standard SQL WHERE clause to filter data by row. For more information, see Use SQL conditions to filter data.
        - Select the sharding columns that you want to use from the Sharding Columns drop-down list. You can select multiple columns as sharding columns. This parameter is optional. Unless otherwise specified, select the primary key as sharding columns. If the primary key is not load-balanced, select load-balanced columns with unique identifiers as sharding columns to avoid potential performance issues. Sharding columns serve the following purposes:
          - Load balancing: If the target table supports concurrent writes, messages can be distributed across sending threads based on the sharding columns.
          - Orderliness: OMS Community Edition ensures that messages are received in order if the values of the sharding columns are the same, which preserves the sequence in which DML statements were executed. For how keyed messages preserve per-key order in Kafka, see the producer sketch after this list.
        - In the Select Columns section, select the columns to be synchronized. For more information, see Column filtering.
     4. Click OK.
   - Remove one or all objects: OMS Community Edition allows you to remove a single object or all objects to be synchronized to the target database during data mapping.
     - Remove a single synchronization object: In the list on the right, move the pointer over the object that you want to remove, and click Remove.
     - Remove all synchronization objects: In the list on the right, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
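The ordering guarantee maps onto Kafka semantics: Kafka preserves order within a partition, and records that share a key are routed to the same partition. The following hypothetical producer sketch keys each message by the sharding column value, so changes to the same row arrive in order (topic, key, and payloads are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KeyedProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                String shardingKey = "order_id=42"; // value of the sharding column(s)
                // Same key -> same partition -> per-row changes stay ordered.
                producer.send(new ProducerRecord<>("Topic_mydb_orders", shardingKey, "INSERT ..."));
                producer.send(new ProducerRecord<>("Topic_mydb_orders", shardingKey, "UPDATE ..."));
            }
        }
    }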
8. Click Next. On the Synchronization Options page, configure the following parameters.

   To view or modify the parameters of the Connector or Incr-Sync component, click Configuration Details in the upper-right corner of the Full Synchronization or Incremental Synchronization section. For more information about the parameters, see Component parameters.

   Full synchronization

   The following parameters are displayed only if you have selected Full Synchronization on the Select Synchronization Type page.

   - Concurrency Speed: Valid values: Stable, Normal, Fast, and Custom. The amount of resources consumed by a full synchronization task depends on the synchronization performance. If you select Custom, you can set Read Concurrency, Write Concurrency, and JVM Memory as needed.

   Incremental synchronization
   The following parameters are displayed only if you have selected Incremental Synchronization on the Select Synchronization Type page.

   - Concurrency Speed: Valid values: Stable, Normal, Fast, and Custom. The amount of resources consumed by an incremental synchronization task depends on the synchronization performance. If you select Custom, you can set Read Concurrency, Write Concurrency, and JVM Memory as needed.
   - Incremental Synchronization Start Timestamp:
     - If you have selected Full Synchronization as the synchronization type, this parameter is not displayed.
     - If you did not select Full Synchronization as the synchronization type, set this parameter to a certain point in time; the default value is the current system time. For more information, see Set an incremental synchronization timestamp. A downstream analogue of starting from a timestamp is sketched below.
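On the consuming side, you can position a Kafka consumer at a point in time in a similar spirit by resolving timestamps to offsets. This is a downstream convenience unrelated to OMS internals; the topic, partition, and timestamp below are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    public class SeekByTimestamp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "replay");                  // placeholder
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            long startMs = System.currentTimeMillis() - 3_600_000L; // one hour ago, for example
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("Topic_mydb_mytable", 0); // placeholder
                consumer.assign(Collections.singleton(tp));
                OffsetAndTimestamp oat =
                        consumer.offsetsForTimes(Collections.singletonMap(tp, startMs)).get(tp);
                if (oat != null) consumer.seek(tp, oat.offset()); // subsequent poll() starts here
            }
        }
    }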
Advanced options
   - Serialization Method: The message format for synchronizing data to a Kafka instance. Valid values: Default, Canal, DataWorks (V2.0 supported), SharePlex, DefaultExtendColumnType, Debezium, DebeziumFlatten, Maxwell, and DebeziumSmt. For more information, see Data formats.
     Notice:
     - At present, only MySQL-compatible tenants of OceanBase Database support Debezium, DebeziumFlatten, and DebeziumSmt.
     - If the message format is set to DataWorks, the DDL operations COMMENT ON TABLE and ALTER TABLE … TRUNCATE PARTITION cannot be synchronized.
   - Partitioning Rules: The rule for delivering data from OceanBase Database to a Kafka topic. Valid values: Hash, Table, and One. For the delivery of DDL statements in different scenarios, see the description below and the sketch that follows it.
     - Hash: OMS Community Edition uses a hash algorithm to select the partition of a Kafka topic based on the hash value of the primary key or sharding column.
     - Table: OMS Community Edition delivers all data in a table to the same partition and uses the table name as the hash key.
     - One: JSON messages are delivered to a single partition under a topic to ensure ordering.
   - Business System Identification (Optional): The identifier of the business system from which the data originates. The business system identifier consists of 1 to 20 characters.

   The following describes the delivery of a DDL statement in different scenarios under each partitioning rule.
   - Hash
     - DDL statement involving multiple tables (example: RENAME TABLE): The statement is delivered to all partitions of the topics associated with the involved tables. Assume that the DDL statement involves three tables: A, B, and C. If A is associated with Topic 1, B is associated with Topic 2, and C is not involved in the current task, the statement is delivered to all partitions of Topic 1 and Topic 2.
     - DDL statement involving unknown tables (example: DROP INDEX): The statement is delivered to all partitions of all topics of the current task. Assume that the DDL statement cannot be identified by OMS Community Edition. If the current task has three topics, the statement is delivered to all partitions of these three topics.
     - DDL statement involving a single table: The statement is delivered to all partitions of the topic associated with the table.
   - Table
     - DDL statement involving multiple tables (example: RENAME TABLE): The statement is delivered to specific partitions of the topics associated with the tables; each partition corresponds to the hash value of an involved table name. Assume that the DDL statement involves three tables: A, B, and C. If A is associated with Topic 1, B is associated with Topic 2, and C is not involved in the current task, the statement is delivered to the partitions corresponding to the hash values of the involved table names in Topic 1 and Topic 2.
     - DDL statement involving unknown tables (example: DROP INDEX): The statement is delivered to all partitions of all topics of the current task. Assume that the DDL statement cannot be identified by OMS Community Edition. If the current task has three topics, the statement is delivered to all partitions of these three topics.
     - DDL statement involving a single table: The statement is delivered to a partition of the topic associated with the table.
   - One
     - DDL statement involving multiple tables (example: RENAME TABLE): The statement is delivered to a fixed partition of the topics associated with the tables. Assume that the DDL statement involves three tables: A, B, and C. If A is associated with Topic 1, B is associated with Topic 2, and C is not involved in the current task, the statement is delivered to a fixed partition of Topic 1 and Topic 2.
     - DDL statement involving unknown tables (example: DROP INDEX): The statement is delivered to a fixed partition of all topics of the current task. Assume that the DDL statement cannot be identified by OMS Community Edition. If the current task has three topics, the statement is delivered to a fixed partition of these three topics.
     - DDL statement involving a single table: The statement is delivered to a fixed partition of the topic associated with the table.
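To make the Hash rule concrete, the following sketch shows the general idea of hash-based partition selection. OMS's actual hash function is internal, so this only illustrates how a primary-key or sharding-column value maps to a stable partition:

    public class HashRule {
        // Conceptual only (not OMS's actual algorithm): the same sharding key
        // always maps to the same partition, which yields per-key ordering.
        static int choosePartition(String shardingKey, int numPartitions) {
            return Math.floorMod(shardingKey.hashCode(), numPartitions);
        }

        public static void main(String[] args) {
            System.out.println(choosePartition("order_id=42", 3)); // stable result
        }
    }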
9. Click Pre-check.

   During the precheck, OMS Community Edition checks the connection to the target Kafka instance. If an error is returned during the precheck, you can perform either of the following operations:
   - Identify and troubleshoot the problem, and then perform the precheck again.
   - Click Skip in the Operation column of the failed precheck item. In the dialog box that prompts the consequences of the operation, click OK.

10. Click Start Task. If you do not need to start the task now, click Save to go to the details page of the task. You can start the task later as needed.
OMS Community Edition allows you to modify the synchronization objects while the data synchronization task is running. For more information, see View and modify synchronization objects. After the data synchronization task is started, it is executed based on the selected synchronization types. For more information, see the "View synchronization details" section in the View details of a data synchronization task topic.

If the data synchronization task encounters an execution exception due to a network failure or slow process startup, you can click Resume on the Synchronization Tasks page or on the Details page of the synchronization task.