Kafka is a widely used, high-performance distributed event streaming platform. OceanBase Migration Service (OMS) Community Edition supports real-time data synchronization between OceanBase Database Community Edition and a self-managed Kafka instance, extending message processing capabilities. OMS Community Edition is therefore widely used in business scenarios such as real-time data warehouse building, data query, and report distribution.
OMS Community Edition allows you to synchronize data to message queue products, extending the application of your business data in big data scenarios such as monitoring data aggregation, streaming data processing, and online/offline analysis. For more information about the data formats supported for OceanBase Database Community Edition, see Data formats.
Prerequisites
You have created a dedicated database user for data synchronization in the source OceanBase Database Community Edition and granted the required privileges to the user. For more information, see Create a database user.
Limitations
Only physical tables can be synchronized.
OMS Community Edition supports Kafka 0.9, 1.0, and 2.x.
Notice
When the version of the Kafka instance is 0.9, schema synchronization is not supported.
During data synchronization, if you rename a source table to be synchronized and the new name is beyond the synchronization scope, the data of the source table will not be synchronized to the target Kafka instance.
The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.
The data source identifiers and user accounts must be globally unique in OMS.
OMS Community Edition supports the synchronization of only objects whose database name, table name, and column name are ASCII-encoded without special characters. The special characters are line breaks and the following characters: | " ' ` ( ) = ; / &
The source cannot be a standby OceanBase database.
When you select Incremental Synchronization > Synchronize Heartbeat Data in the Select Synchronization Type step, the following limitations apply:
Before heartbeat data can be sent, at least one record must have been updated normally in the source. If multiple topics exist, at least one record must have changed in the tables corresponding to each topic. Otherwise, OMS Community Edition cannot obtain the topic information and no heartbeat data is sent to Kafka.
Because heartbeat data is accurate only to the second, the downstream cannot determine the order of heartbeat data within a given second. It is only guaranteed that all data up to the second before the heartbeat has been sent.
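For example, you can touch one row in a table within the synchronization scope so that OMS Community Edition can resolve the topic information before heartbeats are expected. A minimal sketch, assuming a hypothetical table db1.t1 with columns id and gmt_modified:

    -- Any committed change on a table in the synchronization scope works.
    UPDATE db1.t1 SET gmt_modified = NOW() WHERE id = 1;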
Considerations
To ensure the performance of a data synchronization task, we recommend that you synchronize no more than 1,000 tables at a time.
In a data synchronization task where the source is OceanBase Database Community Edition and DDL synchronization is enabled, if a RENAME operation is performed on a table in the source, we recommend that you restart the task to avoid data loss during incremental synchronization.
If the source runs an OceanBase Database Community Edition version in the range of V4.0.0 to V4.3.x, excluding V4.2.5 BP1, and you have selected Incremental Synchronization, you need to configure the STORED attribute for generated columns (see the example after this list). For more information, see Generated column operations. Otherwise, information about generated columns will not be saved in incremental logs, which may lead to exceptions during incremental data synchronization.
Take note of the following items when an updated row contains a LOB column:
- If the LOB column is updated, do not use the value stored in the LOB column before the UPDATE or DELETE operation. The following data types are stored in LOB columns: JSON, GIS, XML, user-defined type (UDT), and TEXT types such as LONGTEXT and MEDIUMTEXT.
- If the LOB column is not updated, the value stored in the LOB column before and after the UPDATE or DELETE operation is NULL.
If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization.
For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.
When data transfer is resumed for a task, some data (within the last minute) may be duplicated in the Kafka instance. Therefore, deduplication is required in downstream systems.
During data synchronization from OceanBase Database Community Edition to a Kafka instance, if a statement that creates a unique index fails to execute in the source, the Kafka instance still consumes both the creation and the deletion DDL statements. If the DDL statement for unique index creation fails to execute downstream, you can ignore the exception.
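The following sketch shows a generated column with the STORED attribute, as referenced in the considerations above. The table and column names are hypothetical; see Generated column operations for the exact procedure in your version:

    -- A STORED generated column is materialized in the row, so its value
    -- is recorded in incremental logs.
    CREATE TABLE t_order (
      id BIGINT PRIMARY KEY,
      price DECIMAL(10, 2),
      quantity INT,
      total DECIMAL(12, 2) GENERATED ALWAYS AS (price * quantity) STORED
    );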
Notice
Liboblog V2.2.x does not guarantee the order of DDL or DML statements and may cause data quality issues.
If the value of binlog_row_image is not FULL when the application starts, set it to FULL and then restart the application. Otherwise, the logs of OceanBase Database Community Edition will lack the information required for data synchronization. The command for setting the value is as follows:

    set global binlog_row_image = 'FULL';
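To verify the setting before and after the change, you can query the variable by using the MySQL-compatible syntax:

    SHOW VARIABLES LIKE 'binlog_row_image';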
Supported DDL operations for synchronization
- CREATE TABLE
  Notice: The created table must be a synchronization object. In addition, you need to execute the DROP TABLE statement on a synchronized table, and then execute the CREATE TABLE statement on this table.
- ALTER TABLE
- DROP TABLE
- TRUNCATE TABLE
  In delayed deletion, the same transaction contains two identical TRUNCATE TABLE DDL statements. In this case, idempotence is implemented for downstream consumption.
- ALTER TABLE … TRUNCATE PARTITION
- CREATE INDEX
- DROP INDEX
- COMMENT ON TABLE
- RENAME TABLE
  Notice: The renamed table must be a synchronization object.
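For example, to re-create a table that is already a synchronization object, drop it first and then create it again, so that both DDL statements fall within the supported scope above. A minimal sketch with a hypothetical table:

    -- Drop the synchronized table first, then re-create it.
    DROP TABLE t_order;
    CREATE TABLE t_order (
      id BIGINT PRIMARY KEY,
      status VARCHAR(16)
    );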
Procedure
Create a data synchronization task.
Log in to the console of OMS Community Edition.
In the left-side navigation bar, click Data Synchronization.
On the Data Synchronization page, click Create Synchronization Task in the upper-right corner.
On the Select Source and Target page, configure the parameters.
- Task Name: We recommend that you set it to a combination of digits and letters. It must not contain any spaces and cannot exceed 64 characters in length.
- Tag (Optional): Click the text box and select the target tag from the drop-down list. You can also click Manage Tags to create, modify, and delete tags. For more information, see Manage Data Synchronization Tasks with Tags.
- Source: If you have already created an OceanBase-CE data source, select it from the drop-down list. If not, click New Data Source in the drop-down list and add it in the dialog box on the right. For parameter details, see Create an OceanBase-CE Data Source.
- Target: If you have already created a Kafka data source, select it from the drop-down list. If not, click New Data Source in the drop-down list and add it in the dialog box on the right. For parameter details, see Create a Kafka Data Source.
Click Next. On the Select Synchronization Type page, specify the synchronization types for the current data synchronization task.
Synchronization types include Schema Synchronization, Full Synchronization, and Incremental Synchronization. Full Synchronization supports synchronization of tables without primary keys, while Incremental Synchronization supports DML synchronization, DDL synchronization, and heartbeat data synchronization.
(Optional) Click Next.
When the source is OceanBase Database Community Edition, schema synchronization and incremental synchronization require the configuration of OCP Cluster (optional), DRC User Name, and Password.
If you select Schema Synchronization or Incremental Synchronization but the source OceanBase Database Community Edition data source is not configured with the corresponding parameters, a dialog box named Data Source Supplementary Information will appear to remind you to configure them. For details about the parameters, see Create an OceanBase-CE Data Source.
After you complete the information, click Test Connectivity. After the connection test succeeds, click Save.
Click Next to go to the Select Synchronization Range page, and select the synchronization scope.
You can select Specify Objects or Match Rules to specify the synchronization objects. The following procedure describes how to specify synchronization objects by using the Specify Objects option. For information about the procedure for specifying synchronization objects by using the Match Rules option, see Configure matching rules for synchronization objects.
When you synchronize data from OceanBase Database Community Edition to a Kafka instance, you can synchronize data from multiple tables to multiple topics.
In the left-side pane, select the objects to be synchronized.
Click >.
In the Map Object to Topic dialog box, select a mapping method.
If you did not select Schema Synchronization as the synchronization type, you can select only Existing Topics here. If you selected Schema Synchronization when specifying the synchronization type, you can use only one mapping method to create or select topics.
For example, if you have selected Schema Synchronization and you use both the Create Topic and Select Topic mapping methods, or you rename the topic, a precheck error will be returned due to option conflicts.
- Create Topic: Enter the name of the new topic in the text box. The topic name can contain 3 to 64 characters and supports only letters, digits, hyphens (-), underscores (_), and periods (.).
- Select Topic: OMS Community Edition allows you to query Kafka topics. You can click Select Topic and then find and select a topic for synchronization from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears.
- Batch Generate Topics: The format for generating topics in batches is Topic_${Database Name}_${Table Name}. For example, table t1 in database db1 maps to the topic Topic_db1_t1.
If you select Create Topic or Batch Generate Topics, after the schema synchronization succeeds, you can query the created topics in the Kafka instance. By default, the number of partitions is 3 and the number of partition replicas is 1. These parameters cannot be modified. If the topics do not meet your business needs, you can create topics in the target database as needed.
Click OK.
When you synchronize data from OceanBase Database Community Edition to a Kafka instance, OMS Community Edition allows you to import objects from text and perform the following operations on the objects in the target database: change topics, set row filtering conditions, and remove a single object or all objects. Objects in the target database are listed in the structure of Topic > Database > Table.
Import objects: Import the objects to be synchronized from text.
- In the list on the right, click Import Objects in the upper-right corner.
- In the dialog box that appears, click OK.
  Notice: This operation will overwrite previous selections. Proceed with caution.
- In the Import Synchronization Objects dialog box, import the objects to be synchronized. You can import CSV files to rename databases/tables and set row filtering conditions. For more information, see Download and import the settings of synchronization objects.
- Click Validate.
- After the validation succeeds, click OK.
Change topics: OMS Community Edition allows you to change the topic for objects in the target database. For more information, see Change the topic.
Configure settings: OMS Community Edition allows you to configure row-based filtering, select sharding columns, and select columns to be synchronized.
- In the list on the right, move the pointer over the object that you want to set.
- Click Settings.
- In the Settings dialog box, you can perform the following operations:
  - In the Row Filters section, specify a standard SQL WHERE clause to filter data by row (see the example after this list). For more information, see Use SQL conditions to filter data.
  - Select the sharding columns that you want to use from the Sharding Columns drop-down list. You can select multiple columns as sharding columns. This parameter is optional. Unless otherwise specified, select the primary key as sharding columns. If the primary key is not load-balanced, select load-balanced columns with unique identifiers as sharding columns to avoid potential performance issues. Sharding columns can be used for the following purposes:
    - Load balancing: Threads used for sending messages can be recognized based on the sharding columns if the target table supports concurrent writes.
    - Orderliness: OMS Community Edition ensures that messages are received in order if the values of the sharding columns are the same. The orderliness specifies the sequence of executing DML statements for a column.
  - In the Select Columns section, select the columns to be synchronized. For more information, see Column filtering.
- Click OK.
Remove one or all objects: OMS Community Edition allows you to remove a single object or all objects to be synchronized to the target database during data mapping.
- Remove a single synchronization object: In the list on the right, move the pointer over the object that you want to remove, and click Remove.
- Remove all synchronization objects: In the list on the right, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
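A row filter is a standard SQL WHERE condition. A minimal sketch, assuming hypothetical columns status and gmt_create:

    -- Only rows matching this condition are synchronized to the target topic.
    status = 'PAID' AND gmt_create >= '2024-01-01 00:00:00'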
Click Next. On the Synchronization Options page, configure the following parameters.
To view or modify parameters of the Connector or Incr-Sync component, click Configuration Details in the upper-right corner of the Full Synchronization or Incremental Synchronization section. For more information about the parameters, see Component parameters.
Full synchronization
The following parameters are displayed only if you have selected Full Synchronization on the Select Synchronization Type page.
- Concurrency Speed: Valid values: Stable, Normal, Fast, and Custom. The amount of resources to be consumed by a full synchronization task depends on the synchronization performance. If you select Custom, you can set Read Concurrency, Write Concurrency, and JVM Memory as needed.
Incremental synchronization
The following parameters are displayed only if you have selected Incremental Synchronization on the Select Synchronization Type page.
- Concurrency Speed: Valid values: Stable, Normal, Fast, and Custom. The amount of resources to be consumed by an incremental synchronization task depends on the synchronization performance. If you select Custom, you can set Read Concurrency, Write Concurrency, and JVM Memory as needed.
- Incremental Synchronization Start Timestamp:
  - If you have selected Full Synchronization as the synchronization type, this parameter is not displayed.
  - If you did not select Full Synchronization as the synchronization type, set this parameter to a certain point in time, which is the current system time by default. For more information, see Set an incremental synchronization timestamp.
Advanced options
- Serialization Method: The message format for synchronizing data to a Kafka instance. Valid values: Default, Canal, Dataworks (version 2.0 supported), SharePlex, DefaultExtendColumnType, Debezium, DebeziumFlatten, Maxwell, and DebeziumSmt. For more information, see Data formats.
  Notice:
  - At present, only MySQL-compatible tenants of OceanBase Database support Debezium, DebeziumFlatten, and DebeziumSmt.
  - If the message format is set to Dataworks, the DDL operations COMMENT ON TABLE and ALTER TABLE … TRUNCATE PARTITION cannot be synchronized.
- Partitioning Rules: The rule for synchronizing data from OceanBase Database to a Kafka topic. Valid values: Hash, Table, and One. For more information about the delivery of DDL statements in different scenarios and examples, see the table below.
  - Hash: OMS Community Edition uses a hash algorithm to select the partition of a Kafka topic based on the hash value of the primary key or sharding column.
  - Table: OMS Community Edition delivers all data in a table to the same partition and uses the table name as the hash key.
  - One: JSON messages are delivered to one partition of a topic to ensure ordering.
- Business System Identification (Optional): The identifier of the source business system of the data. The business system identifier consists of 1 to 20 characters.
The following table describes the delivery of a DDL statement in different scenarios.
Hash
- DDL statement involving multiple tables (for example, RENAME TABLE): The statement is delivered to all partitions of the topics associated with the involved tables. Assume that the DDL statement involves three tables: A, B, and C. If A is associated with Topic 1, B is associated with Topic 2, and C is not involved in the current task, the statement is delivered to all partitions of Topic 1 and Topic 2.
- DDL statement involving unknown tables (for example, DROP INDEX): The statement is delivered to all partitions of all topics of the current task. Assume that OMS Community Edition cannot identify the tables involved in the DDL statement. If the current task has three topics, the statement is delivered to all partitions of these three topics.
- DDL statement involving a single table: The statement is delivered to all partitions of the topic associated with the table.
Table
- DDL statement involving multiple tables (for example, RENAME TABLE): The statement is delivered to specific partitions of the topics associated with the tables. The partitions correspond to the hash values of the names of the involved tables. Assume that the DDL statement involves three tables: A, B, and C. If A is associated with Topic 1, B is associated with Topic 2, and C is not involved in the current task, the statement is delivered to the partitions corresponding to the hash values of the involved table names in Topic 1 and Topic 2.
- DDL statement involving unknown tables (for example, DROP INDEX): The statement is delivered to all partitions of all topics of the current task. Assume that OMS Community Edition cannot identify the tables involved in the DDL statement. If the current task has three topics, the statement is delivered to all partitions of these three topics.
- DDL statement involving a single table: The statement is delivered to a partition of the topic associated with the table.
One
- DDL statement involving multiple tables (for example, RENAME TABLE): The statement is delivered to a fixed partition of the topics associated with the tables. Assume that the DDL statement involves three tables: A, B, and C. If A is associated with Topic 1, B is associated with Topic 2, and C is not involved in the current task, the statement is delivered to a fixed partition of Topic 1 and Topic 2.
- DDL statement involving unknown tables (for example, DROP INDEX): The statement is delivered to a fixed partition of all topics of the current task. Assume that OMS Community Edition cannot identify the tables involved in the DDL statement. If the current task has three topics, the statement is delivered to a fixed partition of these three topics.
- DDL statement involving a single table: The statement is delivered to a fixed partition of the topic associated with the table.
Click Precheck.
In the Precheck section, OMS Community Edition checks the connection with the target Kafka instance. If an error is returned during the precheck, you can perform the following operations:
Identify and troubleshoot the problem and then perform the precheck again.
Click Skip in the Actions column of the failed precheck item. In the dialog box that describes the consequences of the operation, click OK.
Click Start Task. If you do not need to start the task now, click Save to go to the details page of the task. You can start the task later as needed.
OMS Community Edition allows you to modify the synchronization objects when the data synchronization task is running. For more information, see View and modify synchronization objects. After the data synchronization task is started, it will be executed based on the selected synchronization types. For more information, see the View synchronization details section in the View details of a data synchronization task topic.
If the data synchronization task encounters an execution exception due to a network failure or slow startup of processes, you can click Resume on the Synchronization Tasks or Details page of the synchronization task.