This topic describes how to synchronize data from a MySQL or Oracle tenant of OceanBase Database to a DataHub instance.
Prerequisites
You have created a dedicated database user for data synchronization in the source OceanBase database and granted corresponding privileges to the user. For more information, see Create a database user.
You have created data sources for the source and destination. For more information, see Create a physical OceanBase data source and Create a DataHub data source.
Limitations
OceanBase Migration Service (OMS) supports full synchronization only for tables with unique keys.
DDL synchronization applies only to BLOB topics.
During data synchronization, OMS allows you to drop a table before creating a new one. In other words, you can execute `DROP TABLE` and then execute `CREATE TABLE`. In OMS, you cannot create a new table by renaming a table. That is, you cannot execute `RENAME TABLE a TO a_tmp`.
OMS supports synchronization of data in the UTF8 and GBK character sets.
The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.
Data source identifiers and user accounts must be globally unique in OMS.
OMS supports the migration of only objects whose database name, table name, and column name are ASCII-encoded and do not contain special characters. The special characters are spaces, line breaks, and the following characters: . | " ' ` ( ) = ; / & \
OMS does not support a standby OceanBase database as the source.
DataHub has the following limitations:
DataHub limits the size of a message based on the cloud environment, usually to 1 MB.
DataHub sends messages in batches, with each batch sized no more than 4 MB. If you want a single message to meet the sending conditions, you can modify the `batch.size` parameter. By default, 20 messages are sent at a time within one second.
For more information about the limits and naming conventions of DataHub, see Limits of DataHub.
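As a rough illustration of the single-message limit described above, the following sketch checks a serialized payload against a 1 MB threshold before it is handed to a producer. The sizing approach and the constant are assumptions for illustration only; they are not part of OMS or the DataHub SDK.

```java
import java.nio.charset.StandardCharsets;

// Illustrative pre-check against the single-message size limit mentioned above
// (usually 1 MB, depending on the cloud environment). The payload is modeled as
// a plain string; real payload sizing depends on the chosen serialization method.
public class MessageSizeCheck {

    static final int MAX_MESSAGE_BYTES = 1 << 20; // 1 MB, per the stated limit

    static boolean fitsSingleMessage(String serializedPayload) {
        return serializedPayload.getBytes(StandardCharsets.UTF_8).length <= MAX_MESSAGE_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(fitsSingleMessage("small payload"));                   // true
        System.out.println(fitsSingleMessage("x".repeat(2 * MAX_MESSAGE_BYTES))); // false
    }
}
```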
Considerations
In a data synchronization project where the source is an OceanBase database and DDL synchronization is enabled, if a `RENAME` operation is performed on a table in the source, we recommend that you restart the project to avoid data loss during incremental synchronization.
During incremental synchronization from OceanBase Database V4.x, if the STORED attribute is not specified for a generated column, the values of this column synchronized to the destination become NULL for a Tuple topic. For a BLOB topic, this column will not be synchronized to the destination.
Take note of the following items when an updated row contains a LOB column. LOB columns store data of the following types: JSON, GIS, XML, user-defined type (UDT), and TEXT such as LONGTEXT and MEDIUMTEXT.
If the LOB column is updated, do not use the value stored in the LOB column before the `UPDATE` or `DELETE` operation.
If the LOB column is not updated, the value stored in the LOB column before and after the `UPDATE` or `DELETE` operation is NULL.
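A minimal consumer-side sketch of the rule above, assuming the before and after images of a change are exposed to your code as plain maps (the record layout and field names here are illustrative, not an OMS or DataHub API): never read a LOB value from the before image; take it from the after image only.

```java
import java.util.Map;

// Illustrative consumer-side rule for LOB columns (JSON, GIS, XML, UDT, TEXT types).
public class LobColumnRule {

    /** Returns the value to use for a LOB column of an UPDATE record. */
    static Object lobValue(Map<String, Object> beforeImage,
                           Map<String, Object> afterImage,
                           String lobColumn) {
        // The pre-image of a LOB column is either not to be used (column updated)
        // or NULL (column not updated), so LOB data is read from the after image only.
        return afterImage.get(lobColumn);
    }

    public static void main(String[] args) {
        Map<String, Object> before = Map.of("id", 1, "doc", "old-or-null");
        Map<String, Object> after  = Map.of("id", 1, "doc", "{\"k\":\"v\"}");
        System.out.println(lobValue(before, after, "doc")); // {"k":"v"}
    }
}
```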
The following table lists the data types supported by DataHub. These data types apply only to Tuple topics.
| Type | Description | Value range |
|---|---|---|
| BIGINT | An 8-byte signed integer. | -9223372036854775807 to 9223372036854775807 |
| DOUBLE | An 8-byte double-precision floating-point number. | -1.0 × 10^308 to 1.0 × 10^308 |
| BOOLEAN | A Boolean value. | True or False, true or false, 0 or 1 |
| TIMESTAMP | A timestamp. It is accurate to microseconds. | |
| STRING | A string that supports only UTF-8 encoding. A single STRING column supports a maximum of 2 MB. | |
| INTEGER | A 4-byte integer. | -2147483648 to 2147483647 |
| FLOAT | A 4-byte single-precision floating-point number. | -3.40292347 × 10^38 to 3.40292347 × 10^38 |
| DECIMAL | A decimal value. | -10^38 + 1 to 10^38 - 1 |
Supported DDL
Notice
DDL synchronization applies only to BLOB topics.
- `ALTER TABLE`
- `CREATE INDEX`
- `DROP INDEX`
- `TRUNCATE TABLE`
  In delayed deletion, the same transaction contains two identical `TRUNCATE TABLE` DDL statements. In this case, idempotence is implemented for downstream consumption.
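Because delayed deletion can place two identical `TRUNCATE TABLE` statements in the same transaction, downstream consumers benefit from an idempotent guard of their own. The following is only a sketch of one possible guard; the transaction ID and DDL text fields are assumed to be available from the consumed message and are illustrative.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative idempotent handling of DDL records: skip a DDL statement that has
// already been applied for the same transaction (for example, the duplicated
// TRUNCATE TABLE delivered by delayed deletion).
public class DdlDeduplicator {

    private final Set<String> applied = new HashSet<>();

    /** Returns true if the DDL should be applied, false if it is a duplicate. */
    public boolean shouldApply(String transactionId, String ddlText) {
        // Key the DDL by transaction + statement text so an identical statement
        // within the same transaction is applied only once.
        return applied.add(transactionId + "|" + ddlText);
    }

    public static void main(String[] args) {
        DdlDeduplicator dedup = new DdlDeduplicator();
        System.out.println(dedup.shouldApply("tx-1", "TRUNCATE TABLE t1")); // true
        System.out.println(dedup.shouldApply("tx-1", "TRUNCATE TABLE t1")); // false: duplicate
    }
}
```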
Data type mappings
A project that synchronizes data to a DataHub instance supports only the following data types: INTEGER, BIGINT, TIMESTAMP, FLOAT, DOUBLE, DECIMAL, STRING, and BOOLEAN.
If you create a topic of another type when you set topic mapping, data synchronization will fail.
The following tables describe the default mapping rules. We recommend that you keep the default mappings; changing them may cause errors.
Data type mappings between MySQL tenants of OceanBase Database and DataHub instances
| Data type in a MySQL tenant of OceanBase Database | Default mapped-to data type in a DataHub instance |
|---|---|
| BIT | STRING (Base64-encoded) |
| CHAR | STRING |
| BINARY | STRING (Base64-encoded) |
| VARBINARY | STRING (Base64-encoded) |
| INT | BIGINT |
| TINYTEXT | STRING |
| SMALLINT | BIGINT |
| MEDIUMINT | BIGINT |
| BIGINT | DECIMAL (This data type is used because the maximum unsigned value exceeds the maximum LONG value in Java.) |
| FLOAT | DECIMAL |
| DOUBLE | DECIMAL |
| DECIMAL | DECIMAL |
| DATE | STRING |
| TIME | STRING |
| YEAR | BIGINT |
| DATETIME | STRING |
| TIMESTAMP | TIMESTAMP (accurate to milliseconds) |
| VARCHAR | STRING |
| TINYBLOB | STRING (Base64-encoded) |
| BLOB | STRING (Base64-encoded) |
| TEXT | STRING |
| MEDIUMBLOB | STRING (Base64-encoded) |
| MEDIUMTEXT | STRING |
| LONGBLOB | STRING (Base64-encoded) |
| LONGTEXT | STRING |
Data type mappings between Oracle tenants of OceanBase Database and DataHub instances
| Data type in an Oracle tenant of OceanBase Database | Default mapped-to data type in a DataHub instance |
|---|---|
| CHAR | STRING |
| NCHAR | STRING |
| VARCHAR2 | STRING |
| NVARCHAR2 | STRING |
| CLOB | STRING |
| BLOB | STRING (Base64-encoded) |
| NUMBER | DECIMAL |
| BINARY_FLOAT | DECIMAL |
| BINARY_DOUBLE | DECIMAL |
| DATE | STRING |
| TIMESTAMP | STRING |
| TIMESTAMP WITH TIME ZONE | STRING |
| TIMESTAMP WITH LOCAL TIME ZONE | STRING |
| INTERVAL YEAR TO MONTH | STRING |
| INTERVAL DAY TO SECOND | STRING |
| RAW | STRING (Base64-encoded) |
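For source types that map to STRING (Base64-encoded) in the tables above (for example BIT, BINARY, VARBINARY, BLOB, and RAW), the consumer receives a Base64 string and must decode it to recover the original bytes. A minimal sketch using only the Java standard library:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Decode a column value delivered as a Base64-encoded STRING
// (for example, a BLOB, VARBINARY, or RAW column).
public class Base64ColumnDecoder {

    static byte[] decodeBinaryColumn(String base64Value) {
        return Base64.getDecoder().decode(base64Value);
    }

    public static void main(String[] args) {
        // "aGVsbG8=" is the Base64 encoding of the bytes of "hello".
        byte[] raw = decodeBinaryColumn("aGVsbG8=");
        System.out.println(new String(raw, StandardCharsets.UTF_8)); // hello
    }
}
```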
Supplemental properties
If you manually create a topic, add the following properties to the DataHub schema before you start a data synchronization project. If OMS automatically creates a topic and synchronizes the schema, OMS automatically adds the following properties.
Notice
The following table applies only to Tuple topics.
| Parameter | Type | Description |
|---|---|---|
| oms_timestamp | STRING | The time when the change was made. |
| oms_table_name | STRING | The new table name of the source table. |
| oms_database_name | STRING | The new database name of the source database. |
| oms_sequence | STRING | The timestamp at which data is synchronized to the process memory. The value of this field consists of time and five incremental digits. A clock rollback will result in data inconsistency. |
| oms_record_type | STRING | The change type. Valid values: UPDATE, INSERT, and DELETE. |
| oms_is_before | STRING | Specifies whether the data is the original data when the change type is UPDATE. Y indicates that the data is the original data. |
| oms_is_after | STRING | Specifies whether the data is the modified data when the change type is UPDATE. Y indicates that the data is the modified data. |
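To show how these properties are typically consumed, the following sketch routes a record by `oms_record_type` and tells the original data of an UPDATE from the modified data by `oms_is_before` and `oms_is_after`. The record is modeled as a plain map of the supplemental properties; this is not the DataHub SDK record type.

```java
import java.util.Map;

// Illustrative routing of a Tuple record based on the OMS supplemental properties.
public class OmsRecordRouter {

    static void handle(Map<String, String> record) {
        String type = record.get("oms_record_type");   // UPDATE, INSERT, or DELETE
        String table = record.get("oms_table_name");
        switch (type) {
            case "INSERT":
            case "DELETE":
                System.out.println(type + " on " + table);
                break;
            case "UPDATE":
                // Per the table above: oms_is_before = Y flags the original data,
                // oms_is_after = Y flags the modified data.
                if ("Y".equals(record.get("oms_is_before"))) {
                    System.out.println("UPDATE original data on " + table);
                } else if ("Y".equals(record.get("oms_is_after"))) {
                    System.out.println("UPDATE modified data on " + table);
                }
                break;
            default:
                System.out.println("Unknown change type: " + type);
        }
    }

    public static void main(String[] args) {
        handle(Map.of("oms_record_type", "UPDATE",
                      "oms_is_after", "Y",
                      "oms_table_name", "t1",
                      "oms_database_name", "db1",
                      "oms_timestamp", "1700000000000"));
    }
}
```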
Procedure
Create a data synchronization project.
Log on to the OMS console.
In the left-side navigation pane, click Data Synchronization.
On the Data Synchronization page, click Create Synchronization Project in the upper-right corner.
On the Select Source and Destination page, configure the parameters.
| Parameter | Description |
|---|---|
| Synchronization Project Name | We recommend that you set it to a combination of digits and letters. It must not contain any spaces and cannot exceed 64 characters in length. |
| Tag (Optional) | Click the field and select a target tag from the drop-down list. You can also click Manage Tags to create, modify, and delete tags. For more information, see Manage data synchronization projects by using tags. |
| Source | If you have created an OceanBase data source, which can be a physical data source or an ApsaraDB for OceanBase data source, select it from the drop-down list. If not, click New Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information about the parameters, see Create a physical OceanBase data source or Create a public cloud OceanBase data source. |
| Destination | If you have created a DataHub data source, select it from the drop-down list. If not, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information about the parameters, see Create a DataHub data source. |
Click Next. On the Select Synchronization Type page, select the synchronization type for the current data synchronization project.
Valid values: Schema Synchronization, Full Synchronization, and Incremental Synchronization. Schema synchronization creates a topic. Options for Incremental Synchronization are DML Synchronization and DDL Synchronization.
Options for DML Synchronization are `Insert`, `Delete`, and `Update`, which are all selected by default. For more information, see DML filtering.
If you select Synchronize DDL here, you can select only topics of the BLOB type on the Select Synchronization Objects page. For more information, see Synchronize DDL operations.
(Optional) Click Next.
If you have selected Incremental Synchronization without configuring the required parameters for the source OceanBase database, the More About Data Sources dialog box appears to prompt you to configure the parameters. For more information about the parameters, see Create a physical OceanBase data source or Create a public cloud OceanBase data source.
After you configure the parameters, click Test Connection. After the test succeeds, click OK.
Click Next. On the Select Synchronization Objects page, select the topic type and scope for synchronization.
Available topic types are Tuple and BLOB. Tuple topics do not support DDL synchronization; they support structured records similar to database records, where each record contains multiple columns. BLOB topics support only a binary block as a record, and data is Base64-encoded for transmission. For more information, visit the documentation center of DataHub.
After you select the topic type for synchronization, perform the following operations:
In the left-side pane, select the objects to be synchronized.
Click >.
Select a mapping method.
To synchronize a single Tuple table or a single or multiple BLOB tables, select the required mapping method and click OK in the Map Object to Topic dialog box.
If you do not select Schema Synchronization when you specify the synchronization type, you can select only Existing Topics here. If you have selected Schema Synchronization when you specify the synchronization type, you can select only one mapping method to create or select a topic.
For example, if you have selected Schema Synchronization, when you use both the Create Topic and Select Topic mapping methods or rename the topic, a precheck error will be returned due to option conflicts.
| Parameter | Description |
|---|---|
| Create Topic | Enter the name of the new topic in the text box. The topic name can contain letters, digits, and underscores (_) and must start with a letter. It must not exceed 128 characters in length. |
| Select Topic | OMS allows you to query DataHub topics. You can click Select Topic, and then find and select a topic for synchronization from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears. |
| Batch Generate Topics | The format for generating topics in batches is Topic_${Database Name}_${Table Name}. For an illustration of the generated names, see the naming sketch after these steps. |
If you select Create Topic or Batch Generate Topics, after the schema migration succeeds, you can query the created topics on the DataHub side. By default, the number of data shards is 2 and the data expiration time is 7 days. These parameters cannot be modified. If the topics do not meet your business needs, you can create topics in the destination database as needed.
To synchronize multiple Tuple tables, click OK in the dialog box that appears.
If you have selected the Tuple type without selecting Schema Synchronization, and you select multiple tables and a single topic and then click OK in the Map Object to Topic dialog box, the selected tables are displayed under the topic in the right-side pane, but only one of the tables can be synchronized. When you click Next, a prompt appears indicating that only one-to-one mapping is supported between Tuple topics and tables.
Click OK.
Note
OMS automatically filters out unsupported tables.
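For reference, the sketch below simply renders the Batch Generate Topics naming format, Topic_${Database Name}_${Table Name}, for a couple of hypothetical databases and tables; it is not an OMS API.

```java
// Illustrative rendering of the Batch Generate Topics naming pattern
// Topic_${Database Name}_${Table Name}.
public class TopicNameBuilder {

    static String topicName(String databaseName, String tableName) {
        return "Topic_" + databaseName + "_" + tableName;
    }

    public static void main(String[] args) {
        System.out.println(topicName("db1", "orders"));    // Topic_db1_orders
        System.out.println(topicName("db1", "customers")); // Topic_db1_customers
    }
}
```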
OMS allows you to import objects from text files, set row filters, sharding columns, and column filters for the target object, and remove a single object or all objects. Objects in the destination database are listed in the structure of Topic > Database > Table.
Import objects
- In the list on the right, click Import Objects in the upper-right corner.
- In the dialog box that appears, click OK.
  Notice: This operation overwrites previous selections. Proceed with caution.
- In the Import Synchronization Objects dialog box, import the objects to be synchronized. You can import CSV files to rename databases or tables and to set row filtering conditions. For more information, see Download and import the settings of synchronization objects.
- Click Validate.
- After the validation is passed, click OK.
Change topics
When the topic type is set to BLOB, you can change the topic for objects in the destination. For more information, see Change topics.
Configure settings
OMS allows you to configure row-based filtering, select sharding columns, and select the columns to be synchronized.
- In the list on the right, move the pointer over the object that you want to set.
- Click Settings.
- In the Settings dialog box, you can perform the following operations:
  - In the Row Filters section, specify a standard SQL `WHERE` clause to filter data by row. For more information, see Use SQL conditions to filter data.
  - Select the sharding columns that you want to use from the Sharding Columns drop-down list. You can select multiple fields as sharding columns. This parameter is optional. Unless otherwise specified, select the primary key as the sharding columns. If the primary keys are not load-balanced, select load-balanced fields with unique identifiers as sharding columns to avoid potential performance issues. Sharding columns can be used for the following purposes (see the sketch after this list):
    - Load balancing: Threads used for sending messages can be recognized based on the sharding columns if the destination table supports concurrent writes.
    - Orderliness: OMS ensures that messages are received in order if the values of the sharding columns are the same. The order determines the sequence in which DML statements are applied to a column.
  - In the Select Columns section, select the columns to be synchronized. For more information, see Column filtering.
- Click OK.
Remove one or all objects
During data mapping, OMS allows you to remove one or more selected objects to be migrated or synchronized to the destination.
- Remove a single synchronization object: In the list on the right of the selection area, hover over the target object and click Remove to remove the synchronization object.
- Remove all synchronization objects: In the list on the right of the selection area, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
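The load-balancing and orderliness purposes of sharding columns described in the Configure settings operation above follow from how a record's shard is chosen: records with equal sharding-column values hash to the same shard and are therefore received in order, while different values spread across shards. The sketch below only illustrates this idea; the hash function and shard selection are simplified assumptions, not the algorithm OMS actually uses.

```java
import java.util.List;
import java.util.Objects;

// Conceptual illustration of shard selection from sharding-column values:
// equal key values always map to the same shard (preserving order), while
// different keys spread across shards (load balancing).
public class ShardSelector {

    static int selectShard(List<?> shardingColumnValues, int shardCount) {
        int hash = Objects.hash(shardingColumnValues.toArray());
        return Math.floorMod(hash, shardCount);
    }

    public static void main(String[] args) {
        int shards = 4;
        // Two changes to the same primary key land on the same shard -> ordered.
        System.out.println(selectShard(List.of(1001), shards));
        System.out.println(selectShard(List.of(1001), shards));
        // A different key may land on another shard -> load balancing.
        System.out.println(selectShard(List.of(2002), shards));
    }
}
```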
Click Next. On the Synchronization Options page, specify the following parameters.
Full synchronization
The following table describes the full synchronization parameters, which are displayed only if you have selected Full Synchronization on the Select Synchronization Type page.
| Parameter | Description |
|---|---|
| Full Synchronization Resource Configuration | You can select Small, Medium, or Large to use the corresponding default values of Read Concurrency, Write Concurrency, and Memory. You can also customize the resource configuration for full synchronization. Through the resource configuration of the Full-Import component, you can limit the resource consumption of the project in the full synchronization phase. Notice: in the case of custom configurations, the minimum value is 1 and only integers are supported. |
Incremental synchronization
The following table describes the incremental synchronization parameters, which are displayed only if you have selected Incremental Synchronization on the Select Synchronization Type page.
| Parameter | Description |
|---|---|
| Incremental Log Pull Resource Configuration | You can select Small, Medium, or Large to use the corresponding default value of Memory. You can also customize the resource configuration for incremental log pull. Through the resource configuration of the Store component, you can limit the resource consumption of the project during log pull in the incremental synchronization phase. Notice: in the case of custom configurations, the minimum value is 1 and only integers are supported. |
| Incremental Data Write Resource Configuration | You can select Small, Medium, or Large to use the corresponding default values of Write Concurrency and Memory. You can also customize the resource configuration for incremental data write. Through the resource configuration of the Incr-Sync component, you can limit the resource consumption of the project during data writes in the incremental synchronization phase. Notice: in the case of custom configurations, the minimum value is 1 and only integers are supported. |
| Incremental Record Retention Time | The duration for which parsed incremental files are cached in OMS. A longer retention period results in more disk space occupied by the Store component. |
| Incremental Synchronization Start Timestamp | If you have selected Full Synchronization as the synchronization type, the default value of this parameter is the project startup time and cannot be modified. If you have not selected Full Synchronization, set this parameter to a specific point in time, which is the current system time by default. For more information, see Set an incremental synchronization timestamp. |
Advanced options
| Parameter | Description |
|---|---|
| Serialization Method | The message format for synchronizing data to the DataHub instance. Valid values: Default, Canal, Dataworks (version 2.0 supported), SharePlex, DefaultExtendColumnType, Debezium, DebeziumFlatten, and DebeziumSmt. For more information, see Data formats. Notice: this parameter is available only when the topic type is set to BLOB on the Select Synchronization Type page. Only MySQL tenants of OceanBase Database support Debezium, DebeziumFlatten, and DebeziumSmt. |
| Partitioning Rules | The rule for synchronizing data from the source database to a DataHub topic. Valid values: Hash and Table. Hash indicates that OMS uses a hash algorithm to select the shard of a DataHub topic based on the value of the primary key or sharding column. Table indicates that OMS delivers all data in a table to the same partition and uses the table name as the hash key. We recommend that you select Table to ensure consistent consumption of DDL and DML statements by downstream applications. Notice: if you select DDL Synchronization on the Select Synchronization Type page, the partitioning rule can only be set to Table. |
| Business System Identification (Optional) | Identifies the source business system of the data. The business system identifier consists of 1 to 20 characters. |
If the parameter settings on the page cannot meet your requirements, you can click Parameter Configuration in the lower part of the page to configure more specific settings. You can also reference an existing project or component template.
Click Precheck.
During the precheck, OMS checks the column names and column types, and checks whether values can be null. OMS does not check the value length or default values. If an error is returned during the precheck, you can perform the following operations:
Identify and troubleshoot the problem and then perform the precheck again.
Click Skip in the Actions column of a failed precheck item. In the dialog box that appears, you can view the prompt for the consequences of the operation and click OK.
Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data synchronization project. You can start the project later as needed.
OMS allows you to modify the synchronization objects while the data synchronization project is running. For more information, see View and modify synchronization objects. After a data synchronization project is started, the synchronization objects are processed based on the selected synchronization types. For more information, see the "View synchronization details" section of the View details of a data synchronization project topic.
If the data synchronization project encounters a running exception due to a network failure or slow startup of processes, you can click Recover on the Synchronization Projects page or on the Details page of the synchronization project.