Alibaba Cloud DataHub is a streaming data processing platform for publishing, subscribing to, and distributing streaming data, enabling easy analysis and application based on streaming data. This topic describes how to use OceanBase Migration Service (OMS) to synchronize data from an Oracle database to a DataHub instance.
Limitations
OMS does not support incremental synchronization of a table if all columns in the table are of the LOB type.
The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.
The Shard Columns parameter must be set for tables without a primary key in the Oracle database; otherwise, they cannot be synchronized to the DataHub instance. For a query that finds such tables, see the sketch after this list.
OMS cannot parse the actual values of the generated columns used in the Oracle database. Therefore, when data is synchronized to the DataHub instance, the corresponding values are NULL.
Data source identifiers and user accounts must be globally unique in OMS.
OMS supports the synchronization of only objects whose database name, table name, and column name are ASCII-encoded and do not contain special characters. The special characters are line breaks, spaces, and the following characters: . | " ' ` ( ) = ; / & \
OMS does not support the synchronization of database objects, such as schemas, tables, and columns, whose names exceed 30 bytes in length from an Oracle database of version 12c or later. For more information, see How do I synchronize an Oracle database object whose name exceeds 30 bytes in length?
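For the Shard Columns requirement above, the following illustrative query (standard Oracle dictionary views; the results depend on your schema) lists tables without a primary key, which need sharding columns configured before synchronization:

```sql
-- Illustrative query: find tables in the current schema that have no
-- primary key; these require the Shard Columns parameter in OMS.
SELECT t.table_name
FROM user_tables t
WHERE NOT EXISTS (
  SELECT 1
  FROM user_constraints c
  WHERE c.table_name = t.table_name
    AND c.constraint_type = 'P'
);
```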
Considerations
When the Oracle database is in standby database only or primary/standby databases mode, if the number of instances that run in the primary Oracle database differs from that in the standby database, incremental logs of some instances may not be pulled. You need to manually set the parameters of the Store component to specify the instances for which incremental logs are to be pulled from the standby database. The procedure is as follows:
Stop the Store component as soon as it starts.
On the Update Configuration page of the Store component, add the deliver2store.logminer.instance_threads parameter and specify the instances for which logs are to be pulled. Separate multiple threads with vertical bars (|), for example, 1|2|3. For more information about how to update the Store component, see Archive logs are lost when OMS synchronizes incremental data from a standby Oracle database.
Restart the Store component.
Wait 5 minutes, and then run the grep 'log entries from' connector/connector.log command to check the instances for which logs are pulled. The thread field in the output indicates the instances for which logs are pulled.
If you need to synchronize incremental data from an Oracle database, we recommend that you restrict the size of a single archive file in the Oracle database to within 2 GB. (For a query that checks recent archive file sizes, see the sketch after this list.) An excessively large archive file may incur the following risks:
The log pulling time does not increase in proportion to the size of a single archive file; it grows much more sharply.
When the Oracle database is in standby database only or primary/standby databases mode, the incremental data is pulled from the standby database. In this case, only archive files can be pulled, and an archive file can be pulled only after it is fully generated. A larger archive file therefore means a longer delay before the file is processed and a longer time to process it.
Under the same data pulling concurrency, a larger archive file requires more memory for the Store component.
The archive files must be stored for more than two days in the Oracle database. Otherwise, in the case of a sharp increase in the number of archive files or an exception in the Store component, restoration may fail due to the lack of required archive files.
If a DML operation is performed to exchange primary key values in the source Oracle database, errors occur when OMS parses the logs, which causes data loss when data is synchronized to the target database. Here is a sample DML statement that exchanges primary key values:

```sql
UPDATE test SET c1 = (CASE WHEN c1 = 1 THEN 2 WHEN c1 = 2 THEN 1 END) WHERE c1 IN (1, 2);
```

If the clocks between nodes or between the client and the server are out of synchronization, the latency reported during incremental synchronization may be inaccurate. For example, if the clock is earlier than the standard time, the reported latency can be negative; if the clock is later than the standard time, the reported latency can be positive.
When data transmission is resumed for a task, some data (transmitted within the last minute) may be duplicated in the DataHub instance. Therefore, deduplication is required in downstream applications.
If LogMiner generates invalid time data, such as 13621-11-11 11:32:08, during data synchronization, the Store component generates an error.
In this case, you can perform the following operations: choose OPS & Monitoring > Component > Store. On the page that appears, click Update for the target Store component, add the deliver2store.logminer.replace_invalid_date parameter, and set it to true. Then, skip the abnormal data in the data synchronization task.
If your application is not sensitive to DATE data, you can set the deliver2store.logminer.replace_invalid_date parameter to true so that the reader can continue running. When the Store component generates data, it converts abnormal DATE values into the date when the logs were written to disk.
If you select only Incremental Synchronization when you create a data synchronization task, OMS requires that the local incremental logs in the source database be retained for more than 48 hours.
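As a companion to the archive file size recommendation above, the following illustrative query (v$archived_log is a standard Oracle dynamic performance view) checks the sizes of recently generated archive files:

```sql
-- Illustrative check: sizes of archive files generated in the last day.
-- Files far larger than 2 GB indicate the risks described above.
SELECT name,
       ROUND(blocks * block_size / 1024 / 1024) AS size_mb,
       completion_time
FROM v$archived_log
WHERE completion_time > SYSDATE - 1
ORDER BY completion_time DESC;
```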
Data type mappings
Notice
Data of the LONG, ROWID, BFILE, LONG RAW, XMLType, UROWID, UNDEFINED, and UDT types cannot be synchronized.
| Data type in an Oracle database | Default mapped-to data type in a DataHub instance |
|---|---|
| CHAR | STRING |
| NCHAR | STRING |
| VARCHAR2 | STRING |
| NVARCHAR2 | STRING |
| CLOB | STRING |
| BLOB | STRING (Base64-encoded) |
| NUMBER | DECIMAL |
| BINARY_FLOAT | DECIMAL |
| BINARY_DOUBLE | DECIMAL |
| DATE | STRING |
| TIMESTAMP | STRING |
| TIMESTAMP WITH TIME ZONE | STRING |
| TIMESTAMP WITH LOCAL TIME ZONE | STRING |
| INTERVAL YEAR TO MONTH | STRING |
| INTERVAL DAY TO SECOND | STRING |
| RAW | STRING (Base64-encoded) |
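To make the mappings concrete, here is a hypothetical Oracle table (all names are illustrative); the comments show the default DataHub data type each column maps to, per the table above:

```sql
-- Hypothetical source table; comments show the default mapped-to
-- DataHub data type from the table above.
CREATE TABLE demo_orders (
  order_id   NUMBER PRIMARY KEY,   -- DECIMAL
  customer   VARCHAR2(64),         -- STRING
  note       CLOB,                 -- STRING
  attachment BLOB,                 -- STRING (Base64-encoded)
  created_at TIMESTAMP,            -- STRING
  weight     BINARY_DOUBLE         -- DECIMAL
);
```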
Check and modify the configurations of the source Oracle database
Check the character set configurations
OMS allows you to synchronize data from the source Oracle database based on the AL32UTF8, AL16UTF16, ZHS16GBK, or GB18030 character set.
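One way to verify the character sets is a standard Oracle dictionary query run in the source database:

```sql
-- Check the database character set and national character set; the
-- database character set must be one of the supported sets listed above
-- (AL16UTF16 applies to the national character set).
SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```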
Check and modify the system configurations of the Oracle instance
Enable archive logging and supplemental logging for the source Oracle database.
In the Oracle database, perform the following operations as the SYS user.
Execute the following statement to check whether log_mode is set to archivelog and whether the supplemental_log parameters are set to yes or implicit:

```sql
SELECT log_mode, supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_min
FROM v$database;
```

If not, execute the following statements to modify the configuration of the Oracle database:

```sql
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
```

Restart the Oracle database.
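If log_mode returns NOARCHIVELOG, archive logging must be enabled first. A minimal sketch, assuming you run it as the SYS user in SQL*Plus (this restarts the instance, so plan a maintenance window):

```sql
-- Enable archive log mode; this requires an instance restart.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```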
Supplemental properties
If you manually create a topic, add the following properties to the DataHub schema before you start a data synchronization task. If OMS automatically creates a topic and synchronizes the schema, OMS automatically adds the following properties.
Notice
The following table applies only to Tuple topics.
| Name | Data type | Description |
|---|---|---|
| oms_timestamp | STRING | The time when the change was made. |
| oms_table_name | STRING | The table name of the source table. |
| oms_database_name | STRING | The database name of the source database. |
| oms_sequence | STRING | The sequence number of the change, which increases monotonically on a single server. |
| oms_record_type | STRING | The change type. Valid values: UPDATE, INSERT, and DELETE. |
| oms_is_before | STRING | Specifies whether the data is the original data when the change type is UPDATE. The value Y indicates that the data is the original data. |
| oms_is_after | STRING | Specifies whether the data is the modified data when the change type is UPDATE. The value Y indicates that the data is the modified data. |
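For intuition, a hypothetical UPDATE on the source (table and column names are illustrative, and the output below is a sketch rather than actual OMS records) produces a pair of records in a Tuple topic that are distinguished by these properties:

```sql
-- Hypothetical source change:
UPDATE demo_orders SET customer = 'new_name' WHERE order_id = 1;
-- Expected records in the Tuple topic (illustrative):
--   oms_record_type = UPDATE, oms_is_before = Y  ->  customer = 'old_name'
--   oms_record_type = UPDATE, oms_is_after  = Y  ->  customer = 'new_name'
```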
Procedure
Create a data synchronization task.

Log in to the OMS console.
In the left-side navigation pane, click Data Synchronization.
On the Data Synchronization page, click Create Synchronization Task in the upper-right corner.
On the Select Source and Target page, configure the parameters.
| Parameter | Description |
|---|---|
| Task Name | We recommend that you set it to a combination of digits and letters. It must not contain any spaces and cannot exceed 64 characters in length. |
| Source | If you have created an Oracle data source, select it from the drop-down list. Otherwise, click New Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information about the parameters, see Create an Oracle data source. |
| Target | If you have created a DataHub data source, select it from the drop-down list. Otherwise, click New Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information about the parameters, see Create a DataHub data source. |
| Tag | Click the field and select a tag from the drop-down list. This parameter is optional. You can also click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data synchronization tasks. |

Click Next. On the Select Synchronization Type page, specify the synchronization types for the current data synchronization task.
The supported synchronization types are Schema Synchronization and Incremental Synchronization. Schema Synchronization creates a topic. Incremental Synchronization supports only the DML Synchronization option, and the supported DML operations are INSERT, DELETE, and UPDATE. Select the options based on your business needs. For more information, see Configure DDL/DML synchronization.
Click Next. On the Select Synchronization Objects page, select the target topic type and objects.
Available topic types are Tuple and BLOB. Tuple topics do not support DDL synchronization; their records are similar to database records, and each record contains multiple columns. BLOB topics support only a binary block as a record, and data is Base64-encoded for transmission. For more information, visit the documentation center of DataHub.
After you select the topic type for synchronization, you can select Specify Objects or Match Rules to specify the synchronization objects. The following procedure describes how to specify synchronization objects by using the Specify Objects option. For information about the procedure for specifying synchronization objects by using the Match Rules option, see the Configure matching rules for data migration or synchronization from a database to a Message Queue instance section in the Configure matching rules topic.
In the Select Synchronization Objects section, select Specify Objects.
In the left-side pane, select the objects to be synchronized.
Click >.

Select a mapping method.
To synchronize one Tuple table or one or more BLOB tables, select the required mapping method and click OK in the Map Object to Topic dialog box.

If you did not select Schema Synchronization as the synchronization type, you can select only Existing Topics here. If you have selected Schema Synchronization when you specify the synchronization type, you can select only one mapping method to create or select a topic.
For example, if you have selected Schema Synchronization and you use both the Create Topic and Select Topic mapping methods, or you rename the topic, a precheck error is returned because the options conflict.
| Parameter | Description |
|---|---|
| Create Topic | Enter the name of the new topic in the text box. The topic name can contain letters, digits, and underscores (_) and must start with a letter. It must not exceed 128 characters in length. |
| Select Topic | OMS allows you to query DataHub topics. You can click Select Topic, and then find and select a topic for synchronization from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears. |
| Batch Generate Topics | The format for generating topics in batches is Topic_${Database Name}_${Table Name}. |

If you select Create Topic or Batch Generate Topics, after the schema migration succeeds, you can query the created topics on the DataHub side. By default, the number of data shards is 2 and the data expiration time is 7 days. These parameters cannot be modified. If the topics do not meet your business needs, you can create topics in the target database as needed.
To synchronize multiple Tuple tables, click OK in the dialog box that appears.

If you have selected a Tuple topic and multiple tables without selecting Schema Synchronization, you must select an existing topic and click OK in the Map Object to Topic dialog box.

Click OK.
Note
OMS automatically filters out unsupported tables. For information about the SQL statements for querying table objects, see SQL statements for querying table objects.
OMS allows you to import objects from text files, set sharding columns for tables in the target database, and remove a single object or all objects. Objects in the target database are listed in the structure of Topic > Database > Table.
**Import objects**
1. In the list on the right, click Import Objects in the upper-right corner.
2. In the dialog box that appears, click OK.
Notice: This operation will overwrite previous selections. Proceed with caution.
3. In the Import Synchronization Objects dialog box, import the objects to be synchronized. You can import CSV files to configure synchronization objects. For more information, see Download and import the settings of synchronization objects.
4. Click Validate.
5. After the validation succeeds, click OK.

**Change topics**
When the topic type is set to BLOB, you can change the topic for objects in the target database. For more information, see Change the topic.

**Configure settings**
1. In the list on the right, move the pointer over the object that you want to set.
2. Click Settings.
3. In the Shard Columns drop-down list, select the sharding columns that you want to use. You can select multiple fields as sharding columns. This parameter is optional.
Unless otherwise specified, select the primary key as the sharding columns. If the primary key is not load-balanced, select load-balanced fields with unique identifiers as sharding columns to avoid potential performance issues. (For a query that lists unique-index columns as sharding column candidates, see the sketch after this list.) Sharding columns can be used for the following purposes:
- Load balancing: If the target table supports concurrent writes, the threads used for sending messages can be determined based on the sharding columns.
- Orderliness: OMS ensures that messages with the same sharding column values are received in order, that is, in the sequence in which the DML statements were executed.
4. In the Select Columns section, select the columns to be synchronized. For more information, see Column filtering.
5. Click OK.

**Remove one or all objects**
OMS allows you to remove a single object or all objects to be synchronized to the target database during data mapping.
- Remove a single synchronization object: In the list on the right, move the pointer over the object that you want to remove, and click Remove.
- Remove all synchronization objects: In the list on the right, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
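As a possible aid when picking sharding columns for a table whose primary key is absent or skewed, this illustrative query (standard Oracle dictionary views; the table name DEMO_ORDERS is a placeholder) lists the columns of unique indexes, which are natural sharding column candidates:

```sql
-- Illustrative helper: list unique-index columns for a table; these are
-- candidates for Shard Columns when the primary key is absent or skewed.
SELECT ic.index_name, ic.column_name, ic.column_position
FROM user_ind_columns ic
JOIN user_indexes i ON i.index_name = ic.index_name
WHERE i.table_name = 'DEMO_ORDERS'
  AND i.uniqueness = 'UNIQUE'
ORDER BY ic.index_name, ic.column_position;
```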
Click Next. On the Synchronization Options page, configure the following parameters.
Incremental synchronization
The following parameters are displayed only if you have selected Incremental Synchronization on the Select Synchronization Type page.
| Parameter | Description |
|---|---|
| Incremental Log Pull Resource Configuration | You can select Small, Medium, or Large to use the corresponding default value of Memory, or customize the resource configuration for incremental log pull. By setting the resource configuration for the Store component, you can limit the resource consumption of the task during log pull in the incremental synchronization phase. Notice: In the case of custom configurations, the minimum value is 1, and only integers are supported. |
| Incremental Data Write Resource Configuration | You can select Small, Medium, or Large to use the corresponding default values of Write Concurrency and Memory, or customize the resource configuration for incremental data write. By setting the resource configuration for the Incr-Sync component, you can limit the resource consumption of the task during data writes in the incremental synchronization phase. Notice: In the case of custom configurations, the minimum value is 1, and only integers are supported. |
| Incremental Record Retention Time | The duration for which incremental parsed files are cached in OMS. A longer retention period results in more disk space occupied by the Store component. |
| Incremental Synchronization Start Timestamp | The timestamp after which data is to be synchronized. The default value is the current system time. For more information, see Set an incremental synchronization timestamp. |

Advanced options
| Parameter | Description |
|---|---|
| Serialization Method | The message format for synchronizing data to a DataHub instance. Valid values: Default, Canal, Dataworks (version 2.0 supported), SharePlex, and DefaultExtendColumnType. For more information, see Data formats. Notice: This parameter is available only when the topic type is set to BLOB on the Select Synchronization Type page. |
| Partitioning Rules | The rule for synchronizing data from the source database to a DataHub topic. Valid values: Hash and Table. Hash indicates that OMS uses a hash algorithm to select the shard of a DataHub topic based on the value of the primary key or sharding column. Table indicates that OMS delivers all data in a table to the same partition and uses the table name as the hash key. |
| Business System Identification (Optional) | Identifies the source business system of data. This parameter is displayed only when Serialization Method is set to Dataworks. The business system identifier consists of 1 to 20 characters. |
If the parameter settings on the page cannot meet your requirements, you can click Parameter Configuration in the lower part of the page to configure more specific settings. You can also reference an existing task or component template.

Click Precheck.
During the precheck, OMS checks its connection to the target data source. If an error is returned during the precheck, you can perform the following operations:
Identify and troubleshoot the problem and then perform the precheck again.
Click Skip in the Actions column of a failed precheck item. In the dialog box that prompts the consequences of the operation, click OK.
Click Start Task. If you do not need to start the task now, click Save to go to the details page of the data synchronization task. You can start the task later as needed.
OMS allows you to modify the synchronization objects when the data synchronization task is running. For more information, see View and modify synchronization objects. After the data synchronization task is started, it will be executed based on the selected synchronization types. For more information, see the View synchronization details section in the View details of a data synchronization task topic.
If the data synchronization task encounters an execution exception due to a network failure or slow startup of processes, you can click Recover on the Synchronization Tasks or Details page of the synchronization task.