OceanBase Migration Service V4.2.3 (Enterprise Edition)


Synchronize data from an Oracle database to a DataHub instance

Last Updated: 2024-11-20 07:06:22

Alibaba Cloud DataHub is a streaming data processing platform that lets you publish, subscribe to, and distribute streaming data, making it easy to analyze streaming data and build applications on top of it.

Limitations

  • OceanBase Migration Service (OMS) does not support incremental synchronization of a table in which all columns are of LOB types.

  • For tables in the Oracle database that do not have a primary key, you must set the Shard Columns parameter for synchronization to the DataHub instance to succeed (a query for finding such tables is sketched after this list).

  • OMS cannot parse the actual values of the generated columns used in the Oracle database. Therefore, when data is synchronized to the DataHub instance, the corresponding values are NULL.

  • Data source identifiers and user accounts must be globally unique in OMS.

  • OMS supports the synchronization of only objects whose database names, table names, and column names are ASCII-encoded and contain no special characters. The special characters are . | " ' ` ( ) = ; / & and the newline character (\n).

  • OMS does not support the synchronization of database objects, such as schemas, tables, and columns, whose names exceed 30 bytes in length from an Oracle database of version 12c or later.
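
Before you configure a project, you can check the source schemas against these limitations by querying the Oracle data dictionary. The following is a minimal sketch; the schema name APP is a placeholder for the schemas that you plan to synchronize.

    -- Tables without a primary key: set the Shard Columns parameter for these tables.
    SELECT t.owner, t.table_name
      FROM dba_tables t
     WHERE t.owner = 'APP'
       AND NOT EXISTS (SELECT 1
                         FROM dba_constraints c
                        WHERE c.owner = t.owner
                          AND c.table_name = t.table_name
                          AND c.constraint_type = 'P');

    -- Objects whose names exceed 30 bytes: such objects cannot be synchronized.
    SELECT owner, object_type, object_name
      FROM dba_objects
     WHERE owner = 'APP'
       AND LENGTHB(object_name) > 30;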

Considerations

  • When the Oracle database runs in standby-only or primary/standby mode, and the number of instances that run on the primary Oracle database differs from that on the standby database, incremental logs of some instances may not be pulled. In this case, manually set the parameters of the Store component to specify the instances whose incremental logs are to be pulled from the standby database. The procedure is as follows (a sketch of queries for listing the redo threads and checking archive file sizes appears after this list):

    1. Stop the Store component as soon as it starts.

    2. On the Update Configuration page of the Store component, add the deliver2store.logminer.instance_threads parameter and specify the instances for which logs are to be pulled.

      Separate multiple threads with a vertical bar (|), for example, 1|2|3. For more information about how to update a store component, see Update a store component.

    3. Restart the Store component.

    4. Wait for five minutes, and then run the grep 'log entries from' connector/connector.log command. The thread field in the output indicates the instances for which logs are pulled.

  • If you need to synchronize incremental data from an Oracle database, we recommend that you restrict the size of a single archive file in the Oracle database within 2 GB. An excessively large archive file may incur the following risks:

    • The log pulling time does not grow in proportion to the size of a single archive file; it grows much more sharply.

    • When the Oracle database is in standby database only or primary/standby databases mode, the incremental data is pulled from the standby database. In this case, only archive files can be pulled. An archive file is pulled after it is generated. A larger archive file means a longer delay before the archive file is processed, and a longer time for processing the archive file.

    • In addition, a larger single archive file requires more memory for the Store component at the same data pulling concurrency.

  • Archive files must be retained in the Oracle database for more than two days. Otherwise, if the number of archive files increases sharply or the Store component encounters an exception, recovery may fail because the required archive files are no longer available.

  • If a DML operation exchanges primary key values in the source Oracle database, OMS encounters errors when parsing the logs, which causes data loss at the destination. Here is a sample DML statement that exchanges primary key values:

    UPDATE test SET c1=(CASE WHEN c1=1 THEN 2 WHEN c1=2 THEN 1 END) WHERE c1 IN (1,2);
    
  • If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization.

    For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.

  • When data transmission is resumed for a project, some data (transmitted within the last minute) may be duplicate in the DataHub instance. Therefore, data deduplication is required in downstream applications.

  • If LogMiner generates invalid time data, such as 13621-11-11 11:32:08, during data synchronization, the Store component generates an error.

    In this case, you can perform the following operations: Choose OPS & Monitoring > Components > Store. On the page that appears, click Update for the target store, add the deliver2store.logminer.replace_invalid_date parameter, and set it to true. Then skip the data in the data synchronization project.

    If your business is not sensitive to DATE data, you can set the deliver2store.logminer.replace_invalid_date parameter to true so that the reader can continue running. When the Store component generates data, it converts the abnormal DATE values to the date when the logs were written to the disk.
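
To apply these considerations, you can inspect the source database directly. The following is a minimal sketch of queries for listing the redo threads (useful when you set the deliver2store.logminer.instance_threads parameter) and for checking whether recently generated archive files stay within the recommended 2 GB.

    -- Redo threads (instances) of the source database.
    SELECT thread#, status, instance FROM v$thread;

    -- Sizes of archive files generated in the last day, in MB.
    SELECT thread#, sequence#,
           ROUND(blocks * block_size / 1024 / 1024) AS size_mb,
           completion_time
      FROM v$archived_log
     WHERE completion_time > SYSDATE - 1
     ORDER BY completion_time DESC;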

Data type mappings

Notice

Data of the LONG, ROWID, BFILE, LONG RAW, XMLType, UROWID, UNDEFINED, and UDT types cannot be synchronized.

Oracle database data type         Mapped-to data type in DataHub
CHAR                              STRING
NCHAR                             STRING
VARCHAR2                          STRING
NVARCHAR2                         STRING
CLOB                              STRING
BLOB                              STRING (Base64-encoded)
NUMBER                            DECIMAL
BINARY_FLOAT                      DECIMAL
BINARY_DOUBLE                     DECIMAL
DATE                              STRING
TIMESTAMP                         STRING
TIMESTAMP WITH TIME ZONE          STRING
TIMESTAMP WITH LOCAL TIME ZONE    STRING
INTERVAL YEAR TO MONTH            STRING
INTERVAL DAY TO SECOND            STRING
RAW                               STRING (Base64-encoded)
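
To find columns whose types fall outside these mappings and therefore cannot be synchronized, you can query the data dictionary. A minimal sketch, with APP as a placeholder schema name:

    SELECT owner, table_name, column_name, data_type
      FROM dba_tab_columns
     WHERE owner = 'APP'
       AND (data_type IN ('LONG', 'ROWID', 'BFILE', 'LONG RAW', 'UROWID', 'UNDEFINED')
            OR data_type_owner IS NOT NULL);  -- XMLType columns and user-defined types (UDTs)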

Check and modify the configurations of the source Oracle database

  • Check the character set configurations

    OMS allows you to synchronize data from the source Oracle database based on the AL32UTF8, AL16UTF16, ZHS16GBK, or GB18030 character set.

  • Check and modify the system configurations of the Oracle instance

    1. Enable archivelog and supplemental_log for the source Oracle database (a SQL*Plus sketch for enabling ARCHIVELOG mode appears after these steps).

    2. In the Oracle database, perform the following operations as the sys user.

      Execute the following statement to check whether log_mode is set to ARCHIVELOG and whether the supplemental_log parameters are set to YES or IMPLICIT:

      select log_mode, supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_min from v$database;
      

      If not, use the following syntax to modify the configuration of the Oracle database:

      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA(PRIMARY KEY) COLUMNS;
      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA(UNIQUE) COLUMNS;
      
    3. Restart the Oracle database.
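
If ARCHIVELOG mode is not yet enabled, the following is a minimal sketch of a typical SQL*Plus session run with SYSDBA privileges. Enabling ARCHIVELOG mode requires a database restart, so plan the change window accordingly.

    -- Check the current log mode.
    ARCHIVE LOG LIST;

    -- Enable ARCHIVELOG mode (requires a restart).
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;

    -- Verify the settings again.
    SELECT log_mode, supplemental_log_data_pk, supplemental_log_data_ui,
           supplemental_log_data_min
      FROM v$database;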

Supplemental properties

If you create a topic manually, add the following properties to the DataHub schema before you start the data synchronization project. If OMS creates the topic and synchronizes the schema automatically, it adds these properties for you.

Notice

The following table applies only to Tuple topics.

Name                 Data type    Description
oms_timestamp        STRING       The time when the change was made.
oms_table_name       STRING       The new table name of the source table.
oms_database_name    STRING       The new database name of the source database.
oms_sequence         STRING       The sequence number of the change, which increases monotonically on a single server.
oms_record_type      STRING       The change type. Valid values: UPDATE, INSERT, and DELETE.
oms_is_before        STRING       Indicates whether the record contains the original data when the change type is UPDATE. Y indicates the original data.
oms_is_after         STRING       Indicates whether the record contains the modified data when the change type is UPDATE. Y indicates the modified data.

Procedure

  1. Create a data synchronization project.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, click Data Synchronization.

    3. On the Data Synchronization page, click Create Synchronization Project in the upper-right corner.

  2. On the Select Source and Destination page, configure the parameters.

    • Synchronization Project Name: We recommend that you set it to a combination of digits and letters. It must not contain any spaces and cannot exceed 64 characters in length.
    • Tag (Optional): Click the field and select a target tag from the drop-down list. You can also click Manage Tags to create, modify, and delete tags. For more information, see Manage data synchronization projects by using tags.
    • Source: If you have created an Oracle data source, select it from the drop-down list. If not, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information about the parameters, see Create an Oracle data source.
    • Destination: If you have created a DataHub data source, select it from the drop-down list. If not, click New Data Source in the drop-down list to create one in the dialog box on the right side. For more information about the parameters, see Create a DataHub data source.
  3. Click Next. On the Select Synchronization Type page, select the synchronization type for the current data synchronization project.

    The supported synchronization types include Schema Synchronization and Incremental Synchronization. Schema Synchronization creates a topic. Incremental Synchronization supports only the DML Synchronization option. The supported DML operations are Insert, Delete, and Update. Select the options based on your business needs. For more information, see DML filtering.

  4. Click Next. On the Select Synchronization Objects page, select the topic type and scope for synchronization.

    The available topic types are Tuple and BLOB. A Tuple topic stores records that resemble database records, where each record contains multiple columns. A BLOB topic stores only a binary block per record, and the data is Base64-encoded for transmission. For more information, see the DataHub documentation.

    After you select the topic type for synchronization, perform the following operations:

    1. In the left-side pane, select the objects to be synchronized.

      Notice

      The name of a table to be synchronized, as well as the names of columns in the table, must not contain Chinese characters.

    2. Click >.

    3. Select a mapping method.

      Notice

      When you set the topic type to Tuple without selecting Schema Synchronization, you can only synchronize a single table to a single topic.

      • To synchronize a single table, select the mapping method as needed in the Map Object to Topic dialog box and click OK.

        If you do not select Schema Synchronization when you set the synchronization type and configuration, you can select only Existing Topics here. If you have selected Schema Synchronization when you specify the synchronization type, you can select only one mapping method to create or select a topic.

        For example, if you have selected Schema Synchronization, when you use both the Create Topic and Select Topic mapping methods or rename the topic, a precheck error will be returned due to option conflicts.

        • Create Topic: Enter the name of the new topic in the text box. The topic name can contain letters, digits, and underscores (_) and must start with a letter. It must not exceed 128 characters in length.
        • Select Topic: OMS allows you to query DataHub topics. You can click Select Topic, and then find and select a topic for synchronization from the Existing Topics drop-down list. You can also enter the name of an existing topic and select it after it appears.
        • Batch Generate Topics: The format for generating topics in batches is Topic_${Database Name}_${Table Name}.

        If you select Create Topic or Batch Generate Topics, after the schema migration succeeds, you can query the created topics on the DataHub side. By default, the number of data shards is 2 and the data expiration time is 7 days. These parameters cannot be modified. If the topics do not meet your business needs, you can create topics in the destination database as needed.

      • To synchronize multiple tables, click OK in the dialog box that appears.

        If you have selected the Tuple type without selecting Schema Synchronization, and you map multiple tables to one topic and click OK in the Map Object to Topic dialog box, the selected tables are displayed under the topic in the right pane, but only one table can be synchronized. When you click Next, a prompt appears indicating that only one-to-one mapping is supported between Tuple topics and tables.

    4. Click OK.

    When you synchronize data from an Oracle database to a DataHub instance, OMS allows you to import objects from text data, set sharding columns for tables in the destination database, and remove a single object or all objects. Objects in the destination database are listed in the structure of Topic > Database > Table.

    The following operations are supported:

    • Import objects
      1. In the list on the right, click Import Objects in the upper-right corner.
      2. In the dialog box that appears, click OK.
        Notice
        This operation will overwrite previous selections. Proceed with caution.
      3. In the Import Synchronization Objects dialog box, import the objects to be synchronized.
        You can configure synchronization objects by importing a CSV file. For more information, see Download and import the settings of synchronization objects.
      4. Click Validate.
      5. After the validation succeeds, click OK.
    • Change the topic: When the topic type is set to BLOB, you can change the topic for objects in the destination. For more information, see Change the topic.
    • Configure settings
      1. In the list on the right, hover over the object that you want to set.
      2. Click Settings.
      3. Click the Shard Columns drop-down list and select the target sharding columns. You can select multiple fields as sharding columns. This parameter is optional.
        Unless otherwise specified, select the primary key as the sharding columns. If the primary key is not load-balanced, select load-balanced fields with unique identifiers as sharding columns to avoid potential performance issues (a sketch of a distribution check appears after this procedure). Sharding columns serve the following purposes:
        • Load balancing: If the destination table supports concurrent writes, the threads used for sending messages are determined by the sharding columns.
        • Orderliness: OMS ensures that messages are received in order if the values of the sharding columns are the same, which determines the sequence in which DML statements are applied for a column.
      4. In the Select Columns section, select the columns to be synchronized. For more information, see Column filtering.
      5. Click OK.
    • Remove one or all objects: During data mapping, OMS allows you to remove one or more selected objects to be migrated or synchronized to the destination.
      • Remove a single synchronization object: In the list on the right of the selection area, hover over the target object and click Remove. The synchronization object is removed.
      • Remove all synchronization objects: In the list on the right of the selection area, click Remove All in the upper-right corner. In the dialog box that appears, click OK to remove all synchronization objects.
  5. Click Next. On the Synchronization Options page, specify the following parameters.

    • Incremental synchronization

      The following parameters apply to incremental synchronization. They are displayed only when you have selected Incremental Synchronization on the Select Synchronization Type page.

      • Incremental Log Pull Resource Configuration: You can select Small, Medium, or Large to use the corresponding default value of Memory, or customize the resource configuration for incremental log pull. By configuring resources for the Store component, you can limit the resources that the project consumes for log pull in the incremental synchronization phase.

        Notice
        For custom configurations, the minimum value is 1 and only integers are supported.

      • Incremental Data Write Resource Configuration: You can select Small, Medium, or Large to use the corresponding default values of Write Concurrency and Memory, or customize the resource configuration for incremental data write. By configuring resources for the Incr-Sync component, you can limit the resources that the project consumes for data writes in the incremental synchronization phase.

        Notice
        For custom configurations, the minimum value is 1 and only integers are supported.

      • Incremental Record Retention Time: The duration for which parsed incremental files are cached in OMS. A longer retention period results in more disk space occupied by the Store component.

      • Incremental Synchronization Start Timestamp:
        • If you have selected Full Synchronization as the synchronization type, the default value of this parameter is the project startup time and cannot be modified.
        • If you have not selected Full Synchronization, set this parameter to a point in time; the default value is the current system time. For more information, see Set an incremental synchronization timestamp.
    • Advanced options

      • Serialization Method: The message format for synchronizing data to a DataHub instance. Valid values: Default, Canal, Dataworks (version 2.0 supported), SharePlex, and DefaultExtendColumnType. For more information, see Data formats.

        Notice
        This parameter is available only when the topic type is set to BLOB on the Select Synchronization Objects page.

      • Partitioning Rules: The rule for partitioning data from the source database in the DataHub topic. Valid values: Hash and Table.
        • Hash indicates that OMS uses a hash algorithm to select the shard of a DataHub topic based on the value of the primary key or sharding column.
        • Table indicates that OMS delivers all data in a table to the same partition and uses the table name as the hash key.

      • Business System Identification (Optional): Identifies the source business system of the data. The business system identifier consists of 1 to 20 characters.

    If the parameter settings on the page cannot meet your requirements, you can click Parameter Configuration in the lower part of the page to configure more specific settings. You can also reference an existing project or component template.

  6. Click Precheck.

    During the precheck, OMS checks connectivity to the destination data source. Take note of the following items if an error is returned during the precheck:

    • You can identify and troubleshoot the error and then perform the precheck again.

    • You can also click Skip in the Actions column of a failed precheck item. In the dialog box that appears, you can view the prompt for the consequences of the operation and click OK.

  7. Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data synchronization project. You can start the project later as needed.

    OMS allows you to modify the synchronization objects when the data synchronization project is running. For more information, see View and modify synchronization objects. After a data synchronization project is started, the synchronization objects will be executed based on the selected synchronization type. For more information, see the "View synchronization details" section in the View details of a data synchronization project topic.

    If the data synchronization project encounters a running exception due to a network failure or slow startup of processes, you can click Recover on the Synchronization Projects page or on the Details page of the synchronization project.
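
When you choose sharding columns in the Configure settings step above, you may want to verify that a candidate column is reasonably load-balanced before using it. A minimal sketch, assuming a hypothetical table APP.ORDERS and candidate column STORE_ID:

    -- Check how evenly the candidate sharding column values are distributed.
    SELECT store_id, COUNT(*) AS row_count
      FROM app.orders
     GROUP BY store_id
     ORDER BY row_count DESC;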
