Version information
Version number: V3.1.0
Previous version: V2.2.0
Version release date: June 25, 2021
Supported versions of data terminals
| Feature | OceanBase Database versions | Other data terminal versions |
|---|---|---|
| Data migration | V1.4.72, V1.4.78, V1.4.79, V2.1.1, V2.2.20, V2.2.30, V2.2.50, V2.2.52, V2.2.70, V2.2.72, V2.2.74, V2.2.75, V2.2.76, V2.2.76BP1, V2.2.77, V3.1.0 | |
| Data synchronization | V1.4.79, V2.2.20, V2.2.30, V2.2.50, V2.2.52, V2.2.70, V2.2.72, V2.2.74, V2.2.75, V2.2.76, V2.2.76BP1, V2.2.77, V3.1.0 | |
New features
Data migration
During data migration from an Oracle database to an Oracle-compatible tenant of OceanBase Database, OMS supports the migration of a Pluggable Database (PDB) of Oracle Database 12c, 18c, or 19c.
OMS allows you to migrate data to OceanBase Database V3.1.0 as the target database.
The data migration service supports reverse incremental migration.
Data synchronization
OMS allows you to synchronize data from a MySQL database to a DataHub instance. This feature supports synchronizing data from a single physical table of the MySQL database to a single topic of the DataHub instance, initializing the schema, and synchronizing incremental data.
The logical table aggregation synchronization feature is improved to support real-time synchronization from logical tables to physical tables in database sharding without table splitting. In this scenario, the database name contains a suffix, but the table name does not.
When you synchronize data from OceanBase Database to a Kafka, RocketMQ, or Sybase instance, OMS allows you to import objects by using text. In the Import Objects dialog box that appears, you can select the objects to be imported.
OMS allows you to use OceanBase Database V3.1.0 as the data source for data synchronization.
System parameters
OMS allows you to modify the system parameters of the high availability (HA) feature on the OMS page based on your business scenarios and environment.
Bug fixes
After OMS was upgraded from V2.1.0 to V3.1.0, links created by regular users were incorrectly attributed to the admin user.
During reverse incremental synchronization from an Oracle database to an OceanBase database, the store heartbeat detection time was set to the default value of 240s. If the reverse synchronization was interrupted and the standby store became abnormal, the high availability (HA) feature created a new store. After the network was restored, the abnormal store returned to normal, but the writer's position did not advance.
A synchronization link was created for synchronizing data from an OceanBase database to a Kafka instance, with multiple tables synchronized to a single topic. The user then deleted one or more tables from the source database, edited the synchronization objects, and added a table from another database. After the synchronization objects were saved, the link was started, but the connector failed to start.
During the migration of data from a MySQL database to an OceanBase database, an error was reported in the reverse incremental component.
When data was synchronized from a MySQL database to a DataHub instance, the failure time was incorrectly displayed.
Known issues
An error is returned when the system attempts to migrate Chinese column names from an Oracle database to an Oracle-compatible tenant of OceanBase Database.
Solution: OMS does not support the migration of Chinese column names. We recommend that you change the column names to English and then perform the migration.
After a primary key column of the BIT data type is migrated from a MySQL database to a MySQL-compatible tenant of OceanBase Database and incremental synchronization is completed, the results of a full verification are inconsistent with the data in the source table.
Solution: OMS does not support primary keys of the BIT data type. We recommend that you do not use the BIT data type as the primary key.
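Neither issue is intercepted automatically, so a source-side precheck can save a failed run. The following is a minimal sketch, assuming a MySQL source reachable through pymysql (the connection parameters are placeholders); it flags primary key columns of the BIT data type before migration. A similar query against Oracle's ALL_TAB_COLUMNS view can flag non-ASCII (for example, Chinese) column names for the first issue.

```python
# Minimal precheck sketch (assumption: pymysql is available; the
# connection parameters below are placeholders for your MySQL source).
import pymysql

conn = pymysql.connect(host="mysql.example.com", user="precheck",
                       password="***", database="information_schema")
try:
    with conn.cursor() as cur:
        # Flag primary key columns of the BIT data type, which OMS does
        # not support as a primary key (see the known issue above).
        cur.execute(
            """
            SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
            FROM COLUMNS
            WHERE COLUMN_KEY = 'PRI' AND DATA_TYPE = 'bit'
            """
        )
        for schema, table, column in cur.fetchall():
            print(f"BIT primary key found: {schema}.{table}.{column}")
finally:
    conn.close()
```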
Limitations on version usage
Data migration
Take note of the following limitations when you migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database.
| Limitation | Impact |
|---|---|
| The host where the MySQL database is located must have sufficient outbound bandwidth. | If the host does not have sufficient outbound bandwidth, log parsing and data migration will be affected, potentially leading to increased synchronization latency. |
| Only the MySQL InnoDB storage engine is supported. | Data migration is not supported for other storage engines. |
| Incremental synchronization does not support automatic DDL synchronization. | During data migration, incremental synchronization may be interrupted by DDL operations. |
| If you want to perform incremental data migration, you must enable binlogs for the MySQL source and configure them as required (see the precheck sketch after this table). | OMS reports an error during precheck and displays a prompt, preventing you from starting the data migration task. |
| If you want to perform incremental data migration, the retention period of MySQL binlogs cannot be shorter than 24 hours. | Incremental data migration may fail due to missing binlogs, causing the connection to be interrupted and unrecoverable. |
| Cascade foreign key migration is not supported for MySQL sources. | A data migration task involving cascade foreign keys will fail. |
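The binlog prerequisites above can be verified before OMS runs its own precheck. The following is a minimal sketch, again assuming pymysql with placeholder connection parameters; ROW format and FULL row image are the settings commonly required by log-based incremental migration, so confirm the exact requirements for your OMS version.

```python
# Minimal binlog precheck sketch. Assumptions: pymysql is available;
# ROW format and FULL row image are common requirements of log-based
# incremental migration, not confirmed OMS-specific values.
import pymysql

EXPECTED = {
    "log_bin": "ON",             # binary logging must be enabled
    "binlog_format": "ROW",      # statement/mixed formats lose row images
    "binlog_row_image": "FULL",  # partial images break change replay
}

conn = pymysql.connect(host="mysql.example.com", user="precheck", password="***")
try:
    with conn.cursor() as cur:
        for variable, expected in EXPECTED.items():
            cur.execute("SHOW VARIABLES LIKE %s", (variable,))
            row = cur.fetchone()
            actual = row[1] if row else None
            status = "OK" if actual == expected else "FIX"
            print(f"{status}: {variable}={actual} (expected {expected})")
        # Retention: the table above requires binlogs kept for >= 24 hours.
        # (MySQL 5.7 uses expire_logs_days; 8.0 uses binlog_expire_logs_seconds.)
        cur.execute("SHOW VARIABLES LIKE 'expire_logs_days'")
        row = cur.fetchone()
        if row:
            print(f"INFO: expire_logs_days={row[1]} (0 means no automatic purge)")
finally:
    conn.close()
```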
Take note of the following limitations when you migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database.
| Limitation | Impact |
|---|---|
| Tables that use the empty_clob() function are not supported for incremental synchronization from the Oracle source. | This is not intercepted during precheck, but it may result in errors during incremental synchronization. |
| If the version of the target OceanBase Database is earlier than V2.2.70, there may be incompatibilities when supplementary foreign keys and check constraints are added during switchover. | Foreign keys and check constraints from the Oracle source may fail to be created in OceanBase Database. |
| LOB field migration is restricted. | If a LOB field in the source is too large, it cannot be stored in OceanBase Database, causing errors in data synchronization. |
| Character encoding restricts reverse synchronization: when the character encodings of the source and target differ, schema migration can expand the defined field length, for example to 1.5 times the original value, and change the length unit from BYTE to CHAR. After the conversion, data of different character sets in the source can be migrated to the target. However, during reverse incremental synchronization, the expanded data may overflow the original field length and fail to be written back to the source. | After business data is migrated to the target, it may fail to be synchronized back to the source. |
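The character-encoding row is easiest to see with numbers: a source VARCHAR2(100 BYTE) column becomes roughly VARCHAR2(150 CHAR) in the target, so the target can accept strings whose encoded length no longer fits the original 100-byte limit. The following is a minimal sketch, assuming GBK as the source character set and the column sizes above (all purely illustrative), that flags values which would overflow on reverse synchronization.

```python
# Minimal overflow check sketch. Assumption: the source column is
# VARCHAR2(100 BYTE) in a GBK-encoded Oracle database and was expanded
# to VARCHAR2(150 CHAR) in the target; both values are illustrative.
SOURCE_BYTE_LIMIT = 100
SOURCE_CHARSET = "gbk"

def overflows_source(value: str) -> bool:
    """Return True if a value that fits the expanded target column would
    no longer fit the original BYTE-limited source column."""
    return len(value.encode(SOURCE_CHARSET)) > SOURCE_BYTE_LIMIT

# 60 CJK characters fit VARCHAR2(150 CHAR) in the target, but encode to
# 120 bytes in GBK, overflowing the 100-byte source limit.
sample = "\u6d4b" * 60
print(overflows_source(sample))  # True -> reverse sync would fail
```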
Data synchronization
When you synchronize data in real time from OceanBase Database to a self-managed Kafka instance, note the following limitations:
| Limitation | Impact |
|---|---|
| The host of OceanBase Database must have sufficient outbound bandwidth. | If the host of OceanBase Database does not have sufficient outbound bandwidth, log parsing and data migration will be affected, potentially leading to increased synchronization latency. |
| If a table in OceanBase Database in the MySQL compatible mode serves as the source for a data synchronization task, the table must have a primary key. | A table in OceanBase Database in the MySQL compatible mode without a primary key cannot be used to create a real-time synchronization task. |

When you synchronize data in real time from a Sybase database to a RocketMQ instance, note the following limitations:
| Limitation | Impact |
|---|---|
| The following table types in Sybase are not supported: tables without a primary key and partitioned tables. | You cannot create a real-time synchronization task for a table without a primary key or a partitioned table in the Sybase database. |
| Only the following data types are supported in Sybase: CHAR, NUMERIC, INT, DATETIME, VARCHAR, DECIMAL, SMALLINT, TINYINT, BIT, BINARY, REAL, NVARCHAR, and FLOAT. | A table in the Sybase database that contains data of other types cannot be used as the source for synchronization. |

When you synchronize data in real time from an Oracle database to a DataHub instance, note the following limitations:
If a table in the Oracle database does not have a primary key, you must specify a column as the Sharding Columns. Otherwise, the table cannot be synchronized to the DataHub instance.
If the Oracle database uses generated columns, OMS does not parse the actual values of generated columns. The corresponding values in the DataHub instance are NULL.
If the Oracle database uses data types such as CLOB, BLOB, RAW, and LONG RAW, INSERT operations on the columns that contain these data types will trigger the delivery of both INSERT and UPDATE messages to the DataHub instance.
When data transmission is resumed for a broken link, some data (transmitted within the last minute) may be duplicated. Therefore, data deduplication is required in downstream applications.
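Because a resumed link can redeliver roughly the last minute of records, downstream consumers should deduplicate. The following is a minimal sketch, assuming each record carries a per-row key and a monotonically increasing sequence number; the field names "row_key" and "seq" are hypothetical, not a DataHub API.

```python
# Minimal downstream dedup sketch. Assumptions: each record is a dict
# with a per-row key and a monotonically increasing sequence number;
# the field names "row_key" and "seq" are hypothetical.
from typing import Iterable

def dedup(records: Iterable[dict]) -> Iterable[dict]:
    """Drop records whose sequence is not newer than the last one seen
    for the same row key (duplicates replayed by a resumed link)."""
    latest: dict[str, int] = {}
    for record in records:
        key, seq = record["row_key"], record["seq"]
        if latest.get(key, -1) >= seq:
            continue  # duplicate or replayed record; skip it
        latest[key] = seq
        yield record

# Example: the second record is a replayed duplicate and is dropped.
stream = [{"row_key": "t1:42", "seq": 7}, {"row_key": "t1:42", "seq": 7}]
print(list(dedup(stream)))  # [{'row_key': 't1:42', 'seq': 7}]
```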
When you synchronize data in real time from a MySQL database to a DataHub instance, note the following limitations:
MySQL tables without a primary key must have a column specified as the Sharding Columns. Otherwise, the tables cannot be synchronized to the DataHub instance.
When you synchronize data in real time from OceanBase Database in the MySQL compatible mode to a DataHub instance, note the following limitations:
Full data synchronization from a table without a primary key in OceanBase Database in the MySQL compatible mode to the DataHub instance is not supported.
When data transmission is resumed for a broken link, some data (transmitted within the last minute) may be duplicated. Therefore, data deduplication is required in downstream applications.
DataHub limits the size of a message based on the cloud environment, usually to 1 MB.
DataHub sends messages in batches, and each batch is no larger than 4 MB. To adjust the conditions under which a batch is sent, you can modify the batch.size configuration item in the Connector sink. By default, 20 messages are sent at a time within one second.
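The batching behavior described above reduces to three flush conditions: batch count (batch.size, default 20), batch bytes (4 MB), and a one-second timer. The following sketch illustrates that logic only; it is not the actual Connector sink implementation.

```python
# Minimal batching sketch mirroring the behavior described above:
# flush when the batch reaches 20 messages, 4 MB, or 1 second of age.
import time

BATCH_SIZE = 20                  # the batch.size configuration item
MAX_BATCH_BYTES = 4 * 1024 * 1024
MAX_BATCH_AGE_SECONDS = 1.0

class BatchingSink:
    def __init__(self, send):
        self.send = send         # callable that delivers a list of messages
        self.batch, self.batch_bytes = [], 0
        self.batch_started = time.monotonic()

    def append(self, message: bytes):
        self.batch.append(message)
        self.batch_bytes += len(message)
        age = time.monotonic() - self.batch_started
        if (len(self.batch) >= BATCH_SIZE
                or self.batch_bytes >= MAX_BATCH_BYTES
                or age >= MAX_BATCH_AGE_SECONDS):
            self.flush()

    def flush(self):
        if self.batch:
            self.send(self.batch)
        self.batch, self.batch_bytes = [], 0
        self.batch_started = time.monotonic()
```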
We recommend that you do not include more than 2,000 tables in a single synchronization task when you synchronize data from a database to a Kafka or RocketMQ instance. Otherwise, the synchronization performance may decrease.
You cannot create a synchronization task that synchronizes data to the same Kafka or RocketMQ instance but with topics that differ only in capitalization of the topic names.
If you upgrade OMS from V2.0.0 to V2.1.0 or later, the feature of modifying synchronization objects is not supported for synchronization projects created in OMS V2.0.0.
The content of a single message synchronized from a database to a RocketMQ instance can be at most 4 MB in size.
Get support
Access relevant documentation
For the latest documentation of OMS, visit OceanBase Migration Service documentation.
| Document name | Description |
|---|---|
| Overview | Introduces the main features, advantages, architecture, use cases, and limitations of OMS. |
| User Guide | Introduces the features such as data source management, data migration, and data synchronization, and provides corresponding feature descriptions and operation guides. |
| O&M Guide | Provides troubleshooting solutions for servers and components related to data migration and synchronization tasks in OMS. |
| Alerts | Describes the impact of various alerts on the system, possible causes, and solutions in OceanBase Migration Service 3.1.0. |
| Deployment Guide | Introduces the deployment solutions and process of OMS. Note: Contact technical support to obtain the Deployment Guide. |
| Upgrade Guide | Introduces the considerations and process for upgrading OMS from an earlier version to 3.1.0. Note: Contact technical support to obtain the Upgrade Guide. |
Contact technical support
For any questions, contact OceanBase Technical Support.