OceanBase Migration Service (OMS) V3.1.0 has improved the features of the data migration and data synchronization modules, and allows you to modify the high availability (HA) configurations.
Version information
Version number: V3.1.0
Previous version: V2.2.0
Version release date: June 25, 2021
Compatible database versions
| Feature | OceanBase database versions | Other database versions |
|---|---|---|
| Data migration | V1.4.72, V1.4.78, V1.4.79, V2.1.1, V2.2.20, V2.2.30, V2.2.50, V2.2.52, V2.2.70, V2.2.72, V2.2.74, V2.2.75, V2.2.76, V2.2.76BP1, V2.2.77, and V3.1.0 | |
| Data synchronization | V1.4.79, V2.2.20, V2.2.30, V2.2.50, V2.2.52, V2.2.70, V2.2.72, V2.2.74, V2.2.75, V2.2.76, V2.2.76BP1, V2.2.77, and V3.1.0 | |
New features
Data migration
Allows you to migrate data from an Oracle 12c, 18c, or 19c PDB to an Oracle-compatible tenant of OceanBase Database.
Allows you to migrate data to OceanBase Database V3.1.0.
Supports reverse incremental migration.
Data synchronization
Allows you to synchronize data from a MySQL database to a DataHub instance. Specifically, you can synchronize data from one MySQL physical table to one DataHub topic, initialize schemas, and synchronize incremental data.
Improves the logical table aggregation and synchronization feature, and supports synchronization of data from logical tables to physical tables in database sharding scenarios without table sharding, where database names carry a shard suffix but table names do not.
Allows you to import objects from text on the GUI when you synchronize data from multiple tables of an OceanBase database to a Kafka or RocketMQ instance, or from a Sybase database to a RocketMQ instance.
Allows you to synchronize data from OceanBase Database V3.1.0.
System parameters
Allows you to modify HA configurations on the OMS console based on your business scenario and environment.
Fixed issues
After OMS is upgraded from V2.1.0 to V3.1.0, links created by common users are transferred to the admin user.
During reverse incremental migration from an Oracle database to an Oracle-compatible tenant of OceanBase Database, the store heartbeat detection timeout is 240s by default. When a network disconnection triggers a reverse store exception, the HA service creates a new store. After the network recovers, the abnormal stores return to normal, but the reverse writer timestamp does not advance.
After a synchronization link from an OceanBase database to a Kafka instance is created to synchronize data from multiple tables to a single topic, if one or more tables are dropped from the source database, the synchronization objects are edited, or a table is added to the destination database, Connector fails to start after you save the synchronization objects and restart the link.
During data migration from a MySQL database to an OceanBase database, the reverse incremental migration component reports an error.
During data synchronization from a MySQL database to a DataHub instance, the time when a failure occurred is displayed incorrectly.
Known issues
During data migration from an Oracle database to an Oracle-compatible tenant of OceanBase Database, an error occurs when columns with Chinese names are migrated.
Solution: OMS does not support the migration of columns with Chinese names. We recommend that you rename such columns with English names before migration, as shown in the example below.
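For instance, a hypothetical Oracle table with a Chinese-named column could be renamed before migration as follows (the table and column names are illustrative only):

```sql
-- Rename a Chinese-named column to an English name before migration.
-- t_order and "订单号" (order number) are hypothetical identifiers.
ALTER TABLE t_order RENAME COLUMN "订单号" TO order_no;
```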
When fields of the BIT type are used as primary keys (PKs) during incremental data migration from a MySQL database to a MySQL-compatible tenant of OceanBase Database, data inconsistency occurs during full verification.
Solution: Do not use fields of the BIT type as PKs in OMS. See the sketch below.
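As a hypothetical illustration, a table with a BIT primary key could be reworked to use a surrogate key before migration (names and types are illustrative):

```sql
-- Avoid: a BIT column as the PK can cause data inconsistency during
-- full verification, as described above.
--   CREATE TABLE t_flag (flag BIT(8) PRIMARY KEY, note VARCHAR(64));

-- Prefer: a non-BIT surrogate PK, keeping the BIT column as ordinary data.
CREATE TABLE t_flag (
  id   BIGINT AUTO_INCREMENT PRIMARY KEY,
  flag BIT(8),
  note VARCHAR(64)
);
```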
Limits
Data migration
Take note of the following limits when you migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database.
| Limit | Impact |
|---|---|
| The host of the MySQL database must have sufficient outbound bandwidth. | Insufficient outbound bandwidth on the host will slow down log analysis and data migration and may increase the latency of data synchronization. |
| Only the MySQL InnoDB storage engine is supported. | Data migration is not supported for other storage engines. |
| Automatic synchronization of DDL statements is not supported in incremental synchronization. | During data migration, DDL operations may interrupt the incremental synchronization process. |
| To perform incremental data migration, you must enable binary logging for the source MySQL database, set binlog_format to row, and set binlog_row_image to full. For a configuration sketch, see the example after this table. | Otherwise, OMS will report an error during precheck and you will not be able to start the data migration task. |
| If you want to perform incremental data migration, the binary logs of the MySQL database must be retained for no less than 24 hours. | During incremental data migration, the migration link may be interrupted due to the absence of binary logs, and cannot be recovered. |
| CASCADE foreign key migration is not supported for the source MySQL database. | Migration tasks with CASCADE foreign keys will fail. |
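The following sketch shows one way to verify and adjust the binary log settings required by the table above. It assumes a MySQL source and an account with sufficient privileges; the retention variable shown is the MySQL 8.0 name, and earlier versions use expire_logs_days instead:

```sql
-- Verify the binary log settings required for incremental migration.
SHOW VARIABLES LIKE 'log_bin';           -- expected: ON
SHOW VARIABLES LIKE 'binlog_format';     -- expected: ROW
SHOW VARIABLES LIKE 'binlog_row_image';  -- expected: FULL

-- Adjust at runtime if needed; also persist the settings in my.cnf,
-- because SET GLOBAL does not survive a server restart.
SET GLOBAL binlog_format = 'ROW';
SET GLOBAL binlog_row_image = 'FULL';

-- Retain binary logs for at least 24 hours (86400 seconds).
SET GLOBAL binlog_expire_logs_seconds = 86400;
```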
Take note of the following limits when you migrate data from an Oracle database to an Oracle-compatible tenant of OceanBase Database.
| Limit | Impact |
|---|---|
| The source Oracle database does not support incremental synchronization of tables that use the empty_clob() function. | These tables are not blocked during precheck, posing risks to incremental synchronization. |
| When the destination OceanBase database is earlier than V2.2.70, foreign keys, checks, and other objects added during the switchover may not be supported. | Foreign key and check constraints in the source Oracle database may not be created in OceanBase Database. |
| The size of the LOB field is limited. | If the LOB field in the source database is too large in size, it cannot be stored in OceanBase Database, causing data synchronization errors. |
| Character encoding and reverse synchronization are limited: if the source and destination databases use different character sets, a field length extension policy is applied during schema migration. For example, the field length is extended by 1.5 times, and the length unit is changed from BYTE to CHAR. This ensures that data in different character sets can be migrated from the source database to the destination database. However, after cutover, data may fail to be written back to the source database during reverse incremental migration because the data is too long for the source field. See the illustration after this table. | Business data written to the destination database after cutover cannot be written back to the source database. |
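As a hypothetical illustration of the field length extension policy described above (the table name and lengths are illustrative only):

```sql
-- Hypothetical source column in Oracle, using byte length semantics:
--   CREATE TABLE t_demo (name VARCHAR2(100 BYTE));
-- After schema migration, with the length extended by 1.5 times and the
-- length unit changed from BYTE to CHAR, the destination column becomes:
CREATE TABLE t_demo (name VARCHAR2(150 CHAR));
```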
Data synchronization
Take note of the following limits when you synchronize data from an OceanBase database to a self-managed Kafka instance.
| Limit | Impact |
|---|---|
| The host of the OceanBase database must have sufficient outbound bandwidth. | Insufficient outbound bandwidth on the host will slow down log analysis and data migration and may increase the latency of data synchronization. |
| When you synchronize data from a MySQL-compatible tenant of OceanBase Database, the OMS console does not support tables without a PK. | Real-time synchronization is not supported for tables without a PK in MySQL-compatible tenants of OceanBase Database. |
Take note of the following limits when you synchronize data from a Sybase database to a RocketMQ instance.
| Limit | Impact |
|---|---|
| Limit on tables: real-time synchronization is not supported for tables without a PK or partitioned tables in the Sybase database. | You cannot create real-time synchronization tasks for tables without a PK or partitioned tables in the Sybase database. |
| Tables in the Sybase database support the following data types: CHAR, NUMERIC, INT, DATETIME, VARCHAR, DECIMAL, SMALLINT, TINYINT, BIT, BINARY, REAL, NVARCHAR, and FLOAT. | Tables that contain other data types in the Sybase database cannot be synchronized. |
Take note of the following limits when you synchronize data from an Oracle database to a DataHub instance:
For tables without a PK in an Oracle database, Shard Columns must be set to ensure successful synchronization to the DataHub instance.
OMS cannot parse the actual values of the generated columns used in the Oracle database. Therefore, the corresponding values are NULL when the data is synchronized to the DataHub instance.
If CLOB, BLOB, RAW, and LONG RAW data types are used in the Oracle database, when INSERT operations are performed on data of these types, the DataHub instance will receive the corresponding INSERT and UPDATE messages.
When data transfer is resumed on a link, some data within the last minute may be duplicated, and deduplication is required in downstream applications. See the sketch below.
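A minimal downstream deduplication sketch, assuming the duplicated messages land in a hypothetical staging table stg_orders with an ingestion timestamp (all table and column names are illustrative):

```sql
-- Keep one copy of each replayed message: identical (order_id, op_type,
-- op_ts) rows are treated as duplicates produced by the resumed transfer.
SELECT order_id, op_type, op_ts, payload
FROM (
  SELECT s.*,
         ROW_NUMBER() OVER (
           PARTITION BY order_id, op_type, op_ts
           ORDER BY ingest_time
         ) AS rn
  FROM stg_orders s
) t
WHERE rn = 1;
```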
Take note of the following limits when you synchronize data from a MySQL database to a DataHub instance:
Shard Columns must be set for tables without a PK in the MySQL database to ensure successful synchronization to the DataHub instance.
Take note of the following limits when you synchronize data from a MySQL-compatible tenant of OceanBase Database to a DataHub instance:
Full synchronization of an OB_MySQL table without a PK to a DataHub instance is not supported.
When data transfer is resumed on a link, some data within the last minute may be duplicated, and deduplication is required in downstream applications.
DataHub limits the size of a message based on the cloud environment, usually to 1 MB.
DataHub sends messages in batches, with each batch sized no more than 4 MB. If you want a single message to be sent as soon as it meets the sending conditions, you can modify batch.size at the connector sink. By default, 20 messages are sent at a time within 1 second.
We recommend that you do not synchronize more than 2,000 tables to a Kafka or RocketMQ instance in one task. Otherwise, the synchronization performance may deteriorate.
You cannot create synchronization tasks to synchronize data from an OceanBase database to Kafka or RocketMQ instances that only differ in the capitalization of topic names.
If you have upgraded OMS from V2.0.0 to V2.1.0 or later, you cannot modify the synchronization objects in the synchronization projects created in OMS V2.0.0.
The size of a single message synchronized from a database to a RocketMQ instance cannot exceed 4 MB.