OceanBase Migration Service (OMS) V3.2.1 is a milestone version since OMS became commercially available.
OMS V3.2.1 enhances the replication capabilities between OceanBase Database and common heterogeneous databases such as Oracle, DB2 LUW, and MySQL, comprehensively improving the completeness and performance of OMS. OMS V3.2.1 supports automatic synchronization of DDL statements between heterogeneous databases, an industry-leading capability that improves continuity in database replication and removes the need for manual O&M during database synchronization.
Moreover, OMS V3.2.1 improves ease of use. In scenarios with large data volumes and numerous transactions, OMS V3.2.1 is comprehensively improved in terms of stability, processing efficiency, and resource usage. This greatly reduces learning costs and operational difficulty, helping you implement database migration and upgrades, cross-region database disaster recovery, active-active disaster recovery, and real-time data analysis more conveniently and efficiently.
Version information
Version number: V3.2.1
Previous version: V3.1.0
Version release date: November 10, 2021
Version upgrade support: OMS V2.1.0, V2.1.2, V2.2.0, and V3.1.0 can be directly upgraded to V3.2.1.
Compatible database versions
| Feature | OceanBase database versions | Other database versions |
|---|---|---|
| Data migration | | |
| Data synchronization | | |
New features
Data migration and synchronization
Supports multiple types of data sources and multiple combinations of data migration and synchronization links from the source database to the destination database.
Supports data migration and synchronization between a DB2 LUW database and an Oracle tenant of OceanBase Database.
Supports data migration and synchronization between Oracle tenants or MySQL tenants of OceanBase Database, and supports cross-region disaster recovery and active-active disaster recovery.
Supports data migration and synchronization from an Oracle tenant of OceanBase Database to an Oracle database.
Supports data migration and synchronization from a MySQL tenant of OceanBase Database to a MySQL database.
Supports data synchronization from an Oracle tenant of OceanBase Database to a DataHub instance.
Supports automatic synchronization of incremental DDL statements during data migration.
Supports row filters for database-to-database data migration and synchronization projects.
Supports data extraction from standby data sources, such as Oracle ADG and MySQL slave databases, to reduce the consumption of resources in the primary database.
Supports security authentication for a Kafka instance when data is synchronized from an OceanBase database to the Kafka instance.
Allows you to configure row filters when data is synchronized from an OceanBase database to a RocketMQ instance.
Allows you to add a DBP data source as the source for synchronization, thereby implementing the synchronization of logical tables. This feature supports automatic synchronization of incremental DDL statements.
Allows you to migrate and synchronize homogeneous or heterogeneous data between OceanBase clusters that are managed by different OceanBase Cloud Platforms (OCPs) in the same OMS.
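Several of the features above involve row filters, which restrict migration or synchronization to rows matching a predicate. The following is an illustrative sketch only; the table and column names are hypothetical, and the exact filter syntax accepted by OMS may differ:

```sql
-- Hypothetical row filter attached to a migration object: only rows
-- satisfying this WHERE-style predicate are migrated or synchronized.
gmt_modified >= DATE '2021-01-01' AND order_status <> 'CANCELED'
```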
Management and control
Allows you to configure migration object matching rules when you create a data migration project, to help you quickly select migration objects in batches.
Allows you to import to-be-migrated databases and tables through text and also enter mapping rules for the to-be-migrated databases and tables.
Adds the project tagging feature, so that you can manage data migration and synchronization projects based on project tags.
System management
Allows you to configure DingTalk alert channels.
Adds the regions of OMS nodes and data sources on the overview page of the O&M and monitoring module, so that you can monitor the running status of the OMS system and manage data sources based on regions.
Feature changes and optimization
Feature and performance optimization
The data synchronization performance when the source and destination databases use heterogeneous indexes is improved.
Support for large transactions in the source database is improved when data is migrated from an OceanBase database to a Kafka instance, and the memory usage in the event of synchronous data writes is reduced.
Multi-table join and merge operations are optimized.
The check on incremental synchronization configurations is optimized when the source database is an Oracle database. Only table-level supplemental_log needs to be enabled for the Oracle database.
OBProxy dependencies no longer need to be bound in the deployment phase; the OBProxies of OceanBase databases are specified when data sources are added.
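The table-level supplemental logging mentioned above can be enabled with standard Oracle DDL. A sketch, assuming a hypothetical table named app.orders:

```sql
-- Enable table-level supplemental logging so that complete row images
-- are written to the redo log for incremental migration.
-- The schema and table names below are hypothetical.
ALTER TABLE app.orders ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

Whether OMS requires ALL COLUMNS logging or only primary-key logging for a given link should be confirmed in the OMS documentation for your source database version.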
Interaction experience optimization
The database and table renaming controls are upgraded. The options for saving the renaming results and setting data filters are more obvious and convenient.
The information structure and style of the product list page are improved.
The overview page of the O&M and monitoring module is redesigned. The overview page now displays the region information and allows you to view top 5 servers by server resource utilization and resources occupied by components.
Documentation optimization
The product documentation in HTML format is provided in the OMS console. You can conveniently view the documentation in real time when you use OMS.
The Alert Reference is provided, which describes the possible causes and handling methods of alerts.
Detailed descriptions of system parameters are provided.
Fixed issues
Data migration
If the BINARY_FLOAT field is used as the primary key during data migration from an Oracle tenant of OceanBase Database to a DB2 LUW database, the data is inconsistent during full verification.
If a field of the binary type is used as the primary key in a table during data synchronization from a DB2 LUW database to an Oracle tenant of OceanBase Database, the data is inconsistent during verification.
During data migration between MySQL tenants of OceanBase Database, the error message `Failed to complete http request: Read timed out` is displayed on the page for starting full migration and full verification.
During data migration from an Oracle 11g database to an Oracle tenant of OceanBase Database, incremental migration fails to be started and the error message `ORA-00911: invalid character` is displayed.
A logic judgment error occurs when a view creation statement contains the `WITH [CASCADED | LOCAL] CHECK OPTION` clause.
When data is migrated from an Oracle database to an Oracle tenant of OceanBase Database, the `unknown column` exception is thrown during schema migration.
During full verification, data of the DATE type in the Oracle database is invalid.
During full verification, the MySQL database does not support data of the JSON type.
Data synchronization
When data is synchronized from a DBP logical table to a physical table, the columns added to the logical table are not synchronized to the physical table.
After OMS is upgraded from V3.1.0 to V3.2.1, an error message indicating an internal server error is displayed when you view data synchronization projects.
On the details page of a project for migrating data from a MySQL tenant of OceanBase Database to a DataHub instance, the displayed synchronization type is incorrect.
The topic naming limits do not conform to the specifications actually supported by RocketMQ.
In the full synchronization settings of a data synchronization project, the default maximum number of threads in the thread pool for pulling full data shards is set to 16 by using a system parameter.
During the synchronization of logical tables, data transfer fails to resume after interruption and data is lost.
During data synchronization from logical tables to a MySQL tenant of OceanBase Database, the attribute cannot be NOT NULL for all indexed columns.
During the synchronization of logical tables, the specified number of threads in the thread pool for pulling full data shards is too small on the connector.
System management
When a user is created, the username length limits in the frontend and backend are inconsistent.
When you view migration objects and migration rules, the rules are separated by spaces.
Known issues
The frontend log component cannot display consecutive spaces from the backend. As a result, the `SKIPDDL` parameter on the real-time synchronization component becomes invalid.
After OMS V3.1.0 is upgraded to V3.2.1, the INNER_ERROR message is displayed on the details page of data migration projects.
Limits
Oracle Store does not support tables whose column names exceed 30 characters, and will display the error message "Schema or table or column name exceeding 30 bytes not supported".
Take note of the following limits when you migrate data from a DB2 LUW database to an Oracle tenant of OceanBase Database:
Tables with virtual columns cannot be migrated.
If the value type in the destination database is longer than the corresponding value type in the source database, data overflow may occur during reverse incremental migration.
When you migrate data from a RANGE-partitioned table of the DB2 LUW database to the Oracle tenant of OceanBase Database, if the first partition of the tenant has no upper limit, data overflow may occur during reverse incremental migration.
Take note of the following limits when you migrate data from an Oracle tenant of OceanBase Database to a DB2 LUW database:
Tables whose unique key (UK) contains null values cannot be migrated. Otherwise, data will be lost.
Table names cannot be excessively long. Otherwise, the generated primary key name will be too long and an error will be reported.
Non-decimal numbers cannot be used in operations. Otherwise, an error will be reported during schema migration.
During full verification, data of the CHAR type may be inconsistent.
Special functions, such as INTERVAL and ROUND, cannot be used in default values of columns. Otherwise, an error will be reported.
In the DB2 LUW database, data of the CHAR or NCHAR type will be inconsistent during full verification.
You cannot configure the "Define Character Set and Length" option when you migrate data from an Oracle tenant of OceanBase Database to an Oracle database.
When you clone a two-way data migration project in OceanBase Database, if you remove the destination database and then select the removed database as the source database, the number of tables displayed is inconsistent with the actual number.
When you migrate data from a logical table in OceanBase Database to a DataHub instance, special characters in the row filters in the ETL option cannot be parsed.