Version information
Version number: V4.0.1
Previous version: V3.4.0
Version release date: February 26, 2023
Supported upgrade versions:
Versions earlier than OMS V3.2.1 require an upgrade to V3.2.1 first.
OMS V3.2.1 and later can be directly upgraded to V4.0.1.
Supported versions of data terminals
| Feature | OceanBase Database versions | Other data terminal versions | OCP versions |
|---|---|---|---|
| Data migration | V1.4.79, V2.1.1, V2.2.20, V2.2.30, V2.2.50, V2.2.52, V2.2.70, V2.2.72, V2.2.74, V2.2.75, V2.2.76, V2.2.76BP1, V2.2.77, V3.1.0, V3.1.1, V3.1.2, V3.2.1, V3.2.2, V3.2.3, V3.2.4, V4.0.0 | | |
| Data synchronization | V2.2.20, V2.2.30, V2.2.50, V2.2.52, V2.2.70, V2.2.72, V2.2.74, V2.2.75, V2.2.76, V2.2.76BP1, V2.2.77, V3.1.0, V3.1.1, V3.1.2, V3.2.1, V3.2.2, V3.2.3, V3.2.4 | | |
New features
Data migration and synchronization
Added support for migrating data from a MySQL database to the MySQL compatible mode of OceanBase Database V4.0.0, including schema migration, full migration, incremental synchronization, and full verification.
Added support for RocketMQ instances of commercial edition V4.x and V5.x.
Added support for full synchronization from OceanBase Database to RocketMQ.
Added the capability to select incremental synchronization (excluding full migration scenarios) and choose the incremental synchronization timestamp.
Added support for DDL synchronization from OceanBase Database to Kafka, allowing users to promptly perceive schema changes and simplifying integration with downstream systems.
Added support for the Debezium JSON format for synchronization to Kafka, DataHub (Blob type), and RocketMQ, facilitating integration with downstream big data ecosystems (see the consumer sketch after this list).
Added the ability for users to freely set cluster_id so that downstream systems can process data based on its value. This is commonly used in business scenarios such as data cleansing and historical databases.
Added support for mapping multiple selection objects to a single target object using matching rules. This facilitates database migration from products like MyCat and TDDL.
Added self-service throttling and rate limiting based on RPS and IOPS. Users can enable this feature during peak hours at the source or target to minimize impact on online business.
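For the Debezium JSON format support mentioned above, the following is a minimal consumer sketch. The kafka-python client, the broker address, and the topic name are placeholders, and the envelope fields (op, before, after) follow the open-source Debezium convention; the exact structure emitted by OMS may differ in detail, so treat this as an illustration rather than a reference implementation.

```python
# Minimal sketch: consume OMS output in Debezium-style JSON from Kafka.
# Placeholders: topic name, broker address. The envelope layout follows the
# open-source Debezium convention and may differ from the actual OMS output.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "oms_example_topic",                    # hypothetical topic name
    bootstrap_servers=["kafka-host:9092"],  # hypothetical broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    payload = event.get("payload", event)   # some setups omit the schema wrapper
    op = payload.get("op")                  # "c" = insert, "u" = update, "d" = delete
    if op in ("c", "u"):
        row = payload.get("after")          # row image after the change
    elif op == "d":
        row = payload.get("before")         # row image before the delete
    else:
        continue
    print(op, row)
```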
Feature updates and optimizations
Enhanced user interaction experience
Provides process guidance and feature introduction to facilitate project creation.
Provides 35 new standardized error codes (more than 110 in total), enhancing users' self-maintenance capabilities.
Optimized rule matching and display logic to improve processing efficiency in scenarios with multiple matching rules.
Added documentation creation, project renaming, and descriptions of the functional scope of the three object selection options during project creation.
Standardized the component names for data migration and synchronization as Store, Incr-Sync, Full-Import, and Full-Verification, and unified and optimized the underlying technical logic to reduce the learning curve and facilitate autonomous maintenance.
Feature upgrades
Supports a three-layer permission model (ROOT/ADMIN/USER) and read-only roles, with permission control by department, enhancing the granularity of system permission management.
Supports single sign-on through SSO (OAuth2 protocol) and ASO (Alicloud Private Cloud) for unified management and improved work efficiency.
Provides object removal functionality during precheck, facilitating easy project creation and enhancing product usability.
Added account lock logic after 5 consecutive failed login attempts, increasing the difficulty of brute force attacks and enhancing product security.
Supports dynamic project name changes for easier project management.
Added heartbeat table support for DB2 LUW data sources to address high latency in source systems with no business writes.
Optimized OMS & OCP alert display information and logic for easier alert handling.
Provides standard performance indicators for full import and incremental synchronization, facilitating resource planning and project duration estimation in the early stage of a project.
Bug fixes
| Bug | Introduced in |
|---|---|
| Fixed the issue where data inconsistency occurs during the migration of data from the Oracle compatible mode of OceanBase Database to an Oracle database when the unique primary key contains a column of the CHAR or NCHAR type. | V3.4.0 |
| Fixed the issue where, during the migration of data from an Oracle database to the Oracle compatible mode of OceanBase Database, if the to-be-migrated table belongs to a bigfile tablespace, the full migration step is resumed but then reinitialized after the step is completed. | V3.4.0 |
| Fixed the issue where, during the migration of data from an Oracle database to the Oracle compatible mode of OceanBase Database, the session-level character set encoding is incorrectly set when schema migration is performed. | V3.3.0 |
| Fixed the issue where the data verification performance between the source (single table) and the target (partitioned table) is poor during the migration of data from the MySQL compatible mode of OceanBase Database to the MySQL compatible mode of OceanBase Database. | V2.x |
| Fixed the issue where, during the migration of data from a DB2 LUW database to the Oracle compatible mode of OceanBase Database, an error is reported indicating that the version 10.5.800.381 of DB2 LUW is not supported. | V3.2.1 |
| Fixed the issue where the latency displayed in the incremental synchronization step is inconsistent with that displayed in time-series databases. | V3.3.0 |
| Fixed the issue where, during incremental synchronization to an Oracle database, CHAR or NCHAR columns cannot correctly match data in WHERE conditions. | V3.3.0 |
| Fixed the issue where negative zero (-00) and positive zero (00) cannot be correctly distinguished when data is synchronized to a Kafka broker. | V3.3.0 |
| Fixed the issue where some DDL statements of stored procedures are not correctly filtered, causing an execution error and project interruption. | V3.3.0 |
| Fixed the issue where, during incremental DDL operations such as partition creation, an error may occur when data is migrated from an Oracle database to the Oracle compatible mode of OceanBase Database. | V3.3.0 |
| Fixed the issue where data synchronization is abnormally slow due to possible partial store (PS) leaks. | V3.3.0 |
| Fixed the issue where, after a large file table in an Oracle database is paused, data cannot be pulled from the table when it is resumed. | V3.4.0 |
| Fixed the issue where, during incremental migration, local unique indexes are not obtained for subpartitioned tables, leading to data inconsistency. | V2.x |
Known issues
| Issue | Introduced in | Solution |
|---|---|---|
| Schema migration of the TIMESTAMP type from a TiDB database to the MySQL compatible mode of OceanBase Database fails. | V3.4.0 | Manually modify the table schema. |
| Schema migration of the BIT type from a TiDB database to the MySQL compatible mode of OceanBase Database fails. | V3.4.0 | Manually modify the table schema. |
| The data consistency check of binary types fails during data migration from a DB2 LUW database to the MySQL compatible mode of OceanBase Database. | V3.2.1 | N/A |
| When you migrate data from an OceanBase database to an Oracle database, the order of the PK_INCREMENT column in the indexes created on tables without primary keys is adjusted to improve the partition reading efficiency. | V3.3.1 | N/A |
| During data migration from an Oracle database to the MySQL compatible mode of OceanBase Database, the data consistency check fails due to differences in date and time formats. | V3.3.1 | N/A |
| During data migration from a DB2 LUW database to the MySQL compatible mode of OceanBase Database, the verification of virtual columns fails. | V3.2.1 | N/A |
| The formatting precision of incremental data of the TIMESTAMP(N) type in the Oracle database is inconsistent with that of full data. | V3.2.1 | N/A |
| During data migration from the Oracle compatible mode of OceanBase Database to the Oracle compatible mode of OceanBase Database, if the table contains generated columns, the correction statement is incorrect. | V3.2.1 | N/A |
| During data migration from an Oracle database to the MySQL compatible mode of OceanBase Database, the data consistency check of time types fails. | V3.3.0 | N/A |
| The loading status is not displayed after you click the Refresh button on the Monitoring Metrics page. | V3.2.1 | N/A. You can ignore this issue. |
| The status of a data migration task is inaccurately displayed when the task is paused. | V3.2.1 | N/A. You can ignore this issue. |
| During data migration from a MySQL database or an Oracle database to an OceanBase database, the verification of time types is inconsistent in scenarios involving large tables. | V2.x | N/A |
| The three migration.checker.params.* system parameters still take effect on the concurrency of data migration. | V3.0.1 | N/A. You can ignore this issue. |
| When data is migrated from a MySQL database to the MySQL compatible mode of OceanBase Database, if columns in the source table are specified with different Chinese character sets, data written by incremental synchronization to OceanBase Database V2.2.77 in MySQL compatible mode is garbled. | V2.x | N/A |
Product behavior changes
The system parameter perceiveStoreClientCheckpoint specified in the ha.config parameter is enabled by default. This parameter is mainly intended to resolve issues where the project fails during incremental synchronization because the target point written by the Incr-Sync component is later than the earliest point that can be pulled by the Store component, in scenarios such as upgrades or write exceptions in the target. Starting from this version, OMS automatically starts an appropriate store to respond to point requests from downstream systems, enhancing the ease of use of the product.
The serialization formats Default and DefaultExtendColumnType are extended to include TransID.
In an OceanBase database, TransID is a transaction ID that consists of inc, addr, ts, and hash, where inc is an incremental number, addr is the address of the coordinator, ts is the time when the transaction ID is generated, and hash is the hash value of the three preceding items. If the transaction is incomplete, TransID is null. In a MySQL or Oracle database, TransID is a checkpoint.
Downstream systems can use this value for transactional delivery to ensure the integrity of transactions.
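As an illustration of transaction-level delivery based on TransID, the following is a minimal sketch. It assumes that records are consumed in commit order and that all records of one transaction arrive contiguously; the field name TransID matches the extended serialization formats described above, while the record shape and the apply_atomically callback are placeholders.

```python
# Minimal sketch: group consecutive records by TransID and deliver each
# transaction as one unit. Assumptions: ordered, contiguous delivery of a
# transaction's records; apply_atomically() is a placeholder for your sink.
from typing import Callable, Dict, Iterable, List, Optional


def deliver_by_transaction(
    records: Iterable[Dict],
    apply_atomically: Callable[[List[Dict]], None],
) -> None:
    buffer: List[Dict] = []
    current_trans_id: Optional[str] = None

    for record in records:
        trans_id = record.get("TransID")
        if trans_id is None:
            # Per the release notes, TransID is null for incomplete transactions;
            # deliver such records individually in this sketch.
            if buffer:
                apply_atomically(buffer)
                buffer, current_trans_id = [], None
            apply_atomically([record])
            continue
        if current_trans_id is not None and trans_id != current_trans_id:
            apply_atomically(buffer)  # flush the previous transaction as one unit
            buffer = []
        buffer.append(record)
        current_trans_id = trans_id

    if buffer:
        apply_atomically(buffer)


if __name__ == "__main__":
    sample = [
        {"TransID": "tx-1", "table": "orders", "op": "INSERT"},
        {"TransID": "tx-1", "table": "order_items", "op": "INSERT"},
        {"TransID": "tx-2", "table": "orders", "op": "UPDATE"},
    ]
    deliver_by_transaction(sample, lambda txn: print(f"apply {len(txn)} record(s)"))
```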
The serialization formats Default and DefaultExtendColumnType are extended to include Cluster_ID.
In an OceanBase database, ob_org_cluster_id specifies the cluster ID for a session and is persisted in the transaction log. You can set this parameter by user or transaction, and downstream systems perform corresponding business processing based on the value of this parameter. This feature is often used in business scenarios such as data cleansing and historical databases.
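The following is a minimal sketch of tagging writes with a cluster ID on the source side so that downstream systems can branch on the Cluster_ID field. The pymysql client, the connection parameters, the table, and the value 100 are placeholders, and the SET statement assumes that ob_org_cluster_id can be set as a session variable in your OceanBase version; verify the variable name, value range, and syntax against the documentation for your version.

```python
# Minimal sketch: tag a session's writes with a business-defined cluster ID.
# Placeholders: connection parameters, table, and the value 100. The SET syntax
# is an assumption; check how ob_org_cluster_id is set in your OceanBase version.
import pymysql

conn = pymysql.connect(
    host="oceanbase-host", port=2883,            # hypothetical OBProxy address
    user="app_user@tenant#cluster", password="***",
    database="app_db",
)
try:
    with conn.cursor() as cursor:
        # Tag every transaction in this session; OMS carries the value through
        # to the Cluster_ID field of the extended serialization formats.
        cursor.execute("SET @@session.ob_org_cluster_id = 100")
        cursor.execute(
            "INSERT INTO orders (order_id, status) VALUES (%s, %s)",
            (1001, "PAID"),
        )
    conn.commit()
finally:
    conn.close()
```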
Feature updates and guides
Component name changes
OMS V4.0.1 standardizes component names at the product layer to improve usability. The following table describes the mapping between old and new component names.
| Old component name | New component name |
|---|---|
| Store (log pull component) | Store (incremental pull component) |
| JDBCWriter (real-time sync component) or Connector (synchronization write component) | Incr-Sync (incremental synchronization component) |
| Connector (full synchronization component) or Checker-Full (full migration component) | Full-Import (full import component) |
| Checker-Verify (full verification component) | Full-Verification (full verification component) |
How to set throttling and speed limits
The throttleRps parameter in the coordinator section of the Full-Import or Incr-Sync component limits the records per second (RPS), and the throttleIOPS parameter limits the I/O per second (IOPS). Set these parameters as needed to reduce the impact of data synchronization on the source or target and effectively prevent business losses. For more information, see How to set throttling and speed limits.
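The following fragment only illustrates where the two parameters live. The names throttleRps and throttleIOPS come from the notes above, while the surrounding structure, the values, and the units are assumptions that should be checked against the component parameter reference in your OMS console.

```python
# Illustrative fragment only: the parameter names come from the release notes;
# the structure, values, and units are assumptions — verify them against the
# Full-Import or Incr-Sync component parameter reference in the OMS console.
full_import_params = {
    "coordinator": {
        "throttleRps": 2000,      # hypothetical cap on records per second
        "throttleIOPS": 8388608,  # hypothetical cap on I/O per second
    }
}
```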
Upgrade of role-based privilege model
OMS V4.0.1 now supports the three-layer privilege model with levels ROOT, ADMIN, and USER, and provides department-based privilege isolation for ADMIN and USER privileges. It also supports read/write splitting for ROOT and ADMIN privileges, meeting the privilege management requirements of enterprise customers.
How to set password expiration policies
OMS V4.0.1 provides role-based password expiration policies, which are disabled by default. To change the settings, go to System Management > System Parameters and modify the value of oms.user.password.expiration.date.config. This meets the password management requirements of enterprise customers.
Parameter changes of components
The parameters of Full-Import and Incr-Sync components in OMS V4.0.1 are significantly different from those in earlier versions. For more information, see Component parameters.
How to enable SSO (third-party login)
If you use SSO for logging in to OMS, you need to integrate with the OIDC protocol and add new parameters to the config.yaml template file. For more information, see Integrate OMS with OIDC for SSO.