
OceanBase Migration Service

V4.2.4 Enterprise Edition

Migrate data from a DB2 LUW database to an Oracle-compatible OceanBase database

Last Updated: 2026-04-14 07:36:49

This topic describes how to use OceanBase Migration Service (OMS) to migrate data from a DB2 LUW database to an Oracle-compatible OceanBase database. The target can be either a physical OceanBase data source or a public cloud OceanBase data source.

Prerequisites

  • You have created the corresponding schema in the target OceanBase database in Oracle compatible mode.

    You must create the schema in advance. OMS migrates the tables and views in the migration scope into the schema you created.

  • You have created a database user for data migration tasks in the source DB2 LUW database and the target OceanBase database in Oracle compatible mode, and granted the corresponding permissions to the user. For more information, see Create a database user.

  • The Archive Log feature is enabled for the DB2 LUW database.

    If the Archive Log feature is not enabled, perform the following steps:

    1. Connect to the database.

      db2 connect to ${db_name}
      
    2. Change the archive log location.

      db2 update db cfg for ${db_name} using LOGARCHMETH1 DISK:${your_logpath}
      
    3. Back up the database.

      db2 backup database ${db_name} to ${your_backup_path}
      
    4. Stop the database.

      db2stop
      
    5. Start the database.

      db2start
      
    6. Connect to the database.

      db2 connect to ${db_name}
      
    7. Manually archive the logs.

      db2 archive log for db ${db_name}
      
    8. View the archive logs.

      db2 get db cfg | grep LOG
      
  • The DATA CAPTURE CHANGES attribute is enabled for the tables to be migrated in the DB2 LUW database.

    If the attribute is not enabled for a table, execute the following statement to enable it:

    alter table ${table_name} data capture changes
    
  • The log_ddl_stmts configuration parameter is enabled for the DB2 LUW database.

    If log_ddl_stmts is not enabled, execute the following statement to enable it:

    db2 update db cfg using LOG_DDL_STMTS YES
    

Limitations

  • Limitations on operations of the source database

    Do not perform DDL operations that change the database or table schema during schema migration or full migration. Otherwise, the data migration task may be interrupted.

  • The supported DB2 LUW database versions are V9.7, V10.1, V10.5, V11.1, and V11.5, running on Linux or AIX operating systems.

  • When a DB2 LUW database is used as the source database, you cannot synchronize DDL operations.

  • On ARM architecture, incremental synchronization from a DB2 LUW database to an OceanBase database in Oracle compatible mode is not supported.

  • For DB2 LUW databases, only objects whose names consist of letters, digits, and underscores and start with a letter or an underscore are supported. Object names cannot be DB2 LUW keywords.

  • During migration from a DB2 LUW database to an OceanBase database in Oracle compatible mode, full migration and incremental synchronization support only tables with unique constraints in the DB2 LUW database.

  • OMS does not support triggers on the target database. If triggers exist, data migration may fail.

  • Unique constraints that allow null values are not supported, because they can cause data inconsistency. In OceanBase Database, a column under a unique constraint can contain multiple null values, because null != null. In a DB2 LUW database, a unique constraint requires the constrained columns to be NOT NULL, while a unique index allows null values but treats them as equal (null = null).

    For example, under the unique index unique (c1, c2), OceanBase Database allows the row (null, null) to be inserted multiple times, whereas a DB2 LUW unique index allows it to be inserted only once, and a DB2 LUW unique constraint does not allow null values at all.

    Because of this semantic difference, do not use unique constraints that allow null values; otherwise, errors may occur during schema migration. Incremental synchronization adds the NOT NULL constraint to the constrained columns, so an error occurs if null data is written.

    In addition, during DDL synchronization, if a unique index is created in OceanBase Database, ensure that all constrained columns are NOT NULL. Otherwise, an error occurs in the DB2 LUW database.

  • The user who parses the DB2 LUW database must have the sysadm privilege on the corresponding schema. Otherwise, the user cannot obtain logs.

  • Data source identifiers and user accounts must be globally unique in the OMS system.
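The null-value difference described in the limitations above can be reproduced in any database that treats nulls as distinct under a unique index. The sketch below uses SQLite purely as an illustration of that semantic (SQLite follows the same null != null rule for unique indexes as OceanBase Database); it is not part of the OMS tooling.

```python
import sqlite3

# In-memory database; SQLite, like OceanBase Database, treats nulls as
# distinct under a unique index, so (null, null) can be inserted twice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 INTEGER, c2 INTEGER)")
conn.execute("CREATE UNIQUE INDEX uk_t ON t (c1, c2)")
conn.execute("INSERT INTO t VALUES (NULL, NULL)")
conn.execute("INSERT INTO t VALUES (NULL, NULL)")  # accepted, no violation
row_count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(row_count)  # 2

# A fully non-null duplicate, by contrast, is rejected.
conn.execute("INSERT INTO t VALUES (1, 2)")
try:
    conn.execute("INSERT INTO t VALUES (1, 2)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```

A DB2 LUW unique index would reject the second (NULL, NULL) row, which is why OMS cannot reconcile the two behaviors when nullable columns carry a unique constraint.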

Considerations

  • If the source character set is UTF-8, we recommend that you use a character set that is compatible with the source character set (such as UTF-8 or UTF-16) on the destination. Otherwise, garbled characters may appear on the destination.

  • When you update LOB type data in a DB2 LUW database, a large number of log-level row migrations occur. If an unknown combination of row migrations causes the Store to exit abnormally, retain the logs and provide them to technical support.

  • Do not use the UPDATE statement to change the primary key. Otherwise, data inconsistency may occur during row migration.

  • Log pulling has mainly been tested against the uncompressed log format of DB2 LUW databases. Stability with compressed logs has not been verified, so exercise caution if you use the compressed log format.

  • Retain the logs of the DB2 LUW database and the OceanBase database for at least 3 days to prevent data loss if log pulling is interrupted.

  • If the clocks of the nodes, or of the client and server, are not synchronized, the reported latency of incremental synchronization or reverse incremental migration may be inaccurate.

    For example, if the local clock runs behind the reference time, the reported latency may be negative; if it runs ahead, the reported latency may be inflated.

  • If a table column in the destination Oracle-compatible OceanBase database has the NOT NULL constraint, empty strings from the source DB2 LUW database cannot be written to the destination, because Oracle compatible mode treats an empty string as a null value.

  • In reverse incremental synchronization from a DB2 LUW database to an OceanBase database in the Oracle compatible mode, if the OceanBase database is of a version earlier than 3.2.x and contains a global unique index, updating the value of the partitioning key of the table may cause data loss during data migration.

  • If the synchronized DDL statement is RENAME and the source or destination table is not in the synchronization list, the RENAME statement is ignored. After a synchronized DDL statement is executed, restart full verification. If the new table created by the RENAME statement is not synchronized to the destination, an error is reported during full verification.

  • If you change the unique index on the destination without enabling synchronized DDL, you must restart incremental synchronization. Otherwise, data inconsistency may occur.

  • In the scenario where data is migrated from a source database to a destination database:

    • We recommend that you map the relationships between the source and destination by using matching rules.

    • We recommend that you create the table structure on the destination yourself. If you use OMS to create the table structure, skip the failed objects in the schema migration step.

  • If you configure only Incremental Synchronization when you create a data migration task, OMS requires that the archived logs of the source database be retained for more than 48 hours.

    If you configure Full Migration + Incremental Synchronization when you create a data migration task, OMS requires that the archived logs of the source database be retained for at least 7 days. Otherwise, the data migration task may fail because it cannot obtain incremental logs. In addition, the data on the source and destination may be inconsistent.

  • If table objects on the source or destination differ only in case, the data migration result may be inconsistent with the expected result because the source or destination is case-insensitive.

Data type mapping

Migration conversion rules

| DB2 LUW database | OceanBase Database in Oracle compatible mode |
|------|------|
| TIME | DATE<br>**Warning:** if the default value is incompatible, modify it manually. |
| TIMESTAMP(n) | TIMESTAMP(n>0) |
| DATE | DATE |
| <ul><li>Version 10.1: CHAR(n)</li><li>Version 10.5 and later: CHAR(n OCTETS\|CODEUNITS32)<br>**Note:** only DB2 LUW databases of version 10.5 and later support the OCTETS and CODEUNITS32 encoding units.</li></ul> | <ul><li>For DB2 LUW 10.1: CHAR(n)</li><li>For DB2 LUW 10.5 and later: CHAR(n BYTE\|CHAR)</li></ul> |
| CHAR(n) FOR BIT DATA | RAW(n<=255) |
| <ul><li>Version 10.1: VARCHAR(n)</li><li>Version 10.5 and later: VARCHAR(n OCTETS\|CODEUNITS32)<br>**Note:** only DB2 LUW databases of version 10.5 and later support the OCTETS and CODEUNITS32 encoding units.</li></ul> | <ul><li>For DB2 LUW 10.1: VARCHAR2(n)</li><li>For DB2 LUW 10.5 and later: VARCHAR2(n BYTE\|CHAR)</li></ul> |
| VARCHAR(n) FOR BIT DATA | RAW(n<=2000) or BLOB |
| NCHAR(m) | NCHAR(m) |
| NVARCHAR(m) | NVARCHAR2(m) |
| CLOB | CLOB |
| NCLOB | CLOB |
| GRAPHIC(n) | NCHAR(n) |
| VARGRAPHIC(n) | NVARCHAR2(n) |
| LONG VARGRAPHIC | CLOB |
| LONG VARCHAR | VARCHAR2(m BYTE) |
| DBCLOB | CLOB |
| BINARY(m < 256) | RAW |
| VARBINARY(m < 32672) | BLOB |
| BLOB | BLOB |
| BOOLEAN | NUMBER(1) |
| SMALLINT | NUMBER(6,0) |
| INTEGER | NUMBER(11,0) |
| BIGINT | NUMBER(19,0) |
| DECIMAL(p,s) | NUMBER(p,s) |
| NUMERIC(p,s) | NUMBER(p,s) |
| DECFLOAT(16\|34) | FLOAT(53\|113) |
| REAL | BINARY_FLOAT |
| DOUBLE | BINARY_DOUBLE |
| XML | Not supported |

Notice

  • In OceanBase Database in Oracle compatible mode, the CHAR and VARCHAR2 types can usually store multi-byte encoded data. Therefore, during reverse conversion, directly using single-byte encoding units may cause the column length to be insufficient.

  • In DB2 LUW databases, data storage must consider not only the type length but also the OCTETS, CODEUNITS16, and CODEUNITS32 encoding units.
    Only DB2 LUW databases of version 10.5 and later support the OCTETS and CODEUNITS32 encoding units.

  • If the target is OceanBase Database in Oracle compatible mode of a version earlier than V4.2.0, the CLOB and BLOB data must be less than 48 MB.

    If the target is OceanBase Database in Oracle compatible mode of V4.2.0 or later, the CLOB and BLOB data can be up to 512 MB.

  • Migration of data of the LONG, ROWID, BFILE, LONG RAW, XMLType, and UDT types is not supported.

  • Tables with FLOAT, DOUBLE, and REAL types as primary keys may have inconsistent full data.

Limitations

  • The maximum precision of TIMESTAMP in DB2 LUW databases is 12, while that in OceanBase Database in Oracle compatible mode is 9. Therefore, data will be truncated. Data types that cause truncation cannot be used as primary keys or unique keys.

  • Length limitations

    • The maximum length of the CHAR and BINARY types in DB2 LUW databases is 255 bytes. If data written in OceanBase Database in Oracle compatible mode exceeds 255 bytes, the data migration task fails during reverse synchronization.

    • The maximum length of the VARCHAR and VARBINARY types in DB2 LUW databases is 32,672 bytes. If data written in OceanBase Database in Oracle compatible mode exceeds this limit, the data migration task fails.

    • In the DECIMAL(dp, ds) type in DB2 LUW databases, dp cannot exceed 31, and ds must be less than or equal to dp. Therefore, the corresponding type in OceanBase Database in Oracle compatible mode is the NUMBER type.

      The maximum storage size of a number in OceanBase Database in Oracle compatible mode is limited. The default length of the NUMBER, INT, SMALLINT, and NUMBER(*, s) types in OceanBase Database in Oracle compatible mode is 38. Therefore, you must explicitly define the NUMBER(p,s) type and set its length to a value that is compatible with both the business requirements and the source and destination databases.
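      For example, a sketch of an explicit definition that fits both sides (table and column names are hypothetical):

      ```sql
      -- Explicit precision and scale that fit both DB2 LUW DECIMAL (p <= 31)
      -- and the OceanBase Database NUMBER limit (precision <= 38)
      CREATE TABLE payments (amount NUMBER(31,2));
      ```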

  • Data type limitations

    • If you convert a data type in DB2 LUW databases to the LOB type in OceanBase Database in Oracle compatible mode, the data stored in the LOB type cannot exceed 48 MB.

    • The TIME type in DB2 LUW databases cannot be used as a partitioning key for migration.

    • XML data types are not supported.

    • We recommend that you do not define columns with the CODEUNITS16 or CODEUNITS32 encoding units or use multibyte storage types such as NCHAR or GRAPHIC.

    • You cannot modify the default value of the BLOB data type.

Procedure

  1. Create a data migration task.


    1. Log in to the OMS console.

    2. In the left-side navigation pane, click Data Migration.

    3. On the Data Migration page, click Create Task at the top right.

  2. On the Select Source and Target page, configure parameters.

    | Parameter | Description |
    |----|----|
    | Migration Task Name | The name can contain Chinese characters, digits, and English letters. It cannot contain spaces, and its maximum length is 64 characters. |
    | Source | If you have created a DB2 LUW data source, select it from the drop-down list. Otherwise, click **New Data Source** in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create a DB2 LUW data source.<br>**Note:** the columns that form a unique key in a DB2 LUW database must be NOT NULL. |
    | Target | If you have created an Oracle-compatible data source for OceanBase Database (physical or public cloud data source), select it from the drop-down list. Otherwise, click **New Data Source** in the drop-down list and create one in the dialog box that appears. For more information, see Create an OceanBase physical data source or Create an OceanBase public cloud data source. |
    | Tag (Optional) | Click the text box and select a target tag from the drop-down list. You can also click **Manage Tags** to create, edit, or delete tags. For more information, see Manage data migration tasks by using tags. |
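    The note above means that a unique key on the DB2 LUW source should be declared over NOT NULL columns only, for example (schema, table, and column names are hypothetical):

    ```sql
    CREATE TABLE myschema.users (
      id   INTEGER     NOT NULL,
      code VARCHAR(32) NOT NULL,
      CONSTRAINT uk_users UNIQUE (id, code)
    );
    ```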

  3. Click **Next**. In the message that appears, click **Noted**.

Note: This task applies to tables and views with a primary key or a non-null unique index, and automatically filters out others.


  4. On the Select Migration Type page, configure the parameters.


**Migration Type** includes **Schema Migration**, **Full Migration**, **Incremental Synchronization**, **Full Validation**, and **Reverse Increment**.

| Migration type | Description |
|------|----------|
| Schema migration | After the schema migration task starts, OMS migrates schema objects, including tables, indexes, constraints, comments, and views, from the source to the destination database, and automatically filters out temporary tables. |
| Full migration | After the full migration task starts, OMS migrates the existing data of the source tables to the corresponding tables in the destination database. If you select **Full Migration**, we recommend that you use the RUNSTATS statement to collect statistics on the DB2 LUW database before data migration. |
| Incremental synchronization | After the incremental synchronization task starts, OMS synchronizes changed data (data that is added, modified, or deleted) in the source database to the corresponding tables in the destination database.<br>**Incremental Synchronization** includes DML synchronization and DDL synchronization, which you can customize as needed. For more information, see Customize DDL/DML settings. **Incremental Synchronization** has the following limitations:<ul><li>Incremental synchronization is not supported if OMS is deployed in ARM mode.</li><li>If you select DDL synchronization, data migration may be interrupted when DDL operations not supported by OMS are performed on the source database.</li><li>If a DDL operation adds a column, we recommend that you set the column to nullable, which prevents data migration from being interrupted.</li></ul> |
| Full verification | After the full migration is completed and incremental data is synchronized to the destination, OMS automatically verifies the data in the source and destination tables.<ul><li>We recommend that you collect statistics on the DB2 LUW database and the OceanBase database in Oracle compatible mode before you start a full verification task. For more information, see Manually collect statistics.</li><li>If you select **Incremental Synchronization** but do not select all DML operations, OMS does not support full verification in this scenario.</li></ul> |
| Reverse incremental synchronization | After the reverse incremental synchronization task starts, incremental data generated in the destination database is synchronized to the source database in real time. Typically, reverse incremental synchronization reuses the incremental synchronization configuration, but you can also customize it as needed. **Reverse Increment** cannot be selected in the following cases:<ul><li>Multiple tables are involved in a join.</li><li>The schema has a one-to-many mapping.</li></ul> |
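The RUNSTATS recommendation above can be carried out on the DB2 LUW side before full migration, for example (schema and table names are hypothetical):

```sql
-- Collect table and index statistics, including distribution statistics
RUNSTATS ON TABLE myschema.orders WITH DISTRIBUTION AND DETAILED INDEXES ALL;
```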

  5. (Optional) Click **Next**.

If you select Reverse Increment, but the corresponding parameters for the target OceanBase database in Oracle compatible mode are not configured, the Add Data Source Information dialog box pops up, prompting you to configure the parameters. For more information, see Create an OceanBase physical data source or Create an OceanBase public cloud data source.

Click Test Connection. After the test connection succeeds, click OK.

  6. Click **Next**, and then, on the Select Migration Objects page, select the migration objects and migration scope.

You can select a migration object through the Specify Objects and Match by Rule tabs. This topic describes how to select a migration object through the Specify Objects tab. For more information about how to configure the matching rules, see Configure matching rules.

Caution

  • The name of the table and its columns to be migrated cannot contain Chinese characters.

  • Data migration tasks cannot be created when the names of the database or table contain the $$ symbol.

  • When you select **DDL synchronization** in the **Select Migration Type** step, we recommend that you select migration objects by using matching rules, to ensure that all new objects that match the rules are synchronized. If you select migration objects by specifying them, new objects and renamed objects will not be synchronized.

  • OMS automatically filters out tables that are not supported. For more information about how to query such tables, see [SQL for Querying Table Objects](../1200.reference-guide/500.select-sql.md).
    


    1. In the Select Migration Objects section, select Specify Objects.

    2. In the Specify Migration Scope section, select the objects you want to migrate from the Source Object(s) list. You can select tables and views from one or more databases as migration objects.

    Click **>** to add them to the **Target Object(s)** list.

    You can use the OMS text import feature to import objects, rename objects, set row filters, view column information, and remove single or all migration objects.

    Note

      When you use matching rules to select migration objects, renaming is covered by the matching rule syntax, and you can only set filter conditions. For more information, see [Configure Matching Rules](../1200.reference-guide/90.function-introduction/600.configure-matching-rules-for-migration-objects.md).
    | Action | Procedure |
    |----|----|
    | Import objects | <ol><li>In the list on the right of the **Specify Migration Scope** section, click **Import Objects** in the upper-right corner.</li><li>In the dialog box that appears, click **OK**.<br>**Note:** the imported objects overwrite your previous selections. Proceed with caution.</li><li>In the **Import Migration Objects** dialog box, import the objects to migrate.<br>You can rename databases and tables and set row filter conditions by importing a CSV file. For more information, see Download and import migration object settings.</li><li>Click **Validate**.</li><li>After the validation succeeds, click **OK**.</li></ol> |
    | Rename | OMS allows you to rename migration objects. For more information, see Rename database tables. |
    | Set row filters | OMS supports WHERE conditions for filtering rows. For more information, see SQL condition-based data filtering. |
    | View columns | For information about the columns of a migration object, see the View Column section. |
    | Remove/Remove All | OMS allows you to remove one or more migration objects that are temporarily selected during data mapping.<ul><li>Remove a single migration object: hover over a target object in the list on the right of the **Specify Migration Scope** section, and click the **Remove** button that appears.</li><li>Remove all migration objects: click **Remove All** in the upper-right corner of the list on the right of the **Specify Migration Scope** section. In the dialog box that appears, click **OK**.</li></ul> |
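    As an illustration, a WHERE-based row filter is an expression over the source table's columns; the column name below is hypothetical:

    ```sql
    -- Migrate only rows created on or after January 1, 2024
    gmt_create >= '2024-01-01'
    ```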

  7. Click **Next**. On the **Migration Options** page, configure the parameters.
    
      * Full migration

        On the **Select Migration Type** page, select **Full Migration**. The following parameters are displayed:

        ![oms17-en](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/oms/oms-enterprise/oms17-en.png)

        |Parameter|Description|
        |----|----------------------------------|
        | Full Data Migration Rate Limit | You can choose whether to enable the full data migration rate limit based on your needs. If you enable this limit, set the RPS (maximum number of data rows that can be migrated to the destination per second during the full data migration stage) and BPS (maximum amount of data that can be migrated to the destination per second during the full data migration stage).<main id="notice" type='explain'><h4>Note</h4><p>The RPS and BPS set here are only used for throttling. The actual performance of full data migration is affected by factors such as the source, the destination, and instance specifications.</p></main> |
        | Full Data Migration Resource Allocation | You can choose the default read concurrency, write concurrency, and memory values, or customize the resource configuration for full data migration. The resource configuration of the full data import component Full-Import limits resource usage during the full data migration stage.<main id="notice" type='notice'><h4>Notice</h4><p>When you customize the configuration, the minimum value is 1, and only integer values are supported.</p></main> |
        | Target Table Strategy | The target table strategy includes **Ignore** and **Stop Migration**:<ul><li>Select **Ignore**: when the data in the target table conflicts with the data to be written, OMS keeps the original data in the target table and ignores the conflicting data.<main id="notice" type='notice'><h4>Notice</h4><p>If <b>Ignore</b> is selected, full verification uses the IN mode to pull data from the target table, and cannot detect data that exists in the target table but not in the source table. In addition, verification performance is significantly affected.</p></main></li><li>Select **Stop Migration**: if the target table contains data, OMS reports the error "Migration is not allowed, because the target table contains data." You must process the data in the target table before the migration can proceed.<main id="notice" type='notice'><h4>Notice</h4><p>If you resume the task after this error occurs, the migration continues and the table strategy is ignored. Proceed with caution.</p></main></li></ul> |
        | Allow Index Creation After Full Data Migration | You can choose whether indexes are created after full data migration is completed. Creating indexes after full data migration shortens the time required for full data migration. For the usage limits of this feature, see the following notice.<main id="notice" type='notice'><h4>Notice</h4><ul><li><p>This option is available only when both <b>Schema Migration</b> and <b>Full Data Migration</b> are selected on the <b>Select Migration Type</b> page.</p></li><li>Only non-unique indexes can be created after full data migration.</li><li>Creating indexes after full data migration is not supported for OceanBase Database V1.x.</li></ul></main> |
    
        If you allow index creation after full data migration, we recommend that you adjust the following parameters based on the hardware conditions of OceanBase Database and the current business traffic.
        * If you use OceanBase Database V4.x, adjust the following sys tenant and business tenant parameters by using a command-line tool.
          * Adjust the parameters of the sys tenant.

            ```sql
            -- parallel_servers_target sets the queuing condition for parallel queries on each server.
            -- For better performance, we recommend that you set it to 1.5 times the number of physical
            -- CPU cores, capped at 64 to avoid lock contention within OceanBase Database.
            set global parallel_servers_target = 64;
            ```
          * Adjust the parameters of the business tenant.

            ```sql
            -- _temporary_file_io_area_size limits the size (in MB) of the memory buffer
            -- used for temporary file I/O. The minimum value is 1 MB.
            alter system set _temporary_file_io_area_size = '10' tenant = 'xxx';

            -- In V4.x, disable rate limiting for background network bandwidth.
            alter system set sys_bkgd_net_percentage = 100;
            ```
        * If you use OceanBase Database V2.x or V3.x, adjust the following sys tenant parameters by using a command-line tool.

          ```sql
          -- parallel_servers_target sets the queuing condition for parallel queries on each server.
          -- For better performance, we recommend that you set it to 1.5 times the number of physical
          -- CPU cores, capped at 64 to avoid lock contention within OceanBase Database.
          set global parallel_servers_target = 64;

          -- data_copy_concurrency specifies the maximum number of data migration and replication
          -- tasks that can be executed concurrently in the system.
          alter system set data_copy_concurrency = 200;
          ```
      * Incremental synchronization

        On the **Select Migration Type** page, select **Incremental Synchronization**. The following parameters are displayed:

        ![oms18-en](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/oms/oms-enterprise/oms18-en.png)

        |Parameter|Description|
        |----|----------------------------------|
        | Incremental Synchronization Rate Limit | You can choose whether to enable the incremental synchronization rate limit based on your needs. If you enable this feature, set the RPS (maximum number of data rows that can be synchronized to the destination per second during the incremental synchronization phase) and BPS (maximum amount of data that can be synchronized to the destination per second during the incremental synchronization phase).<main id="notice" type='explain'><h4>Note</h4><p>The RPS and BPS set here are only used for throttling. The actual performance of incremental synchronization is affected by factors such as the source, the destination, and instance specifications.</p></main> |
        | Resource Allocation for Incremental Log Pulling | You can choose **Small**, **Medium**, or **Large** as the default memory value for incremental log pulling, or customize the resource configuration. The resource configuration of the Store component limits the resources used to pull logs during the incremental synchronization phase of the task.<main id="notice" type='notice'><h4>Notice</h4><p>When you customize the configuration, the minimum value is 1, and only integer values are supported.</p></main> |
        | Resource Allocation for Incremental Data Writing | You can choose **Small**, **Medium**, or **Large** as the default write concurrency and memory values, or customize the resource configuration for incremental data writing. The resource configuration of the incremental synchronization component Incr-Sync limits the resources consumed by data writing during the incremental synchronization phase of the task.<main id="notice" type='notice'><h4>Notice</h4><p>When you customize the configuration, the minimum value is 1, and only integer values are supported.</p></main> |
        | Incremental Record Retention Duration | The length of time that OMS retains cached incremental parsing files. The longer the retention period, the more disk space the Store component consumes. |
        | Incremental Synchronization Start Time | <ul><li>If **Full Migration** is selected as a migration type, this parameter is not displayed.</li><li>If **Full Migration** is not selected but **Incremental Synchronization** is, specify the point in time from which synchronization starts. The default is the current system time. For more information, see [Incremental synchronization start time](../1200.reference-guide/90.function-introduction/500.incremental-synchronization-timestamp.md).</li></ul> |
    
    
    
      * Reverse incremental synchronization

        On the **Select Migration Type** page, select **Reverse Increment**. The corresponding parameters are then displayed. The configuration parameters of reverse incremental synchronization are the same as those of incremental synchronization, and you can select **Reuse Incremental Synchronization Configuration**.
    


      * Full verification

        On the **Select Migration Type** page, select **Full Validation**. The following parameters are displayed:

        ![oms26-en](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/oms/oms-enterprise/oms26-en.png)

        |Parameter|Description|
        |----|----------------------------------|
        | Full Verification Resource Allocation | You can choose **Small**, **Medium**, or **Large** as the default read concurrency and memory values, or customize the resource configuration for full verification. The resource configuration of the Full-Verification component limits the resources consumed during the full verification phase of the task.<main id="notice" type='notice'><h4>Notice</h4><p>When you customize the configuration, the minimum value is 1, and only integer values are supported.</p></main> |
    
    • Advanced options

    The parameters in this section are displayed only when the destination OceanBase database in Oracle compatible mode is V4.3.0 or later and you have selected **Schema Migration** or **Incremental Synchronization** > **DDL Synchronization** on the **Select Migration Type** page.


    The target table storage types include Default, Row Storage, Column Storage, and Hybrid Row-Column Storage. This setting specifies the storage type of the target table object during schema migration or incremental synchronization. For more information, see default_table_store_format.

    Note

    **Default** adapts to the parameter settings of the destination: for tables created by schema migration or added by incrementally synchronized DDL operations, the schema is written with the storage type determined by the destination's default configuration.
