OceanBase Migration Service V3.4.0 (Enterprise Edition)

Create a project to migrate data from an Oracle database to an Oracle tenant of OceanBase Database

Last Updated: 2026-04-14
What is on this page
Background
Prerequisites
Limits
Data type mappings
Conversion of Oracle table partitions
Check and modify the system configurations of the Oracle instance
Enable archivelog for the source Oracle database
Enable supplemental_log in the source Oracle database
(Optional) Set the system parameters of the Oracle database
Restart the instance and perform log switchover
Create a data migration project


This topic describes how to use OceanBase Migration Service (OMS) to migrate data from an Oracle database to an Oracle tenant of OceanBase Database.

Background

You can create a data migration project in the OMS console to seamlessly migrate the existing business data and incremental data from an Oracle database to an Oracle tenant of OceanBase Database by using the schema migration, full migration, and incremental data synchronization features.

The Oracle database supports the following modes: single primary database, single standby database, and primary/standby databases. The data migration operations supported for each mode are as follows:

  • Single primary database: schema migration, full migration, incremental synchronization, full verification, and reverse incremental migration.

  • Single standby database: schema migration, full migration, incremental synchronization, and full verification.

  • Primary/standby databases: reverse incremental migration on the primary database; schema migration, full migration, incremental synchronization, and full verification on the standby database.

Prerequisites

  • You have created a corresponding schema in the destination Oracle tenant of OceanBase Database. OMS migrates only tables and columns, so you must create the corresponding schema in the destination database before migration.

  • You have enabled archivelog for the source Oracle instance and switched the logfile before OMS starts incremental data replication.

  • You have installed LogMiner in the source Oracle instance, and LogMiner runs properly.

    LogMiner enables OMS to obtain data from the archived logs of the Oracle instance. A quick functional check is sketched after this list.

  • You have created a dedicated database user in the source Oracle database and the destination Oracle tenant of OceanBase Database for data migration and granted the corresponding privileges to the users. For more information, see Create a database user.

  • You have made sure that the Oracle instance has enabled the database-level or table-level supplemental_log feature.

  • If you enable supplemental_log of the primary key and unique key at the database level, tables that do not need to be synchronized also generate a large number of unnecessary logs, which increases the log-fetching pressure on LogMiner Reader and the load on the Oracle database. Therefore, the OMS console lets you enable only the table-level supplemental_log of the primary key and unique key for Oracle databases. However, if you configure Set ETL Options to filter columns other than the primary key and unique key columns when you create a migration task, enable supplemental_log for the corresponding columns or for all columns.

  • Clock synchronization (such as the NTP service) is required between an Oracle server and the OMS server to avoid data risks. For an Oracle RAC, clock synchronization is also required between Oracle instances.
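
The following is a minimal sketch of a LogMiner functional check, run in SQL*Plus as a privileged user on the source instance. The archived log path is a placeholder, and the sketch assumes the online catalog is usable as the LogMiner dictionary.

    -- Register one archived log file (the path is illustrative) and start LogMiner.
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/1_123_456.arc', OPTIONS => DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

    -- If LogMiner runs properly, this returns parsed redo records from the registered log.
    SELECT SCN, OPERATION, SEG_OWNER, TABLE_NAME FROM V$LOGMNR_CONTENTS WHERE ROWNUM <= 10;

    -- Release the LogMiner session.
    EXECUTE DBMS_LOGMNR.END_LOGMNR;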

Limits

  • OMS supports the following Oracle database versions: 10g, 11g, 12c, 18c, and 19c. Version 12c and later provide container databases (CDBs) and pluggable databases (PDBs).

  • In long-term synchronization between databases, OMS does not support triggers in the destination database.

  • Data type limits

    • Incremental data migration is not supported for a table whose data in all columns is of the following three large object (LOB) types: BLOB, CLOB, and NCLOB.

    • If a table has no primary key and contains data of a LOB type, the reverse incremental migration of the table may suffer from poor data quality.

    • If a LOB field in the source Oracle database is too large to be stored in OceanBase Database, data synchronization errors occur.

  • OMS supports migrating data from source Oracle instances that use the AL32UTF8, AL16UTF16, ZHS16GBK, or GB18030 character set. If the source database uses UTF-8, we recommend that the destination database use UTF-8 or a superset of it.

  • If you select a migration mode that supports incremental synchronization and reverse incremental migration, and an exception occurs when OMS pulls the incremental data from a standby Oracle database, you can run the ALTER SYSTEM SWITCH LOGFILE command in the primary database to handle the exception.

  • When you migrate a table without the primary key from an Oracle database to an Oracle tenant of OceanBase Database, do not perform any operations on the source Oracle database that may change the ROWID, such as data import and export, Alter Table, FlashBack Table, and partition splitting or compaction.

  • If a new table without the primary key is added in the source Oracle database during the incremental synchronization, OMS does not automatically delete the hidden columns and the unique index added to the table in the destination Oracle tenant of OceanBase Database. You need to manually delete them before you start a reverse migration task.

    To confirm the tables without the primary key that are added during the incremental synchronization, view the manual_table.log file in the logs/msg/ directory.

  • Daylight Saving Time (DST) was once observed in China, so a one-hour time difference between the source and the destination is expected for data of the TIMESTAMP(6) WITH TIME ZONE type that was generated during the DST periods from 1986 to 1991 and from April 10 to 17, 1988.

  • In a project of reverse incremental migration from an Oracle database to an Oracle tenant of OceanBase Database, when the Oracle tenant is of a version earlier than V3.2.x and has a multi-partition table with global unique indexes, if you update the value of a partitioning key of the table, data may be lost during migration.

  • When the Oracle tenant of the destination OceanBase database is earlier than V2.2.70, foreign keys, checks, and other objects added during the switchover may not be supported.

  • Limits on character encoding and reverse synchronization:

    If the source and destination databases use different character sets, a field length extension policy will be provided during schema migration. For example, the field length is extended by 1.5 times, and the length unit is changed from BYTE to CHAR.

    This ensures that data encoded by using different character sets can be migrated from the source database to the destination database. However, after cutover, data may fail to be written back to the source database during reverse incremental migration because it exceeds the source column length.

  • If forward switchover is not started in a data migration project, delete the unique indexes and pseudocolumns from the source database. If you do not delete them, data cannot be written, and the pseudocolumns will be generated again when data is imported into the downstream system, conflicting with the pseudocolumns in the source database.

  • If you change the unique index of the destination, you must restart incremental synchronization; otherwise, data may become inconsistent.

Data type mappings

Notice

  • Data of the CLOB and BLOB types must be less than 48 MB in size.

  • Data of the LONG, ROWID, BFILE, LONG RAW, XMLType, and UDT types cannot be migrated.

The following list maps each Oracle data type to its counterpart in an Oracle tenant of OceanBase Database:

  • CHAR → CHAR
  • NCHAR → NCHAR
  • VARCHAR2 → VARCHAR2
  • NVARCHAR2 → NVARCHAR2
  • NUMBER → NUMBER
  • NUMBER(p, s) → NUMBER(p, s)
  • LONG → Full migration and verification of the data are supported. Incremental synchronization is not supported.
  • RAW → RAW
  • CLOB → CLOB
  • NCLOB → Not supported for conversion in Oracle tenants of OceanBase Database earlier than V2.2.50; converted to NVARCHAR2 in V2.2.50 and later.
  • BLOB → BLOB
  • FLOAT(n) → NUMBER(n*0.30103) in Oracle tenants of OceanBase Database earlier than V2.2.30; FLOAT in V2.2.30 and later.
  • BINARY_FLOAT → BINARY_FLOAT
  • BINARY_DOUBLE → BINARY_DOUBLE
  • DATE → DATE
  • TIMESTAMP → TIMESTAMP
  • TIMESTAMP WITH TIME ZONE → TIMESTAMP WITH TIME ZONE
  • TIMESTAMP WITH LOCAL TIME ZONE → TIMESTAMP WITH LOCAL TIME ZONE
  • INTERVAL YEAR(p) TO MONTH → INTERVAL YEAR(p) TO MONTH
  • INTERVAL DAY(p) TO SECOND → INTERVAL DAY(p) TO SECOND
  • ROWID → Not supported
  • BFILE → Not supported
  • LONG RAW → Full migration and verification of the data are supported. Incremental synchronization is not supported.
  • XMLType → Not supported
  • UDT → Not supported
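
A note on the FLOAT(n) rule, for orientation (the factor is not explained in the mapping itself): 0.30103 ≈ log10(2), so n*0.30103 converts binary precision to decimal digits. For example, FLOAT(126) yields 126 × 0.30103 ≈ 37.9, that is, a NUMBER precision of roughly 38.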

Conversion of Oracle table partitions

When OMS is used to migrate data from an Oracle database, the system automatically converts your business SQL statements. However, the conversion performed in OceanBase Database V2.2.30 is different from that in OceanBase Database V2.2.50.

Note

The partition conversion rules described in this topic apply to all partitioning types.

Each example below shows a source table definition, followed by the table after conversion in OceanBase Database V2.2.30 and after conversion in OceanBase Database V2.2.50 and later.

Example 1. Source table definition:

CREATE TABLE T_RANGE_0 (
  A INT,
  B INT,
  PRIMARY KEY (B)
)PARTITION BY RANGE(A)(
....
);

After conversion in OceanBase Database V2.2.30:

CREATE TABLE "T_RANGE_0" (
  "A" NUMBER,
  "B" NUMBER NOT NULL,
  PRIMARY KEY ("B", "A")
)PARTITION BY RANGE ("A")(
....
);
CREATE UNIQUE INDEX ON "T_RANGE_0"(B);

Conversion logic in V2.2.30:
  • The primary key column does not contain the partition column.
  • The partition column is a physical column.
  • A composite primary key is formed by joining the partition column with the primary key column.
  • A global unique index is added to the original primary key column.

After conversion in OceanBase Database V2.2.50 and later:

CREATE TABLE "T_RANGE_0" (
  "A" NUMBER,
  "B" NUMBER NOT NULL,
  CONSTRAINT "T_RANGE_10_UK" UNIQUE ("B")
)PARTITION BY RANGE ("A")(
....
);

Example 2. Source table definition:

CREATE TABLE T_RANGE_10 (
  "A" INT,
  "B" INT,
  "C" DATE,
  "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL,
  CONSTRAINT "T_RANGE_10_PK" PRIMARY KEY (A)
)PARTITION BY RANGE(D)(
....
);

After conversion in OceanBase Database V2.2.30:

CREATE TABLE T_RANGE_10 (
  "A" INT NOT NULL,
  "B" INT,
  "C" DATE,
  "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL,
  CONSTRAINT "T_RANGE_10_PK" PRIMARY KEY (A, C)
)PARTITION BY RANGE(D)(
....
);

After conversion in OceanBase Database V2.2.50 and later:

CREATE TABLE T_RANGE_10 (
  "A" INT NOT NULL,
  "B" INT,
  "C" DATE,
  "D" NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR("C",'dd'))) VIRTUAL,
  CONSTRAINT "T_RANGE_10_PK" UNIQUE (A)
)PARTITION BY RANGE(D)(
....
);

Example 3. Source table definition:

CREATE TABLE T_RANGE_1 (
  A INT,
  B INT,
  UNIQUE (B)
)PARTITION BY RANGE(A)(
partition P_MAX values less than (10)
);

After conversion in OceanBase Database V2.2.30:

-- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_1"

In OceanBase Database V2.2.50 and later, the source table definition is supported as is.

Example 4. Source table definition:

CREATE TABLE T_RANGE_2 (
  A INT,
  B INT NOT NULL,
  UNIQUE (B)
)PARTITION BY RANGE(A)(
partition P_MAX values less than (10)
);

After conversion in OceanBase Database V2.2.30:

CREATE TABLE "T_RANGE_2" (
  "A" NUMBER,
  "B" NUMBER NOT NULL,
  PRIMARY KEY ("B", "A")
)PARTITION BY RANGE ("A")(
....
);

In OceanBase Database V2.2.50 and later, the source table definition is supported as is.

Example 5. Source table definition:

CREATE TABLE T_RANGE_3 (
  A INT,
  B INT,
  UNIQUE (A)
)PARTITION BY RANGE(A)(
....
);

After conversion in OceanBase Database V2.2.30:

-- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_2"

In OceanBase Database V2.2.50 and later, the source table definition is supported as is.

Example 6. Source table definition:

CREATE TABLE T_RANGE_4 (
  A INT NOT NULL,
  B INT,
  UNIQUE (A)
)PARTITION BY RANGE(A)(
....
);

The conversion is the same in V2.2.30 and in V2.2.50 and later:

CREATE TABLE "T_RANGE_4" (
  "A" NUMBER NOT NULL,
  "B" NUMBER,
  PRIMARY KEY ("A")
)PARTITION BY RANGE ("A")(
....
);

Example 7. Source table definition:

CREATE TABLE T_RANGE_5 (
  A INT,
  B INT,
  UNIQUE (A, B)
)PARTITION BY RANGE(A)(
partition P_MAX values less than (10)
);

After conversion in OceanBase Database V2.2.30:

-- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_5"

In OceanBase Database V2.2.50 and later, the source table definition is supported as is.

Example 8. Source table definition:

CREATE TABLE T_RANGE_6 (
  A INT NOT NULL,
  B INT,
  UNIQUE (A, B)
)PARTITION BY RANGE(A)(
partition P_MAX values less than (10)
);

After conversion in OceanBase Database V2.2.30:

-- [WARNING] Create global index on no primary key table is unsupported. Object: "GUYUE"."T_RANGE_5"

In OceanBase Database V2.2.50 and later, the source table definition is supported as is.

Example 9. Source table definition:

CREATE TABLE T_RANGE_7 (
  A INT NOT NULL,
  B INT NOT NULL,
  UNIQUE (A, B)
)PARTITION BY RANGE(A)(
partition P_MAX values less than (10)
);

The conversion is the same in V2.2.30 and in V2.2.50 and later:

CREATE TABLE "T_RANGE_7" (
  "A" NUMBER NOT NULL,
  "B" NUMBER NOT NULL,
  PRIMARY KEY ("A", "B")
)PARTITION BY RANGE ("A")(
....
);

Example 10. Source table definition:

CREATE TABLE T_RANGE_8 (
  "A" INT,
  "B" INT,
  "C" INT NOT NULL,
  UNIQUE (A),
  UNIQUE (B),
  UNIQUE (C)
)PARTITION BY RANGE(B)(
partition P_MAX values less than (10)
);

After conversion in OceanBase Database V2.2.30:

CREATE TABLE "T_RANGE_8" (
  "A" NUMBER,
  "B" NUMBER,
  "C" NUMBER NOT NULL,
  PRIMARY KEY ("C", "B"),
  UNIQUE ("A"),
  UNIQUE ("B"),
  UNIQUE ("C")
)PARTITION BY RANGE ("B")(
....
);

In OceanBase Database V2.2.50 and later, the source table definition is supported as is.

Example 11. Source table definition:

CREATE TABLE T_RANGE_9 (
  "A" INT,
  "B" INT,
  "C" INT NOT NULL,
  UNIQUE(A),
  UNIQUE(B),
  UNIQUE (C)
)PARTITION BY RANGE(C)(
partition P_MAX values less than (10)
);

The conversion is the same in V2.2.30 and in V2.2.50 and later:

CREATE TABLE "T_RANGE_9" (
  "A" NUMBER,
  "B" NUMBER,
  "C" NUMBER NOT NULL,
  PRIMARY KEY ("C"),
  UNIQUE ("A"),
  UNIQUE ("B")
)PARTITION BY RANGE ("C")(
....
);

Check and modify the system configurations of the Oracle instance

Perform the following operations:

  1. Enable archivelog for the source Oracle database.

  2. Enable supplemental_log in the source Oracle database.

  3. Set the system parameters of the Oracle database.

  4. Restart the instance and perform the archivelog switchover three times.

Enable archivelog for the source Oracle database

Execute the following statement to check whether archivelog is enabled:

select log_mode from v$database;

The value of the log_mode field must be ARCHIVELOG. Otherwise, perform the following steps to change it:

  1. Run the following commands to enable archivelog.

    shutdown immediate;
    startup mount;
    alter database archivelog;
    alter database open;
    
  2. Run the following command to view the path and quota of archived logs.

    We recommend that you set the db_recovery_file_dest_size parameter to a relatively large value. After you enable archivelog, you need to regularly clear the archived logs by using RMAN or other methods (see the sketch at the end of this section).

    show parameter db_recovery_file_dest;
    
  3. Change the quota of archived logs as needed.

    alter system set db_recovery_file_dest_size=50G scope=both;
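
As noted in step 2, archived logs must be cleared regularly once archivelog is enabled. The following is only an illustrative sketch of an age-based cleanup in RMAN; the seven-day window is an assumption to adapt to your backup and retention policy.

    # Run in an RMAN session connected to the source database.
    # Deletes archived logs that completed more than seven days ago.
    DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';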
    

Enable supplemental_log in the source Oracle database

LogMiner Reader supports Oracle databases in which only table-level supplemental_log is enabled. If you create new tables in the Oracle instance during the migration, you must enable supplemental_log for the primary key and unique key before performing DML operations on them. Otherwise, OMS returns an incomplete-logs exception.

Notice

You need to enable supplemental_log in the primary Oracle database.

If the indexes are inconsistent between the source and destination databases, if ETL does not meet expectations, or if the migration performance of partitioned tables deteriorates, you need to add the following supplemental_logs:

  • Add the database-level or table-level supplemental_log_data_pk and supplemental_log_data_ui.

  • Add columns to the supplemental_logs.

    • Add all columns involved in the primary keys or unique keys of the source and destination databases, to resolve index inconsistency between the source and destination databases.

    • If an ETL exists, add the ETL column to resolve the problem that the ETL does not meet the expectation.

    • If the destination table is a partitioned table, add a partition column to resolve the problem that the write performance deteriorates because partition pruning cannot be performed.

    You can execute the following statement to check the addition result. Replace Database and Table with the actual schema name and table name.

    select log_group_type from all_log_groups where owner = 'Database' and table_name = 'Table';
    

    If the check result includes ALL COLUMN LOGGING, the check is passed. Otherwise, check whether the ALL_LOG_GROUP_COLUMNS table contains all preceding columns.

    Sample statement for adding columns to supplemental_logs:

    alter table <table_name> add supplemental log group <table_name_group> (c1, c2) always;
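
    To check coverage at the column level, a query along the following lines — Database and Table are placeholders again — lists the columns already included in table-level log groups:

    select log_group_name, column_name from all_log_group_columns where owner = 'Database' and table_name = 'Table';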
    

The following list describes the possible risks and solutions when you perform DDL operations in a running data migration project.

  • CREATE TABLE (table to be synchronized)
    Risk: if the table in the destination database is a partitioned table, if the table indexes in the source and destination databases are inconsistent, or if ETL is required, the data migration performance may be affected and ETL may not meet expectations.
    Solution: enable database-level primary key and unique key supplemental_logs, and manually add the involved columns to the supplemental_logs.

  • Add, delete, or modify the primary key, unique key, or partition column, or modify the ETL column
    Risk: this violates the rule of adding supplemental_logs upon startup and may result in data inconsistency or reduced migration performance.
    Solution: add supplemental_logs based on the preceding rules.

LogMiner Reader uses one of the following two methods to check whether supplemental_log is enabled. If not, LogMiner Reader exits.

  • Enable supplemental_log_data_pk and supplemental_log_data_ui at the database level.

    Run the following command to check whether supplemental_log is enabled. If both returned values are YES, supplemental_log is enabled.

    select supplemental_log_data_pk, supplemental_log_data_ui from v$database;
    

    Otherwise, perform the following steps:

    1. Execute the following statement to enable the supplemental_log.

      alter database add supplemental log data(primary key, unique) columns;
      
    2. Perform archivelog switchover three times. For an Oracle RAC, perform switchover for the instances alternately.

      alter system switch logfile;
      

      The reason for performing the archivelog switchover three times:

      When the Oracle Store locates the start time to pull log files, it rolls back 0 to 2 archived logs based on the specified timestamp. Therefore, after you enable the supplemental_log, you need to perform the archivelog switchover three times to prevent the store from pulling the logs that are generated before the specified timestamp. Otherwise, the store exits unexpectedly.

      The reason for alternately performing the archivelog switchover among multiple instances in an RAC system:

      In an Oracle RAC system, if you perform the archivelog switchover multiple times on one instance, when you perform the archivelog switchover on the next instance, the latter instance may pull the logs that are generated before the supplemental_log is enabled.

  • Enable supplemental_log_data_pk and supplemental_log_data_ui at the table level.

    1. Execute the following statement to confirm whether supplemental_log_data_min is enabled at the database level.

      select supplemental_log_data_min from v$database;
      

      If the returned value is YES or IMPLICIT, the supplemental_log is enabled.

    2. Execute the following statement to check whether the table-level supplemental_log is enabled for the tables to be synchronized.

      select log_group_type from all_log_groups where owner = 'xxx' and table_name = 'yyy';
      

      Each type of supplemental_log returns one row. The results must contain ALL COLUMN LOGGING or both PRIMARY KEY LOGGING and UNIQUE KEY LOGGING.

      If the table-level supplemental_log is not enabled, execute the following statement.

      alter table table_name add supplemental log data(primary key, unique) columns;
      
    3. Perform archivelog switchover three times. For an Oracle RAC, perform switchover for the instances alternately.

      alter system switch logfile;
      

(Optional) Set the system parameters of the Oracle database

We recommend that you set the _log_parallelism_max parameter of the Oracle database to 1. The default value is 2.

You can use one of the following two methods to query the value of the _log_parallelism_max parameter.

  • Method 1

    SELECT NAM.KSPPINM,VAL.KSPPSTVL,NAM.KSPPDESC FROM SYS.X$KSPPI NAM,SYS.X$KSPPSV VAL WHERE NAM.INDX= VAL.INDX AND NAM.KSPPINM LIKE '_%' AND UPPER(NAM.KSPPINM) LIKE '%LOG_PARALLEL%';
    
  • Method 2

    select value from v$parameter where name = '_log_parallelism_max';
    

Execute one of the following statements to modify the value of the _log_parallelism_max parameter.

  • Oracle RAC

    alter system set "_log_parallelism_max"=1 sid='*' scope=spfile;
    
  • Non-Oracle RAC

    alter system set "_log_parallelism_max"=1 scope=spfile;
    

When you modify the value of the _log_parallelism_max parameter in Oracle Database 10g, if the error message "write to SPFILE requested but no SPFILE specified at startup" is returned, perform the following operations:

create spfile from pfile;   -- create a server parameter file from the current pfile
shutdown immediate;
startup;                    -- restart so that the instance starts from the spfile
show parameter spfile;      -- verify that the spfile is now in use

Restart the instance and perform log switchover

After completing the preceding operations, restart the instance and perform log switchover three times.
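
A minimal sketch of this step in SQL*Plus; for an Oracle RAC, alternate the switchover across instances as described in the previous section:

    shutdown immediate;
    startup;
    -- Perform the log switchover three times.
    alter system switch logfile;
    alter system switch logfile;
    alter system switch logfile;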

Create a data migration project

  1. Create a migration project.

    1. Log on to the OMS console.

    2. In the left-side navigation pane, click Data Migration.

    3. On the Data Migration page, click Create Migration Project in the upper-right corner.

  2. On the Select Source and Destination page, configure the following parameters:

    • Migration Project Name: we recommend a combination of Chinese characters, digits, and letters. The name must not contain spaces and cannot exceed 64 characters in length.

    • Label: click the field and select the target tag from the drop-down list. You can click Manage Tags to create, modify, and delete tags. For more information, see Use tags to manage data migration projects.

    • Source: if you have created an Oracle data source, select it from the drop-down list. Otherwise, click Create Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create an Oracle data source.

    • Destination: if you have created Oracle tenants of OceanBase Database as data sources, select one from the drop-down list. Otherwise, click Create Data Source in the drop-down list and create one in the dialog box that appears on the right. For more information, see Create OceanBase Database physical tables as a data source.
  3. Click Next. On the Select Migration Type page, specify the following parameters.

    The options available for Migration Type are Schema Migration, Full Migration, Incremental Synchronization, Full Verification, and Reverse Incremental Migration.

    • Full Migration: if you select Full Migration, we recommend that you use the GATHER_SCHEMA_STATS or GATHER_TABLE_STATS procedure to collect statistics on the Oracle database before data migration (see the sketch after this list).

    • Incremental Synchronization: the options are DML Synchronization and DDL Synchronization. The DML operations available for synchronization are Insert, Delete, and Update; select them as needed. For more information, see Supported DDL operations in incremental migration from an Oracle database to an Oracle tenant of OceanBase Database. Incremental Synchronization has the following limits:
      • For Oracle 12c and later, if you select DDL Synchronization, the table name and column name cannot exceed 30 bytes in length when you add or change a column. If you want the database to support table names and column names longer than 30 bytes, set the ENABLE_GOLDENGATE_REPLICATION parameter as the SYS user, and set deliver2store.logminer.need_check_object_length to false.
      • Set ENABLE_GOLDENGATE_REPLICATION as follows. For a Real Application Cluster (RAC) environment, set this parameter on each node. If the Oracle database runs in Active Data Guard (ADG) mode, set this parameter in the ADG source database.
        alter system set ENABLE_GOLDENGATE_REPLICATION=true SCOPE=BOTH;
      • Query ENABLE_GOLDENGATE_REPLICATION as follows:
        SELECT K.KSPPINM,V.KSPPSTVL FROM SYS.X$KSPPI K,SYS.X$KSPPSV V WHERE K.INDX=V.INDX AND UPPER(K.KSPPINM) = 'ENABLE_GOLDENGATE_REPLICATION';
      • If you do not select DDL Synchronization, make sure that no changes are being made in the source database and that the incremental DML data has been synchronized to the destination before you perform DDL modifications. Then, perform the related DDL operations in the source and destination databases respectively.
      • If you do not select DDL Synchronization, perform DDL operations on tables in the migration link in the destination database first. Otherwise, data migration may fail.
      • If you have selected DDL Synchronization and you perform a DDL operation in the source database that OMS does not support for incremental migration, data migration may fail.
      • The source Oracle database does not support incremental synchronization of tables that use the empty_clob() function.

    • Full Verification:
      • If you select Full Verification, we recommend that you collect statistics on both the Oracle database and the Oracle tenant of OceanBase Database before full verification.
      • If you have selected Incremental Synchronization but did not select all DML statements under DML Synchronization, OMS does not support full data verification in this scenario.

    • Reverse Incremental Migration: if a table to migrate has no primary key or unique index and a large amount of its data is changed, reverse incremental migration will take a long time. In this case, you can add unique indexes in the source database.
      You cannot select Reverse Incremental Migration in the following cases:
      • Multi-table aggregation and synchronization are enabled.
      • Multiple schemas are configured in a rule to match one type of objects.
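
    A hedged sketch of the statistics-gathering step recommended for Full Migration, using the standard DBMS_STATS procedures; SCHEMA_NAME and TABLE_NAME are placeholders:

      -- Gather optimizer statistics for a whole schema:
      EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCHEMA_NAME');
      -- Or for a single table:
      EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCHEMA_NAME', tabname => 'TABLE_NAME');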
  4. (Optional) Click Next. If you select Reverse Incremental Migration but the ConfigUrl, username, or password is not configured for the data source of the destination Oracle tenant of OceanBase Database, the More about Data Sources dialog box appears, prompting you to configure related parameters. For more information, see Create OceanBase Database physical tables as a data source.

    After you configure the parameters, click Test Connectivity. After the test succeeds, click Save.

  5. Click Next. On the Select Migration Objects page, select the migration objects and migration scope.

    You can select one of the following two modes to migrate objects: Specify Objects or Match Rules. If you select DDL Synchronization, only the Match Rules option is available.

    • Select Specify Objects. Then select the objects to be migrated on the left and click > to add them to the list on the right. You can select tables and views of one or more databases as the migration objects.

      Notice

      • The name of a table to be migrated and the names of columns in the table must not contain Chinese characters.

      • If the database or table name contains a double dollar sign ($$), you cannot create the migration project.

      When you migrate data from an Oracle database to an Oracle tenant of OceanBase Database, OMS allows you to import objects through text, rename object names, set row filters, view column information, and remove one or all objects to be migrated.

      The supported operations are as follows:

      • Import Objects
        1. In the list on the right of the Specify Migration Scope section, click Import Objects in the upper-right corner.
        2. In the dialog box that appears, click OK.
          Notice: this operation overwrites previous selections. Proceed with caution.
        3. In the Import Objects dialog box, import the objects to be migrated.
          You can import CSV files to rename databases and tables and to set row filtering conditions. For more information, see Download and import the settings of migration objects.
        4. Click Validate.
        5. After the validation succeeds, click OK.

      • Rename
        1. In the list on the right of the Specify Migration Scope section, hover the pointer over the target object.
        2. Click Rename.
        3. Enter a new name and click OK.

      • Settings: OMS allows you to set WHERE conditions to filter data by row and to view column information.
        1. In the list on the right of the Specify Migration Scope section, hover the pointer over the target object.
        2. Click Settings.
        3. In the Settings dialog box, specify a standard SQL WHERE clause to filter data by row (see the example after this list). The setting takes effect for full migration and incremental synchronization.
          Notice:
          • Add escape characters (`) around column names. Example: `col`.
          • Only the data meeting the WHERE condition is synchronized to the destination data source, thereby filtering data by row.
          • If row-based filtering with the WHERE clause is enabled, right-trim is forcibly performed on data of the CHAR or VARCHAR type, which may cause an inaccurate comparison of the VARCHAR data. Proceed with caution.
        4. Click OK.
          You can also view the column information of the migration object in the View Columns section.

      • Remove/Remove All: OMS allows you to remove one or more objects from the destination database during data mapping.
        • Remove a single migration object: in the list on the right of the Specify Migration Scope section, hover the pointer over the target object and click Remove.
        • Remove all migration objects: in the list on the right of the Specify Migration Scope section, click Remove All in the upper-right corner. In the dialog box that appears, click OK.
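
      For illustration only — the column names and values are hypothetical — a row filter entered in the Settings dialog box might look like this:

        `ORDER_STATUS` = 'PAID' AND `GMT_CREATE` >= TO_DATE('2024-01-01', 'YYYY-MM-DD')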

      When the source database is an Oracle database, if row filtering is enabled for columns other than the primary key and unique key columns, enable supplemental_log for the corresponding columns or all columns.

      Statement for enabling supplemental_log for the corresponding columns:

      ALTER TABLE table_name ADD SUPPLEMENTAL LOG GROUP log_group_name (column1, column2, column3) ALWAYS;
      

      Statement for enabling supplemental_log for all columns:

      -- Enable database-level supplemental_log:
      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
      -- Enable table-level supplemental_log:
      ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
      
    • Select Match Rules. For more information, see Configure matching rules for migration objects.

  6. Click Next. On the Migration Options page, configure the following parameters:

    • Concurrency for Full Migration: the value can be Smooth, Normal, or Fast. The amount of resources consumed by a full data migration task varies with the configured performance level. You can also modify the configurations of the checker component to customize the concurrency.
      Notice: to enable this feature, select Full Migration on the Select Migration Type page.

    • Full Verification Concurrency: the value can be Smooth, Normal, or Fast. Different concurrencies consume different amounts of resources in the source and destination databases. You can also modify the configurations of the checker component to customize the concurrency.

    • Incremental Record Retention Time: the duration for which incremental parsed files are cached in OMS. A longer retention period means more disk space occupied by the store component of OMS.

    • Whether to Allow Destination Table to Be Not Empty During Full Migration: if destination tables are allowed to be non-empty during full migration, full verification is performed in IN mode.
      Notice: to enable this feature, select Full Migration on the Select Migration Type page.

    • Whether to Allow Post-indexing: specifies whether indexes are created after full migration is completed. Post-indexing can shorten the duration of full migration.
      Notice:
      • To enable this feature, select both Schema Migration and Full Migration on the Select Migration Type page.
      • Only non-unique key indexes can be created after the migration is completed.
  7. Click Precheck to start a precheck on the data migration project.

    During the precheck, OMS checks the read and write privileges of the database users and the network connections of the databases. The data migration project can be started only after it passes all check items. If an error is returned:

    • You can troubleshoot the error and run the precheck again.

    • You can also click Skip in the Actions column of the precheck item that returns the error. Then, a dialog box appears, indicating the impact that may be caused if you choose to skip this check item. If you want to continue, click OK in the dialog box.

  8. Click Start Project. If you do not need to start the project now, click Save to go to the details page of the data migration project. You can start the project later as needed.

    OMS allows you to modify migration objects while a data migration project is running. For more information, see View and modify migration objects. After a data migration project is started, the migration objects are processed based on the selected migration types. For more information, see the "View migration details" section in View details of a data migration project.
