You can create a data migration task to migrate existing and incremental business data from a MySQL database to a MySQL-compatible tenant of OceanBase Database through schema migration, full data migration, and incremental synchronization.
Notice
A data migration task that remains in an inactive state for a long time may fail to resume, depending on the retention period of incremental logs. A task is inactive when it is in the Failed, Stopped, or Completed state. The data migration service automatically releases tasks that remain inactive for more than 7 days to recycle resources. We recommend that you configure alerting for tasks and handle task exceptions in a timely manner.
Prerequisites
You have created the source database.
If your cloud vendor is AWS, create an Aurora MySQL instance. For more information, see Creating and connecting to an Aurora MySQL DB cluster.
If your cloud vendor is Huawei Cloud, buy an RDS for MySQL database instance. For more information, see Buy a database instance.
If your cloud vendor is Google Cloud, create a Cloud SQL instance. For more information, see Create an instance.
If your cloud vendor is Alibaba Cloud, create an RDS MySQL database instance or a PolarDB MySQL cluster. For more information, see Create an RDS MySQL database instance and Buy a PolarDB MySQL cluster.
You have created the target OceanBase instance and tenant. For more information, see Create an instance and Create a tenant.
You have created dedicated database users in the source MySQL cluster and the target MySQL-compatible tenant of OceanBase Database for data migration, and granted required privileges to the users. For more information, see User privileges.
You have enabled binary logs for the source MySQL database. For more information, see Accessing MySQL binary logs.
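As a sketch of the last two prerequisites, the following statements show how a dedicated migration user is typically created on the source and how to confirm that binary logging is enabled. The user name, password, and privilege list here are illustrative placeholders, not product requirements; follow the User privileges document for the authoritative grants.

```sql
-- On the source MySQL database: create a dedicated migration user (names are examples)
CREATE USER 'migrator'@'%' IDENTIFIED BY '********';
-- Typical privileges for reading data and incremental logs; confirm against User privileges
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'migrator'@'%';

-- Confirm that binary logging is enabled and check its format
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
```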
Limitations
Only users with the Project Owner, Project Admin, or Data Services Admin project role can create data migration tasks.
Limitations on the source database
Do not perform DDL operations that modify database or table schemas during schema migration or full data migration. Otherwise, the data migration task may be interrupted.
At present, the data migration service supports Aurora MySQL 2.x and 3.x and MySQL-compatible tenants of OceanBase Database V3.x, V4.0.0, V4.1.0, V4.2.1, and V4.2.2.
The data migration service supports the migration of only objects whose database names, table names, and column names are ASCII-encoded and contain none of the following special characters: line breaks and . | " ' ` ( ) = ; / &
The precheck fails if the primary key is of the FLOAT or DOUBLE data type. We recommend that you do not specify a column of such a data type as the primary key.
The data migration service does not support triggers in the target database. If triggers exist in the target database, the data migration may fail.
The data migration service does not support an index field greater than 16000 bytes (or 4000 characters) in length in a MySQL database.
The clock of the source database must be synchronized with that of the target database.
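The object-name restriction above can be checked before you configure a task. The following standalone sketch (the helper name is ours, not part of the product) flags names that are not ASCII-encoded or that contain one of the listed special characters:

```python
# Characters disallowed in database, table, and column names,
# per the data migration service's object-name restriction.
SPECIAL_CHARS = set('.|"\'`()=;/&\n\r')

def is_migratable_name(name: str) -> bool:
    """Return True if the name is ASCII-only and free of special characters."""
    return name.isascii() and not (set(name) & SPECIAL_CHARS)

# Example names: only the first one passes the restriction.
for n in ["orders", "order.items", "用户表", "log|2024"]:
    print(n, is_migratable_name(n))
```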
Considerations
The tables to migrate must have PRIMARY KEY constraints or UNIQUE constraints, and field names in the same table must be unique. Otherwise, data inconsistency may occur in the target database.
The host of the MySQL database must have sufficient outbound bandwidth. Insufficient outbound bandwidth on the host will slow down log parsing and data migration, which may increase the latency of data synchronization.
The data migration service cannot change the data type of a custom column. You must change the data type manually if necessary.
The data migration service supports the migration of tables, indexes, and views.
When there are no business data changes in the source, the data migration service cannot acquire new binary logs and data synchronization cannot proceed. This can result in continuous task latency that is not caused by the task itself. To reduce such latency, we recommend that you introduce periodic data writes in the source database.
If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization.
For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.
If collations of the source and target databases are different, a table whose primary key is data of the VARCHAR type fails the data consistency verification.
If you modify a unique index at the target when DDL synchronization is disabled, you must restart the data migration task to avoid data inconsistency.
Check whether the data migration service migrates columns of certain data types, such as DECIMAL, FLOAT, and DATETIME, with the expected precision. If the precision of the target field type is lower than the precision of the source field type, the value with a higher precision may be truncated. This may result in data inconsistency between the source and target fields.
If you select only Incremental Synchronization when you create the data migration task, the data migration service requires that the local incremental logs in the source database be retained for more than 48 hours.
If you select Full Migration and Incremental Synchronization when you create the data migration task, the data migration service requires that the local incremental logs in the source database be retained for at least 7 days. Otherwise, the data migration task may fail or the data in the source and target databases may be inconsistent because the data migration service cannot obtain incremental logs.
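Two of the considerations above, the primary or unique key requirement and the binlog retention window, can be verified before you create the task. The following is a sketch for a MySQL 8.0-compatible source (the retention variable differs on older versions and on some managed services):

```sql
-- Find tables that have neither a PRIMARY KEY nor a UNIQUE constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name  = t.table_name
 AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema')
  AND c.constraint_name IS NULL;

-- Check the binlog retention window (MySQL 8.0); 604800 seconds = 7 days
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
```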
Supported source and target instance types
In the following table, the instance types supported for OceanBase Database in the MySQL compatible mode are Dedicated (Transactional) and Dedicated (Analytical).
| Cloud vendor | Source | Target |
|---|---|---|
| AWS | Aurora MySQL | OceanBase MySQL Compatible Mode |
| AWS | RDS MySQL | OceanBase MySQL Compatible Mode |
| AWS | Self-managed MySQL | OceanBase MySQL Compatible Mode |
| Huawei Cloud | RDS MySQL | OceanBase MySQL Compatible Mode |
| Huawei Cloud | Self-managed MySQL | OceanBase MySQL Compatible Mode |
| Google Cloud | Cloud SQL | OceanBase MySQL Compatible Mode |
| Google Cloud | Self-managed MySQL | OceanBase MySQL Compatible Mode |
| Alibaba Cloud | RDS MySQL | OceanBase MySQL Compatible Mode |
| Alibaba Cloud | PolarDB MySQL | OceanBase MySQL Compatible Mode |
| Alibaba Cloud | Self-managed MySQL | OceanBase MySQL Compatible Mode |
Data type mappings
| MySQL database | MySQL-compatible tenant of OceanBase Database |
|---|---|
| INTEGER | INTEGER |
| TINYINT | TINYINT |
| MEDIUMINT | MEDIUMINT |
| BIGINT | BIGINT |
| SMALLINT | SMALLINT |
| DECIMAL | DECIMAL |
| NUMERIC | NUMERIC |
| FLOAT | FLOAT |
| REAL | REAL |
| DOUBLE PRECISION | DOUBLE PRECISION |
| BIT | BIT |
| CHAR | CHAR |
| VARCHAR | VARCHAR |
| BINARY | BINARY |
| VARBINARY | VARBINARY |
| BLOB | BLOB |
| TEXT | TEXT |
| ENUM | ENUM |
| SET | SET |
| DATE | DATE |
| DATETIME | DATETIME |
| TIMESTAMP | TIMESTAMP |
| TIME | TIME |
| YEAR | YEAR |
Procedure
Create a data migration task.

Log on to the OceanBase Cloud console.
In the left-side navigation pane, click Data Services > Migrations.
On the Migrations page, click the Migrate Data tab.
On the Migrate Data tab, click New Task in the upper-right corner.
On the Configure Source & Target page, configure the parameters.
In the task name field, enter a custom migration task name.
We recommend that you use a combination of Chinese characters, numbers, and English letters. The name cannot contain spaces and must be less than 64 characters in length.
In the Source Profile section, configure the parameters.
If you want to reference the configurations of an existing data source, click Quick Fill next to Source Profile and select the required data source from the drop-down list. The parameters in the Source Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon to the right of Quick Fill.
You can also choose Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage data sources of different types. For more information, see Data sources.
| Parameter | Description |
|---|---|
| Cloud Vendor | At present, the supported cloud vendors are AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud. |
| Database Type | The type of the source database. Select MySQL. |
| Instance Type | - AWS: Aurora MySQL, RDS MySQL, and Self-managed MySQL are supported.<br>- Huawei Cloud: RDS MySQL and Self-managed MySQL are supported.<br>- Google Cloud: Cloud SQL and Self-managed MySQL are supported.<br>- Alibaba Cloud: RDS MySQL, PolarDB MySQL, and Self-managed MySQL are supported. |
| Region | The region of the source database. |
| Connection Type | Available connection types are Endpoint and Public IP.<br>- If you choose the Endpoint connection type, first add the account ID displayed on the page to the whitelist of your endpoint service so that endpoints from that account can connect to the endpoint service. For more information, see Adding private network whitelist.<br>- If your cloud vendor is AWS and the Acceptance required for endpoint connections parameter of the endpoint service is set to Enabled, the data migration service prompts you to accept the endpoint connection request in the AWS console when it connects over the private connection for the first time.<br>- If your cloud vendor is Google Cloud, add authorized projects to Published Services. After authorization, no manual authorization is needed when you test the data source connection. |
| Connection Details | - If Connection Type is set to Endpoint, enter the endpoint service name.<br>- If Connection Type is set to Public IP, enter the IP address and port number of the database host. |
| Database Account | The name of the MySQL database user created for data migration. |
| Password | The password of the database user. |

In the Target Profile section, configure the parameters.
If you want to reference the configurations of an existing data source, click Quick Fill next to Target Profile and select the required data source from the drop-down list. The parameters in the Target Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon to the right of Quick Fill.
You can also choose Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage data sources of different types. For more information, see Data sources.
| Parameter | Description |
|---|---|
| Cloud Vendor | The supported cloud vendors are AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud. You can choose the same cloud vendor as the source, or perform cross-cloud data migration.<br>Notice: Cross-cloud vendor data migration is disabled by default. If you need this feature, contact our technical support. |
| Database Type | The type of the target database. Select OceanBase MySQL Compatible. |
| Instance Type | When the target is a MySQL-compatible tenant of OceanBase Database, the supported instance types are Dedicated (Transactional) and Dedicated (Analytical).<br>- A Dedicated (Transactional) instance handles online transaction processing (OLTP) workloads.<br>- A Dedicated (Analytical) instance handles online analytical processing (OLAP) workloads. |
| Region | The region where the target database is located. |
| Instance | The ID or name of the instance to which the MySQL-compatible tenant of OceanBase Database belongs. You can view the ID or name of the instance on the Instances page.<br>Note: When your cloud vendor is Alibaba Cloud, you can also select a cross-account authorized instance of an Alibaba Cloud primary account. For more information, see Alibaba Cloud account authorization. |
| Tenant | The ID or name of the MySQL-compatible tenant of OceanBase Database. You can expand the information about the target instance on the Instances page to view the ID or name of the tenant. |
| Database Account | The name of the database user created in the MySQL-compatible tenant of OceanBase Database for data migration. |
| Password | The password of the database user. |
Click Test and Continue.
On the Select Type & Objects page, configure the parameters.
Select One-way Synchronization for Synchronous Topology.
Data migration supports One-way Synchronization and Two-way Synchronization. This topic describes how to configure one-way synchronization. For more information about two-way synchronization, see Configuring a Two-Way Synchronization Task.
Note
Two-way synchronization is not supported when the target is an OceanBase analytical instance.
Select the migration type for your data migration task in the Migration Type section.
Options of Migration Type are Schema Migration, Full Migration, Incremental Synchronization, and Full Verification.

| Migration type | Description |
|---|---|
| Schema Migration | If you select this migration type, you must define the mapping between character sets. The data migration service only copies schemas from the source database to the target database without affecting the schemas in the source. In a task that migrates schemas from a MySQL database to a MySQL-compatible tenant of OceanBase Database, a database that does not exist in the target can be automatically created. |
| Full Migration | After the full migration task begins, the data migration service transfers the existing data from the source database tables to the corresponding tables in the target database. |
| Incremental Synchronization | After the incremental synchronization task begins, the data migration service synchronizes the changes (inserts, updates, and deletes) from the source database to the corresponding tables in the target database. Incremental Synchronization includes DML Synchronization and DDL Synchronization, which you can select based on your needs. For more information about DDL synchronization, see Custom DML/DDL configurations. |
| Full Verification | If you select this option, the data migration service automatically initiates a full verification task to verify the tables in the target database against the configured tables in the source database after full data migration is completed and incremental data is synchronized to the target database. Full verification is supported only for tables with a primary key or a non-null unique key. |

In the Select Migration Objects section, specify how to select migration objects.
You can select migration objects in two ways: Specify Objects and Match by Rule.
In the Select Migration Scope section, select migration objects.
If you select Specify Objects, the data migration service supports Database Object and Entire Database. Database Object allows you to select tables and views from one or more databases as the migration objects. Entire Database allows you to select an entire database for migration. If you select Database Object to migrate specific objects in a database, you cannot migrate the entire database. Conversely, if you select Entire Database to migrate an entire database, you cannot select individual objects from that database.
After selecting the Database Object or Entire Database option, you need to select the objects or database to be migrated in the left-side pane, and then click > to add them to the right-side pane.
The data migration service allows you to rename objects, set row filters, and remove a single migration object or all migration objects.

Note
Take note of the following items when you select Entire Database:
The right-side pane displays only the database name and does not list all objects in the database.
If you have selected DDL Synchronization-Synchronize DDL, newly added tables in the source database can also be synchronized to the target database.
| Operation | Description |
|---|---|
| Import objects | In the list on the right side of the selection area, click Import in the upper-right corner. For more information, see Import migration objects. |
| Rename an object | The data migration service allows you to rename a migration object. For more information, see Rename a migration object. |
| Set row filters | The data migration service allows you to filter rows by using `WHERE` conditions. For more information, see Use SQL conditions to filter data. You can also view column information about the migration objects in the View Columns section. |
| Remove one or all objects | The data migration service allows you to remove one or all migration objects during data mapping.<br>- To remove a single migration object, hover the pointer over the object in the right-side pane and click Remove.<br>- To remove all migration objects, click Remove All in the right-side pane. In the dialog box that appears, click OK. |
If you select Match by Rule, see Configure database-to-database matching rules for more information.
Click Next. On the Migration Options page, configure the parameters.
Full Migration
The following parameters are displayed only if One-way Synchronization > Full Migration is selected on the Select Type & Objects page.

| Parameter | Description |
|---|---|
| Read Concurrency | Specifies the concurrency for reading data from the source during full migration. The maximum value is 512. Excessive concurrency may overload the source system and impact business operations. |
| Write Concurrency | Specifies the concurrency for writing data to the target during full migration. The maximum value is 512. Excessive concurrency may overload the target system and impact business operations. |
| Rate Limiting for Full Migration | Enable the full migration rate limit based on your needs. If enabled, set the RPS (the maximum number of rows that can be migrated to the target per second during full migration) and BPS (the maximum number of bytes that can be migrated to the target per second during full migration).<br>Notice: The RPS and BPS settings serve only as rate limits. The actual performance of full migration is limited by factors such as the source, the target, and the instance specification. |
| Strategy for Handling Non-empty Table in the Target Database | The handling strategies are Stop Migration and Ignore.<br>- Stop Migration: If data exists in the target table objects, full migration throws an error and migration is not allowed. Handle the target data properly before continuing the migration.<br>Note: If you click Resume after an error, the data migration service ignores this option and continues migrating data. Proceed with caution.<br>- Ignore: If data exists in the target table objects and conflicts with the incoming data, the data migration service logs the conflicting data and retains the original data unchanged during data write.<br>Note: If you select Ignore, full verification pulls data in IN mode, which cannot verify the scenario where data exists in the target but not in the source, and verification performance degrades to a certain extent. |
| Whether to Allow Post-indexing | This feature can shorten the full migration time. You can set whether to create indexes after full data migration is completed. For considerations when allowing post-indexing, see the notes below the table.<br>Note: This option can be set only if both Schema Migration and Full Migration are selected on the Select Type & Objects page. |

Take note of the following items when you allow post-indexing:

- Only non-unique key indexes support postponed creation.
- We recommend that you adjust the following tenant parameters based on the hardware conditions of the OceanBase database and the current business traffic by using a command-line client tool:

```sql
-- File memory buffer limit
ALTER SYSTEM SET _temporary_file_io_area_size = '10' tenant = 'xxx';
-- For OceanBase Database V4.x, disable throttling
ALTER SYSTEM SET sys_bkgd_net_percentage = 100;
```
Incremental Synchronization
On the Select Type & Objects page, select One-way Synchronization > Incremental Synchronization to display the following parameters.

| Parameter | Description |
|---|---|
| Write Concurrency | Specifies the concurrency for writing data to the target during incremental synchronization. The maximum value is 512. Excessive concurrency may overload the target system and impact business operations. |
| Rate Limiting for Incremental Migration | Enable the incremental migration rate limit based on your needs. If enabled, set the RPS (the maximum number of rows that can be migrated to the target per second during incremental synchronization) and BPS (the maximum number of bytes that can be migrated to the target per second during incremental synchronization).<br>Notice: The RPS and BPS settings serve only as rate limits. The actual performance of incremental synchronization is limited by factors such as the source, the target, and the instance specification. |
| Incremental Synchronization Start Timestamp | - If Full Migration is selected as a migration type, this parameter is not displayed.<br>- If Incremental Synchronization is selected but Full Migration is not, specify the timestamp after which data is to be migrated. The default value is the current system time. For more information, see Set incremental synchronization timestamp.<br>Note: This parameter is displayed only when you modify the parameters of a two-way synchronization task. |
| Schedule Advancement of Binlog Timestamp | If you enable this feature, you need to configure the frequency. The supported frequency range is 1 to 60 seconds. After configuration, during the incremental synchronization phase, the data migration service periodically executes the `CREATE DATABASE IF NOT EXISTS test` command in the source MySQL database at the configured frequency to advance the binlog timestamp.<br>Note: This parameter is displayed only when you migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database. |
| Adapt to Online DDL Tool | If you enable this feature, the database uses an online DDL tool to perform online schema changes. The data migration service filters temporary table objects to enhance the stability of the data migration task. For more information, see Online DDL tools.<br>Note: At present, online DDL tools are supported only for scenarios where the source is a MySQL database and online schema changes are performed by using Alibaba Cloud DMS, gh-ost, or pt-osc. |
Advanced Options
The parameters in this section are displayed only if the target OceanBase Database MySQL-compatible tenant is V4.3.0 or later, and Schema Migration or Incremental Synchronization > DDL Synchronization was selected on the Select Type & Objects page.

The storage types for target table objects include Default, Row Storage, Column Storage, and Hybrid Row-Column Storage. This configuration determines the storage type of target table objects during schema migration or incremental synchronization.
Note
If you select Default, the storage type is determined by the parameter settings of the target. The table objects created by schema migration, and new table objects created by incremental DDL, follow the configured storage type.
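For reference, in OceanBase Database V4.3.x the storage type of a table is expressed through the WITH COLUMN GROUP clause. A schema produced under the Column Storage or Hybrid Row-Column Storage option would resemble the following sketch; the table and column names are examples, and the exact DDL emitted by the service may differ:

```sql
-- Column storage (example table)
CREATE TABLE orders_col (id BIGINT PRIMARY KEY, amount DECIMAL(10, 2))
WITH COLUMN GROUP (each column);

-- Hybrid row-column storage
CREATE TABLE orders_hybrid (id BIGINT PRIMARY KEY, amount DECIMAL(10, 2))
WITH COLUMN GROUP (all columns, each column);
```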
Click Next to proceed to the pre-check stage for the data migration task.
During the precheck, the data migration service checks the read and write privileges of the database user and the network connection of the database. A data migration task can be started only after it passes all check items. If an error is returned during the precheck, you can perform the following operations:
You can identify and troubleshoot the problem and then perform the precheck again.
You can also click Skip in the Actions column of a failed precheck item. In the dialog box that appears, you can view the prompt for the consequences of the operation and click OK.
After the precheck is successful, click Purchase to go to the Purchase Data Migration Instance page to make the purchase.
Once the purchase is successful, you can start the data migration task. For more information, see Purchase a data migration instance. If you do not need to purchase a data migration instance at the moment, click Save to go to the details page of the data migration task, where you can complete the purchase later as needed.
The data migration service allows modification of migration objects during the execution of a data migration task. For more information, see View and modify migration objects. Once the data migration task starts, it will execute sequentially based on the selected migration types. For more information, see "View Migration Details" section in View Details of Data Migration Tasks.