You can create a data migration task to migrate existing and incremental business data from a MySQL database to a MySQL-compatible tenant of OceanBase Database through schema migration, full data migration, and incremental synchronization.
Notice
Depending on the retention period of incremental logs, a data migration task that remains in an inactive state (Failed, Stopped, or Completed) for a long time may fail to resume. The data migration service automatically releases data migration tasks that remain in an inactive state for more than 7 days to recycle resources. We recommend that you configure alerting for tasks and handle task exceptions in a timely manner.
Prerequisites
You have created the source database.
If your cloud vendor is AWS, create an Aurora MySQL instance. For more information, see Creating and connecting to an Aurora MySQL DB cluster.
If your cloud vendor is Huawei Cloud, buy an RDS for MySQL database instance. For more information, see Buy a database instance.
If your cloud vendor is Google Cloud, create a Cloud SQL instance. For more information, see Create an instance.
If your cloud vendor is Alibaba Cloud, create an RDS MySQL database instance or buy a PolarDB MySQL cluster.
You have created the target OceanBase instance and tenant. For more information, see Create an instance and Create a tenant.
You have created dedicated database users for data migration in the source and the target, and granted required privileges to the users. For more information, see User privileges.
You have enabled binary logs for the source MySQL database. For more information, see Accessing MySQL binary logs.
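Before you configure the task, you can verify the last two prerequisites from a SQL client. The following sketch uses a placeholder user name, password, and a typical privilege set for a migration/CDC service; for the exact privileges required, follow the User privileges topic.

```sql
-- Hypothetical dedicated migration user on the source MySQL database;
-- adjust the user name, host, password, and privilege scope to your own policy.
CREATE USER 'oms_migrator'@'%' IDENTIFIED BY '********';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'oms_migrator'@'%';

-- Confirm that binary logging is enabled and row-based,
-- which incremental synchronization typically requires.
SHOW VARIABLES LIKE 'log_bin';          -- expected: ON
SHOW VARIABLES LIKE 'binlog_format';    -- expected: ROW
SHOW VARIABLES LIKE 'binlog_row_image'; -- expected: FULL
```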
Limitations
Only users with the Project Owner, Project Admin or Data Services Admin project roles are allowed to create data migration tasks.
Limitations on the source database
Do not perform DDL operations that modify database or table schemas during schema migration or full data migration. Otherwise, the data migration task may be interrupted.
At present, the data migration service supports Aurora MySQL V2.x and V3.x, Cloud SQL V5.6, V5.7, and V8.0, Huawei Cloud RDS MySQL V5.6, V5.7, and V8.0, MySQL databases V5.6, V5.7, and V8.0, and OceanBase Database (in the MySQL compatible mode) V2.x, V3.x, and V4.x.
The data migration service supports the migration of only objects whose database name, table name, and column name are ASCII-encoded and do not contain special characters. The special characters are line breaks and . | " ' ` ( ) = ; / &
The precheck fails if the primary key is of the FLOAT or DOUBLE data type. We recommend that you do not specify a column of such a data type as the primary key.
If the target is a database, the data migration service does not support triggers in the target database. If triggers exist in the target database, the data migration may fail.
The data migration service does not support an index field greater than 16000 bytes (or 4000 characters) in length in a MySQL database.
The clock of the source database must be synchronized with that of the target database.
Considerations
Do not perform DDL operations that modify database or table schemas during schema migration or full data migration. Otherwise, the data migration task may be interrupted.
The tables to migrate must have PRIMARY KEY or UNIQUE constraints, and field names in the same table must be unique. Otherwise, data inconsistency may occur in the target database. A query for finding tables that lack such constraints is sketched after this list.
The host of the MySQL database must have sufficient outbound bandwidth. Insufficient outbound bandwidth on the host will slow down log parsing and data migration, which may increase the latency of data synchronization.
The data migration service cannot change the data type of a custom column. If necessary, change the data type manually.
The data migration service supports the migration of tables, indexes, and views.
When no business data changes occur at the source, the data migration service cannot obtain binary logs to advance data synchronization. This can cause persistent task latency that is not caused by the task itself. If the latency becomes excessive, we recommend that you introduce data writes at the source to resolve the issue.
If the clocks between nodes or between the client and the server are out of synchronization, the latency may be inaccurate during incremental synchronization.
For example, if the clock is earlier than the standard time, the latency can be negative. If the clock is later than the standard time, the latency can be positive.
If collations of the source and target databases are different, a table whose primary key is data of the VARCHAR type fails the data consistency verification.
If you modify a unique index at the target when DDL synchronization is disabled, you must restart the data migration task to avoid data inconsistency.
Check whether the data migration service migrates columns of certain data types, such as DECIMAL, FLOAT, and DATETIME, with the expected precision. If the precision of the target field type is lower than the precision of the source field type, the value with a higher precision may be truncated. This may result in data inconsistency between the source and target fields.
If you select only Incremental Synchronization when you create the data migration task, the data migration service requires that the local incremental logs in the source database be retained for more than 48 hours.
If you select Full Migration and Incremental Synchronization when you create the data migration task, the data migration service requires that the local incremental logs in the source database be retained for at least 7 days. Otherwise, the data migration task may fail or the data in the source and target databases may be inconsistent because the data migration service cannot obtain incremental logs.
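To check the last two considerations, and to find tables that are missing a primary key or unique constraint (see the earlier consideration on PRIMARY KEY and UNIQUE constraints), you can run queries such as the following on the source MySQL database. This is a sketch: the binlog retention variable differs by version (binlog_expire_logs_seconds in MySQL 8.0, expire_logs_days in MySQL 5.6/5.7).

```sql
-- How long does the source retain binary logs?
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- MySQL 8.0
SHOW VARIABLES LIKE 'expire_logs_days';            -- MySQL 5.6 / 5.7

-- Base tables with neither a primary key nor a unique constraint,
-- which may cause data inconsistency in the target.
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
       ON  c.table_schema = t.table_schema
       AND c.table_name   = t.table_name
       AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND c.constraint_name IS NULL;
```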
Supported source and target instance types
In the following table, the instance types supported for OceanBase Database in the MySQL compatible mode are Dedicated (Transactional) and Dedicated (Analytical).
| Cloud vendor | Source | Target |
|---|---|---|
| AWS | Aurora MySQL | OceanBase MySQL Compatible Mode |
| AWS | RDS MySQL | OceanBase MySQL Compatible Mode |
| AWS | Self-managed MySQL | OceanBase MySQL Compatible Mode |
| Huawei Cloud | RDS MySQL | OceanBase MySQL Compatible Mode |
| Huawei Cloud | Self-managed MySQL | OceanBase MySQL Compatible Mode |
| Google Cloud | Cloud SQL | OceanBase MySQL Compatible Mode |
| Google Cloud | Self-managed MySQL | OceanBase MySQL Compatible Mode |
| Alibaba Cloud | RDS MySQL | OceanBase MySQL Compatible Mode |
| Alibaba Cloud | PolarDB MySQL | OceanBase MySQL Compatible Mode |
| Alibaba Cloud | Self-managed MySQL | OceanBase MySQL Compatible Mode |
Data type mappings
| MySQL database | OceanBase Database (MySQL Compatible Mode) |
|---|---|
| INTEGER | INTEGER |
| TINYINT | TINYINT |
| MEDIUMINT | MEDIUMINT |
| BIGINT | BIGINT |
| SMALLINT | SMALLINT |
| DECIMAL | DECIMAL |
| NUMERIC | NUMERIC |
| FLOAT | FLOAT |
| REAL | REAL |
| DOUBLE PRECISION | DOUBLE PRECISION |
| BIT | BIT |
| CHAR | CHAR |
| VARCHAR | VARCHAR |
| BINARY | BINARY |
| VARBINARY | VARBINARY |
| BLOB | BLOB |
| TEXT | TEXT |
| ENUM | ENUM |
| SET | SET |
| DATE | DATE |
| DATETIME | DATETIME |
| TIMESTAMP | TIMESTAMP |
| TIME | TIME |
| YEAR | YEAR |
Procedure
Create a data migration task.

Log in to the OceanBase Cloud console.
In the left-side navigation pane, click Data Services > Migrations.
On the Migrations page, click the Migrate Data tab.
On the Migrate Data tab, click New Task in the upper-right corner.
In the task name field, enter a custom migration task name.
We recommend that you use a combination of Chinese characters, numbers, and English letters. The name cannot contain spaces and must be less than 64 characters in length.
On the Configure Source & Target page, configure the parameters.
In the Source Profile section, configure the parameters.
If you want to reference the configurations of an existing data source, click Quick Fill next to Source Profile and select the required data source from the drop-down list. The parameters in the Source Profile section are then automatically populated. To save the current configuration as a new data source, click the Save icon to the right of Quick Fill.
You can also click Quick Fill > Manage Data Sources to go to the Data Sources page, where you can view and manage different types of data sources. For more information, see Data Source.
| Parameter | Description |
|---|---|
| Cloud Vendor | At present, the supported cloud vendors are AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud. |
| Database Type | The type of the source. Select MySQL. |
| Instance Type | - When you select AWS as the cloud vendor, the supported instance types are Aurora MySQL, RDS MySQL, and Self-managed MySQL.<br>- When you select Huawei Cloud as the cloud vendor, the supported instance types are RDS MySQL and Self-managed MySQL.<br>- When you select Google Cloud as the cloud vendor, the supported instance types are Cloud SQL and Self-managed MySQL.<br>- When you select Alibaba Cloud as the cloud vendor, the supported instance types are RDS MySQL, PolarDB MySQL, and Self-managed MySQL. |
| Region | The region of the source database. |
| Connection Type | Available connection types are Endpoint and Public IP.<br>- If you select the Endpoint connection type, you must first add the account ID displayed on the page to the whitelist of your endpoint service. This allows the endpoint from that account to connect to the endpoint service. For more information, see the corresponding topic under Connect via private network.<br>- When you select AWS as the cloud vendor and you selected Acceptance required for the Require acceptance for endpoint parameter when you created the endpoint service, the data migration service prompts you to perform the Accept endpoint connection request operation in the AWS console when it connects to the PrivateLink for the first time.<br>- When your cloud vendor is Google Cloud, add authorized projects to Published Services. After authorization, no manual authorization is needed when you test the data source connection.<br>Note: You need to select the source and target regions before the page displays the data source IP addresses that need to be added to the whitelist. |
| Connection Details | - When you set Connection Type to Endpoint, enter the endpoint service name.<br>- When you set Connection Type to Public IP, enter the IP address and port number of the database host. |
| Database Account | The name of the MySQL database user for data migration. |
| Password | The password of the database user. |

In the Target Profile section, configure the parameters.
If you want to reference the configurations of an existing data source, click Quick Fill next to Target Profile and select the required data source from the drop-down list. Then, the parameters in the Target Profile section are automatically populated. If you need to save the current configuration as a new data source, click on the Save icon located on the right side of the Quick Fill.
You can also click Quick Fill > Manage Data Sources, enter Data Sources page to check and manage data sources. You can manage different types of data sources on the Data Sources page. For more information, see Data Source.
| Parameter | Description |
|---|---|
| Cloud Vendor | Supported cloud vendors are AWS, Huawei Cloud, Google Cloud, and Alibaba Cloud. You can choose the same cloud vendor as the source or perform cross-cloud data migration.<br>Notice: Cross-cloud vendor data migration is disabled by default. If you need to use this feature, contact our technical support. |
| Database Type | Select OceanBase MySQL Compatible as the database type of the target. |
| Instance Type | When the target is a MySQL-compatible tenant of OceanBase Database, the supported instance types are Dedicated (Transactional) and Dedicated (Analytical).<br>- A Dedicated (Transactional) instance handles online transaction processing (OLTP) operations.<br>- A Dedicated (Analytical) instance handles online analytical processing (OLAP) operations. |
| Region | Select the region where the target database is located. |
| Connection Type | Available connection types are Endpoint and Public IP.<br>- If you select the Endpoint connection type, you must first add the account ID displayed on the page to the whitelist of your endpoint service. This allows the endpoint from that account to connect to the endpoint service. For more information, see the corresponding topic under Connect via private network.<br>- If you select the Public IP connection type, you must add the data migration IP address displayed on the page to the OceanBase Database whitelist. For more information, see the corresponding topic under Connect via public network.<br>Note: You need to select the source and target regions before the page displays the data source IP addresses that need to be added to the whitelist. |
| Connection Details | - When you set Connection Type to Endpoint, enter the endpoint service name.<br>- When you set Connection Type to Public IP, enter the IP address and port number of the database host. |
| Instance | The ID or name of the instance to which the MySQL-compatible tenant of OceanBase Database belongs. You can view the ID or name of the instance on the Instances page.<br>Note: When your cloud vendor is Alibaba Cloud, you can also select a cross-account authorized instance of an Alibaba Cloud primary account. For more information, see Alibaba Cloud account authorization. |
| Tenant | The ID or name of the MySQL-compatible tenant of OceanBase Database. You can expand the information about the target instance on the Instances page and view the ID or name of the tenant. |
| Database Account | The name of the database user in the MySQL-compatible tenant of OceanBase Database for data migration. |
| Password | The password of the database user. |

If the instance type is Self-managed Database and you need to perform schema migration or incremental synchronization, configure the parameters in the Advanced Settings section.

If you need to select Schema Migration or Incremental Synchronization on the Select Type & Objects page, enable the sys Tenant Account and configure the following parameters:
| Parameter | Description |
|---|---|
| Sys Account | The name of the sys user. This user is mainly used to read incremental logs and database object schema information from OceanBase Database. Create this user under the sys tenant of the business cluster. |
| Password | The password of the sys user. |

If you need to select Incremental Synchronization on the Select Type & Objects page, enable OBLogProxy and enter the OBLogProxy connection information.
Notice
Enable both the sys Tenant Account and OBLogProxy to perform incremental synchronization.
OBLogProxy is the incremental log proxy service of OceanBase Database. It provides incremental log access and management capabilities as a service, facilitates application access to OceanBase Database incremental logs, and meets the need to subscribe to incremental logs in network-isolated environments. The connection information is in the format OBLogProxy IP:OBLogProxy Port.
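As a sketch of how such a sys user might be created, connect to the sys tenant of the business cluster and run statements like the following. The user name, password, and privilege scope are placeholder assumptions for illustration; grant only what your security policy and the User privileges topic require.

```sql
-- Hypothetical example: run in the sys tenant of the target OceanBase cluster.
CREATE USER 'drc_sys_user'@'%' IDENTIFIED BY '********';
-- Read-only access to the system views that expose incremental logs and object metadata.
GRANT SELECT ON oceanbase.* TO 'drc_sys_user'@'%';
```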
Click Test and Continue.
On the Select Type & Objects page, configure the parameters.
Select One-way Synchronization for Synchronous Topology.
Data migration supports One-way Synchronization and Two-way Synchronization. This topic introduces the operation of one-way synchronization. For more information on two-way synchronization, see Configure a two-way synchronization task.
Note
Two-way synchronization is not supported when the target is an OceanBase analytical instance.
Select the migration type for your data migration task in the Migration Type section.
Options of Migration Type are Schema Migration, Full Migration, and Incremental Synchronization.
| Migration type | Description |
|---|---|
| Schema Migration | If you select this migration type, you must define the mapping between character sets. The data migration service only copies schemas from the source database to the target database without affecting the schemas in the source. In a task that migrates schemas from a MySQL database to a MySQL-compatible tenant of OceanBase Database, a database that does not exist in the target can be created automatically. |
| Full Migration | After the full migration task begins, the data migration service transfers the existing data of the source database tables to the corresponding tables in the target database. |
| Incremental Synchronization | After the incremental synchronization task begins, the data migration service synchronizes the changes (inserts, updates, and deletes) from the source database to the corresponding tables in the target database. Incremental Synchronization includes DML Synchronization and DDL Synchronization, which you can select based on your needs. For more information about synchronizing DDL operations, see Custom DML/DDL configurations. |

In the Select Migration Objects section, specify the way to select migration objects.
You can select migration objects in two ways: Specify Objects and Match by Rule.
In the Select Migration Scope section, select migration objects.
If you select Specify Objects, data migration supports Table-level and Database-level. Table-level migration allows you to select one or more tables or views from one or more databases as migration objects. Database-level migration allows you to select an entire database as a migration object. If you select table-level migration for a database, database-level migration is no longer supported for that database. Conversely, if you select database-level migration for a database, table-level migration is no longer supported for that database.
After selecting Table-level or Database-level, select the objects to be migrated in the left pane and click > to add them to the right pane.
The data migration service allows you to rename objects, set row filters, and remove a single migration object or all migration objects.

Note
Take note of the following items when you select Database-level:
The right-side pane displays only the database name and does not list all objects in the database.
If you have selected DDL Synchronization > Synchronize DDL, newly added tables in the source database can also be synchronized to the target database.
| Operation | Description |
|---|---|
| Import Objects | In the list on the right side of the selection area, click Import in the upper-right corner. For more information, see Import migration objects. |
| Rename an object | The data migration service allows you to rename a migration object. For more information, see Rename a migration object. |
| Set row filters | The data migration service allows you to filter rows by using WHERE conditions. For more information, see Use SQL conditions to filter data. You can also view column information about the migration objects in the View Column section. |
| Remove one or all objects | The data migration service allows you to remove one or all migration objects during data mapping.<br>- Remove a single migration object: in the right-side pane, hover the pointer over the object that you want to remove, and then click Remove.<br>- Remove all migration objects: in the right-side pane, click Clear All. In the dialog box that appears, click OK to remove all migration objects. |
If you select Match by Rule, see Configure database-to-database matching rules for more information.
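The Set row filters operation described above takes a standard SQL WHERE condition that is applied to each source row. As a purely hypothetical illustration (the table and column names are made up), entering the condition status = 'PAID' AND gmt_create >= '2024-01-01 00:00:00' for a table named orders limits the migration to the rows that the following query would return:

```sql
-- Rows migrated for table `orders` under the hypothetical filter above.
SELECT *
FROM orders
WHERE status = 'PAID'
  AND gmt_create >= '2024-01-01 00:00:00';
```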
Click Next. On the Migration Options page, configure the parameters.
Full migration
The following parameters are displayed only if One-way Synchronization > Full Migration is selected on the Select Type & Objects page.

| Parameter | Description |
|---|---|
| Read Concurrency | The number of concurrent threads for reading data from the source during full migration. The maximum number of concurrent threads is 512. A high number of concurrent threads may put high pressure on the source and affect business operations. |
| Write Concurrency | The number of concurrent threads for writing data to the target during full migration. The maximum number of concurrent threads is 512. A high number of concurrent threads may put high pressure on the target and affect business operations. |
| Rate Limiting for Full Migration | You can decide whether to limit the full migration rate based on your needs. If you enable this option, you must also set the RPS (the maximum number of data rows that can be migrated to the target per second during full migration) and BPS (the maximum amount of data that can be migrated to the target per second during full migration).<br>Note: The RPS and BPS values specified here provide only throttling and rate-limiting capabilities. The actual performance of full migration is limited by factors such as the source, the target, and the instance specifications. |
| Handle Non-empty Tables in Target Database | The strategy for handling records that already exist in target table objects. Valid values: Stop Migration and Ignore.<br>- If you select Stop Migration, the data migration task reports an error when target table objects contain data, indicating that migration is not allowed. Handle the data in the target database before you resume migration.<br>Notice: If you click Restore after an error occurs, the data migration task ignores this setting and continues to migrate table data. Proceed with caution.<br>- If you select Ignore, when target table objects contain data, the data migration task records conflicting data in logs and retains the original data. |
| Post-Indexing | Specifies whether index creation can be postponed until full migration is completed. If you select this option, note the following items.<br>Notice: Before you select this option, make sure that you have selected both Schema Migration and Full Migration on the Select Type & Objects page.<br>- Only non-unique key indexes support index creation after migration.<br>- If you enable post-indexing, we recommend that you adjust the following business tenant parameters by using a command-line client tool, based on the hardware conditions of OceanBase Database and the current business traffic. |

```sql
-- File memory buffer limit
ALTER SYSTEM SET _temporary_file_io_area_size = '10' tenant = 'xxx';
-- For OceanBase Database V4.x, disable throttling
ALTER SYSTEM SET sys_bkgd_net_percentage = 100;
```
Incremental synchronization
On the Select Type & Objects page, select One-way Synchronization > Incremental Synchronization to display the following parameters.

| Parameter | Description |
|---|---|
| Write Concurrency | The number of concurrent threads for writing data to the target during incremental synchronization. The maximum number of concurrent threads is 512. Excessive concurrency may overload the target and affect business operations. |
| Rate Limiting for Incremental Migration | Enable the incremental migration rate limit based on your needs. If you enable this option, you must also set the RPS (the maximum number of data rows that can be migrated to the target per second during incremental synchronization) and BPS (the maximum amount of data that can be migrated to the target per second during incremental synchronization).<br>Notice: The RPS and BPS settings here provide only rate limiting. The actual performance of incremental synchronization is limited by factors such as the source, the target, and the instance specifications. |
| Incremental Synchronization Start Timestamp | - If Full Migration was selected when you chose the migration type, this parameter is not displayed.<br>- If Incremental Synchronization was selected but Full Migration was not, specify the timestamp after which data is to be migrated. The default value is the current system time. For more information, see Set incremental synchronization timestamp.<br>Note: This parameter is only displayed when you modify the parameters of a two-way synchronization task. |
| Schedule Advancement of Binlog Timestamp | If you enable this feature, you must configure the frequency. The supported frequency range is 1 to 60 seconds. After configuration, during the incremental synchronization phase, the data migration task periodically executes the `CREATE DATABASE IF NOT EXISTS test` command in the MySQL source database at the configured frequency to advance the binlog timestamp.<br>Note: This parameter is only displayed when you migrate data from a MySQL database to a MySQL-compatible tenant of OceanBase Database. |
| Adapt to Online DDL Tool | If the database uses an online DDL tool to perform online schema changes and you enable this feature, the data migration task filters temporary table objects to enhance the stability of the data migration task. For more information, see Online DDL tools.<br>Note: At present, online DDL tools are supported only for scenarios where the source is a MySQL database and online schema changes are performed by using Alibaba Cloud DMS, gh-ost, or pt-osc. |
Advanced Options
The parameters in this section will only be displayed if the target OceanBase Database MySQL-compatible tenant is V4.3.0 or later, and Schema Migration or Incremental Synchronization > DDL Synchronization was selected on the Select Type & Objects page.

The storage types for target table objects include Default, Row Storage, Column Storage, and Hybrid Row-Column Storage. This configuration determines the storage type of target table objects during schema migration or incremental synchronization.
Note
The Default option adapts to one of the other options based on the parameter settings of the target. Table objects created during schema migration and new table objects created by incremental DDL statements follow the configured storage type.
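As an illustration of how the Default option follows the target's configuration: OceanBase Database V4.3 and later provide a parameter, default_table_store_format, that controls whether newly created tables default to row storage, column storage, or hybrid row-column storage. The statements below are a sketch under that assumption; verify the parameter name and values against your target version before changing anything.

```sql
-- Hypothetical check in the target MySQL-compatible tenant (OceanBase V4.3 or later).
SHOW PARAMETERS LIKE 'default_table_store_format';
-- Typical values: 'row', 'column', 'compound' (hybrid row-column).
ALTER SYSTEM SET default_table_store_format = 'row';
```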
Click Next to proceed to the pre-check stage for the data migration task.
During the precheck, the data migration service checks the read and write privileges of the database user and the network connection to the database. A data migration task can be started only after it passes all check items. If an error is returned during the precheck, you can perform the following operations:
You can identify and troubleshoot the problem and then perform the precheck again.
You can also click Skip in the Actions column of a failed precheck item. In the dialog box that appears, you can view the prompt for the consequences of the operation and click OK.
After the pre-check succeeds, click Purchase to go to the Purchase Data Migration Instance page.
After the purchase succeeds, you can start the data migration task. For more information about how to purchase a data migration instance, see Purchase a data migration instance. If you do not need to purchase a data migration instance at this time, click Save to go to the details page of the data migration task. You can manually purchase a data migration instance later as needed.
You can click Configure Validation Task in the upper-right corner of the details page to compare the data differences between the source database and the target database. For more information, see Create a data validation task.
The data migration service allows you to modify the migration objects when the task is running. For more information, see View and modify migration objects. After the data migration task is started, it is executed based on the selected migration types. For more information, see the "View migration details" section in View details of a data migration task.