This topic describes how to use MySQLDump to export objects and data from a MySQL database. OceanBase Database is compatible with MySQL, so you can use MySQLDump to back up data in OceanBase databases. MySQLDump is a logical backup tool provided with MySQL. It connects to the database over the MySQL protocol, retrieves the data to be backed up, and converts the retrieved data into the corresponding INSERT statements. To recover the data, you execute these INSERT statements.
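For example, a dump of a small table might contain statements along the following lines; the table and column names here are hypothetical:
CREATE TABLE `t1` (`c1` varchar(16) DEFAULT NULL);
INSERT INTO `t1` VALUES ('a'),('b');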
Recommended version
| Operating system | Version |
|---|---|
| Linux | MySQLDump Ver 10.13 Distrib 5.6.37, for Linux (x86_64) |
| macOS | MySQLDump Ver 10.13 Distrib 5.7.21, for macos10.13 (x86_64) |
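To check the version of MySQLDump installed on your machine, you can run the following command:
mysqldump --version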
Notes
You can use MySQLDump to export only data in OceanBase Database instances in MySQL mode.
MySQLDump does not support locked databases and tables.
A TIMEOUT 4012 error may be returned when you export a large amount of data by using MySQLDump. To prevent this error, log on to the database as the tenant administrator and run the following command to modify the system parameters. After the data export is completed, you can restore the parameters to their original values.
obclient> SET GLOBAL ob_trx_timeout=1000000000,GLOBAL ob_query_timeout=1000000000;
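Before you modify the parameters, you can record their current values so that you can restore them after the export. The following is a minimal sketch; the values in the last statement are placeholders and should be replaced with the values you recorded:
obclient> SHOW GLOBAL VARIABLES LIKE 'ob_trx_timeout';
obclient> SHOW GLOBAL VARIABLES LIKE 'ob_query_timeout';
obclient> SET GLOBAL ob_trx_timeout=100000000, GLOBAL ob_query_timeout=10000000;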
Back up data
The following statement backs up data in an OceanBase database by using MySQLDump. An SQL file is generated after you execute this statement.
mysqldump -h xx.xx.xx.xx -P2883 -u 'user@tenantname#clustername' -p**1*** --skip-triggers --databases db1 db2 db3 --skip-extended-insert > /tmp/data.sql
The following table describes the command-line options and parameters in the preceding data backup statement.
| Option/Parameter | Description |
|---|---|
| -h (--host) | The IP address of the server. |
| -P (--port) | The port number of the server. |
| -u (--user) | The username used to log on to the MySQL database. |
| -p (--password) | The password used to log on to the MySQL database. |
| -d | Specifies to export only the table structure but not data. |
| --databases | The databases to be backed up. |
| --all-databases | Specifies to back up all databases. This option is not recommended. We recommend that you specify the databases one by one. |
| --compact | Specifies to produce more compact output by omitting comments and some header and footer statements, which reduces the size of the generated file. |
| --comments | Specifies to include additional information, such as the program version and host, as comments in the dump file. |
| --complete-insert | Specifies to generate complete INSERT statements that include column names. |
| --force | Specifies to continue the export even if an SQL error occurs. |
| --skip-triggers | OceanBase Database does not support the trigger syntax. Therefore, this parameter is required if you do not specify the --force parameter. Otherwise, the data export fails. |
| --skip-extended-insert | Specifies to export data as one INSERT statement per row. If this parameter is not specified, data is exported in the extended format INSERT INTO table VALUES(...),(...). |
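For example, you can combine the options in the preceding table to export only the table structure of the same databases. The following sketch uses the same placeholder host, port, and account as the backup statement above:
mysqldump -h xx.xx.xx.xx -P2883 -u 'user@tenantname#clustername' -p --skip-triggers -d --databases db1 db2 db3 > /tmp/ddl.sql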
After data is exported, you can run the following statement to import the previously exported data:
source /tmp/data.sql
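The source command is executed in a client session that is connected to the destination database. For example, a hypothetical OBClient session that uses the same placeholder connection values as the backup statement might look like this:
obclient -h xx.xx.xx.xx -P2883 -u 'user@tenantname#clustername' -p
obclient> source /tmp/data.sql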
Migrate MySQL tables to OceanBase Database by using MySQLDump
Export only the table structure of the specified database
mysqldump -h 127.1 -u**** -P3306 -p**1*** -d TPCH --compact > tpch_ddl.sql
In the exported script:
- Definitions of views are included as comments (/*! ... */). You can ignore and delete these view definitions.
- Some MySQL syntax is not supported in MySQL mode of OceanBase Database. You can replace the unsupported parts without affecting actual use.
Sample code:
/*!40101 SET character_set_client = @saved_cs_client */;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `NATION` (
`N_NATIONKEY` int(11) NOT NULL,
`N_NAME` char(25) COLLATE utf8_unicode_ci NOT NULL,
`N_REGIONKEY` int(11) NOT NULL,
`N_COMMENT` varchar(152) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`N_NATIONKEY`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci MAX_ROWS=4294967295;
The MAX_ROWS= clause in the sample code is specific to MySQL. It is neither required nor supported in MySQL mode of OceanBase Database. You need to comment out MAX_ROWS= and the content that follows it. You can batch replace the related content. For example, you can use :%s/MAX_ROWS=/; -- MAX_ROWS=/g in Vim.
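If you prefer to modify the file outside Vim, a sed command along the following lines performs the same batch replacement, assuming GNU sed on Linux; verify the result on a copy of the file first:
sed -i 's/MAX_ROWS=/; -- MAX_ROWS=/g' tpch_ddl.sql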
Notice
The table name in the exported SQL file is in uppercase, which indicates that the source MySQL database is probably case-sensitive. The destination MySQL tenant of OceanBase Database must use the same case-sensitivity setting.
Foreign keys may be included in the exported table structure statements. When you import the table structure into a MySQL tenant of OceanBase Database, an error occurs if a table referenced by a foreign key has not been created yet. Therefore, disable the foreign key check before the import.
MySQL [oceanbase]> set global foreign_key_checks=off;
Query OK, 0 rows affected (0.01 sec)
MySQL [oceanbase]> show global variables like '%foreign%';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| foreign_key_checks | OFF |
+--------------------+-------+
1 row in set (0.00 sec)
Run the source command to execute the external SQL script file in OBClient.
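After the import is completed, you can enable the foreign key check again in the same way, for example:
MySQL [oceanbase]> set global foreign_key_checks=on;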
Export only table data of the specified database
mysqldump -h 127.1 -u**** -P3306 -p**1*** -t TPCH > tpch_data.sql
In the exported SQL file, the data initialization statements first lock the table to prevent writes from other sessions, and then write the data by using INSERT statements. Each INSERT statement contains multiple value lists; that is, it is a batch INSERT.
LOCK TABLES `t1` WRITE;
/*!40000 ALTER TABLE `t1` DISABLE KEYS */;
INSERT INTO `t1` VALUES ('a'),('middle');
/*!40000 ALTER TABLE `t1` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
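Putting the two exports together, a typical import sequence in OBClient might look like the following sketch. The connection values are the same placeholders used earlier, and the foreign key check is assumed to be disabled as described above:
obclient -h xx.xx.xx.xx -P2883 -u 'user@tenantname#clustername' -p
obclient> source tpch_ddl.sql
obclient> source tpch_data.sql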