OBLOADER allows you to specify the information required for an import in command-line options. For more information about the options and their usage scenarios and examples, see Options and Scenarios and examples.
Options
| Option | Required? | Description | Introduced in | Deprecated? |
|---|---|---|---|---|
| -h(--host) | Yes | The host IP address for connecting to OceanBase Database Proxy (ODP) or a physical OceanBase Database node. | ||
| -P(--port) | Yes | The host port for connecting to ODP or a physical OceanBase Database node. | ||
| -c(--cluster) | No | The cluster name of the database. | ||
| -t(--tenant) | No | The tenant name of the cluster. | ||
| -u(--user) | Yes | The username that you use to log on to the database. | ||
| -p(--password) | No | The password that you use to log on to the database. | ||
| -D(--database) | No | The name of the database. | ||
| -f(--file-path) | Yes | The directory that stores the data file or the absolute path of the data file. | ||
| --sys-user | No | The name of the user in the sys tenant. | ||
| --sys-password | No | The password of the user in the sys tenant. | ||
| --public-cloud | No | The public cloud operating environment. | ||
| --file-suffix | No | The file name extension. | ||
| --file-encoding | No | The file character set, which is different from the database character set. | ||
| --ctl-path | No | The directory of the control files. | ||
| --map-path | No | The directory where the file that stores table field mappings is stored. | | Yes |
| --log-path | No | The directory to which log files are exported. | ||
| --ddl | No | Specifies to import DDL files. | ||
| --csv | No | Specifies to import data files in the CSV format (recommended). | ||
| --sql | No | Specifies to import data files in the SQL format, which is different from DDL files. | ||
| --orc | No | Specifies to import data files in the ORC format. | V4.0.0 | |
| --par | No | Specifies to import data files in the Parquet format. | V4.0.0 | |
| --mix | No | Specifies to import a mixed file that contains both definitions and data. | ||
| --pos | No | Specifies to import data files in the POS format. | ||
| --cut | No | Specifies to import data files in the CUT format. | ||
| --all | No | Specifies to import all supported database object definitions and table data. | ||
| --table-group | No | Specifies to import table group definitions. | V3.1.0 | |
| --table | No | Specifies to import table definitions or table data. | ||
| --view | No | Specifies to import view definitions. | ||
| --trigger | No | Specifies to import trigger definitions. | ||
| --sequence | No | Specifies to import sequence definitions. | ||
| --synonym | No | Specifies to import synonym definitions. | ||
| --type | No | Specifies to import type definitions. | V4.0.0 | |
| --type-body | No | Specifies to import type body definitions. | ||
| --package | No | Specifies to import package definitions. | ||
| --package-body | No | Specifies to import package body definitions. | ||
| --function | No | Specifies to import function definitions. | ||
| --procedure | No | Specifies to import stored procedure definitions. | ||
| --replace-object | No | Specifies to replace existing object definitions. We recommend that you replace object definitions manually, rather than using this option. | ||
| --rw | No | The proportion of data file parsing threads. | ||
| --slow | No | The threshold for triggering a slow import. | ||
| --pause | No | The threshold for triggering an import pause. | ||
| --batch | No | The number of records for a batch of transactions. | ||
| --thread | No | The number of concurrent import threads allowed. | ||
| --block-size | No | The block size for a file to be imported. | ||
| --retry | No | Specifies to import data from the last savepoint. | ||
| --external-data | No | Specifies that the data files are exported by a third-party tool. The check on the metadata file is skipped. | ||
| --max-tps | No | The maximum import speed. Default unit: lines/second. | ||
| --max-wait-timeout | No | The timeout period of waiting for a database major compaction to complete. | ||
| --nls-date-format | No | The session-level datetime format, which is supported only for OceanBase Database in Oracle mode. | ||
| --nls-timestamp-format | No | The session-level timestamp format, which is supported only for OceanBase Database in Oracle mode. | ||
| --nls-timestamp-tz-format | No | The session-level timestamp format with a time zone, which is supported only for OceanBase Database in Oracle mode. | ||
| --trail-delimiter | No | Specifies to truncate the last column separator in a line. | ||
| --with-trim | No | Specifies to delete the space characters on the left and right sides of the data. | ||
| --skip-header | No | Specifies to skip the first line of data of CSV/CUT files when the files are imported. Only OBLOADER V3.3.0 and later versions support skipping the first line of data in a CUT file. | ||
| --skip-footer | No | Specifies to skip the last line of data in a CUT file during import. | V3.3.0 | |
| --null-string | No | Specifies to replace the specified character with NULL. Default value: \\N. | ||
| --empty-string | No | Specifies to replace the specified character with an empty string (''). Default value: \\E. | ||
| --line-separator | No | The line separator. Custom line separators are supported for the import of CUT files. Default value: \\n. | ||
| --column-separator | No | The column separator, which is different from the column separator string in the CUT format. | ||
| --escape-character | No | The escape character. Default value: \\. | ||
| --column-delimiter | No | The column delimiter. Default value: '. | ||
| --ignore-unhex | No | Specifies to ignore hexadecimal strings in decoding. | ||
| --exclude-table | No | Specifies to exclude the specified tables from the import of table definitions and table data. | ||
| --exclude-data-types | No | Specifies to exclude the specified data types from the import of data. | ||
| --column-splitter | No | The column separator string, which is different from the column separator in the CSV format. | ||
| --max-discards | No | The maximum amount of duplicate data allowed in a single table during the import. Default value: -1. | ||
| --max-errors | No | The maximum number of errors allowed for a single table during the import. Default value: 1000. | ||
| --exclude-column-names | No | Specifies to exclude the specified columns from the import of data. | ||
| --replace-data | No | Specifies to replace duplicate data in the table. This option is only applicable to tables that have primary keys or unique keys with the NOT NULL constraint. | ||
| --truncate-table | No | Specifies to truncate all tables in the destination database. We recommend that you truncate tables manually, rather than using this option. | ||
| --with-data-files | No | Specifies to truncate or clear tables with specified data files. | V3.1.0 | |
| --delete-from-table | No | Specifies to clear tables in the destination database before the import. We recommend that you clear tables manually, rather than using this option. | ||
| -V(--version) | No | Specifies to show the OBLOADER version. | ||
| --no-sys | No | Specifies not to provide the password for the sys tenant in the private cloud environment. | V3.3.0 | |
| --logical-database | No | Specifies to import data by using ODP (Sharding). | V3.3.0 | |
| --file-regular-expression | No | The regular expression that is used to match the file name during a single-table import. | V3.3.0 | |
| --ignore-escape | No | Specifies to ignore escape operations on characters for the import of CUT format files. | V3.3.0 | |
| --storage-uri | No | The uniform resource identifier (URI) of the storage space. | V4.2.0 | |
| --character-set | No | The character set used when you create a database connection. | V4.2.4 | |
| --strict | No | The impact of dirty data on the exit status of the tool. | V4.2.4 | |
| -H(--help) | No | Specifies to show the help information of the OBLOADER command-line tool. | ||
Connection options
OBLOADER can read data from and write data to an OceanBase database only after connecting to the database. You can connect to an OceanBase database by specifying the following options:
-h host_name, --host=host_name
The host IP address for connecting to ODP or a physical OBServer node.
-P port_num, --port=port_num
The host port for connecting to ODP or a physical OBServer node.
-c cluster_name, --cluster=cluster_name
The OceanBase cluster to connect to. If this option is not specified, OBLOADER connects to a physical node of the database, and the -h and -P options specify the IP address and port of that node. If this option is specified, OBLOADER connects to ODP, and the -h and -P options specify the IP address and port of ODP.
-t tenant_name, --tenant=tenant_name
The OceanBase Database tenant to connect to. For more information about tenants, see the official OceanBase documentation.
-u user_name, --user=user_name
The username for connecting to OceanBase Database. If you specify an incorrect username, OBLOADER cannot connect to OceanBase Database.
-p 'password', --password='password'
The user password for connecting to OceanBase Database. If no password is set for the current database account, you do not need to specify this option. To specify this option on the command line, you must enclose the value in single quotation marks (' '). Example: -p '******'.
Note
If you are using Windows OS, enclose the value in double quotation marks (" "). This rule also applies to string values of other options.
-D database_name, --database=database_name
Specifies the database into which database object definitions and table data are imported.
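For illustration, the connection options above can be combined into a single import command. The following is a sketch only: the launch script name obloader and all host, credential, and path values are placeholders to replace with your own environment's settings.

```bash
# Connect through ODP (-c is given, so -h/-P point at ODP) and load all
# CSV files under -f into database mydb. All values are placeholders.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --csv --table '*' \
    -f /data/dumps/mydb
```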
--sys-user sys_username
The username of a user with required privileges in the sys tenant, for example, root or proxyro. OBLOADER requires a special user in the sys tenant to query the metadata in system tables. Default value: root. You do not need to specify this option for OceanBase Database V4.0.0 and later.
--sys-password 'sys_password'
The password of a user with required privileges in the sys tenant. This option is used along with --sys-user. By default, the password of the root user in the sys tenant is empty. When you specify this option on the command line, enclose the value in single quotation marks (' '). Example: --sys-password '******'. You do not need to specify this option for OceanBase Database V4.0.0 and later.
Note
If this option is not specified, OBLOADER cannot query metadata in system tables, and the import features and performance may be greatly affected.
--public-cloud
Specifies to import database objects or table data into an OceanBase cluster deployed in the public cloud. If you specify this option on the command line, you do not need to specify the -t or -c connection option. OBLOADER turns on the --no-sys option by default. For more information, see the description of the --no-sys option. The use of the --public-cloud or --no-sys option affects the import features, performance, and stability. OceanBase Database V2.2.30 and later versions support throttling on the server. Therefore, before you use the --public-cloud or --no-sys option, you can run the following commands on the server to set throttling thresholds as required and ensure the stability of the data import:
alter system set freeze_trigger_percentage=50;
alter system set minor_merge_concurrence=64;
alter system set writing_throttling_trigger_percentage=80 tenant='xxx';
--no-sys
Specifies to import database objects or table data into an OceanBase cluster deployed in the private cloud when you cannot provide the password of the sys tenant. Unlike with the --public-cloud option, when you use the --no-sys option, you need to specify the -t connection option on the command line and also add the -c option to connect to the ODP service. In OceanBase Database V4.0.0 and earlier, if you do not specify the --public-cloud or --no-sys option, you must specify the --sys-user and --sys-password options in OBLOADER.
--logical-database
Specifies to import data by using ODP (Sharding). When you specify the --logical-database option on the command line, the definition of a random physical database shard is exported, and the shard cannot be directly imported into the database. You need to manually convert the exported physical shard definition into a logical one before you import it into the database for business use.
Feature options
-f 'file_path', --file-path='file_path'
The absolute path on a local disk for storing data files. When you import data files from Alibaba Cloud Object Storage Service (OSS), you must still specify the -f option to save the generated logs and binary files.
--file-suffix 'suffix_name'
The file name extension of the data files to be imported. Generally, the file name extension correlates with the file format. For example, a CSV file is usually named xxx.csv. If you do not strictly follow the naming conventions, you may name a CSV file with any extension, such as .txt; in this case, OBLOADER cannot identify the file as a CSV file. This option is optional, and a default value is available for each data format: .csv for a CSV file, .sql for an SQL file, and .dat for a CUT or POS file. When you specify this option on the command line, enclose the value in single quotation marks (' '). Example: --file-suffix '.txt'.
--file-encoding 'encode_name'
The file character set used when OBLOADER reads data files, which is not the database character set. When you specify this option on the command line, enclose the value in single quotation marks (' '). Example: --file-encoding 'GBK'. Default value: UTF-8.
--ctl-path 'control_path'
The absolute path on a local disk for storing control files. You can configure built-in preprocessing functions in a control file; data is preprocessed by these functions before being imported. For example, the functions can perform case conversion and check the data for empty values. For the use of control files, see Data processing. When you specify this option on the command line, enclose the value in single quotation marks (' '). Example: --ctl-path '/home/controls/'.
--log-path 'log_path'
The output directory for the operational logs of OBLOADER. If this option is not specified, OBLOADER operational logs are stored in the directory specified by the -f option. Except in special circumstances, you do not need to redirect the log output.
--ddl
Specifies to import DDL files. A DDL file stores the database object definitions. The file is named in the format of object name-schema.sql. When this option is specified, only database object definitions are imported, and table data is not imported.
Notice
Do not include comments or statements that enable or disable features in the file. If database objects depend on each other, the import may fail and manual intervention is required.
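As a sketch of a definitions-only run (placeholder values; the launch script is assumed to be named obloader), the --ddl option is typically combined with --all or a specific object option:

```bash
# Import only the object definitions (DDL files) found under -f;
# no table data is loaded in this run.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --ddl --all \
    -f /data/dumps/mydb
```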
--sql
Specifies to import data files in the SQL format. In an SQL file, data is stored in the format of INSERT statements. The file is named in the format of table name.sql. Each line of table data corresponds to an executable INSERT statement in an SQL file. An SQL file is different from a DDL file in terms of content format. We recommend that you use this option together with the
--table option. When it is used together with the --all option, OBLOADER imports only table data but not database object definitions.
Notice
The data cannot contain SQL functions, special characters, line breaks, and so on. Otherwise, the file may not be correctly parsed.
--orc
Specifies to import data files in the ORC format. An ORC file stores data in a column-oriented format. The file is named in the format of table name.orc. For more information about ORC format definitions, see Apache ORC.
--par
Specifies to import data files in the Parquet format. A Parquet file stores data in a column-oriented format. The file is named in the table name.parquet format. For more information about Parquet format definitions, see Apache Parquet.
--mix
Specifies to import MIX files. A MIX file stores a mix of DDL and DML statements, and does not have strict naming conventions.
Notice
MIX files do not have a strict format, are complex to process, and deliver poor performance. Therefore, we recommend that you do not use MIX files.
--csv
Specifies to import data files in the CSV format. In a CSV file, data is stored in the standard CSV format. The file is named in the format of table name.csv. For CSV format specifications, see the definitions in RFC 4180. Delimiter errors are the most common errors that occur in CSV files. Single or double quotation marks are usually used as the delimiter. If data in the file contains the delimiter, you must specify escape characters. Otherwise, OBLOADER fails to parse the data due to its incorrect format. We strongly recommend that you use the CSV format. We recommend that you use this option together with the
--table option. When it is used together with the --all option, OBLOADER imports only table data but not database object definitions.
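For example, a single-table CSV import might look as follows (a sketch with placeholder values; --file-suffix is shown because the sample file is assumed to use a .txt extension):

```bash
# Load T1.txt as CSV data for table T1.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --csv --table 'T1' --file-suffix '.txt' \
    -f /data/dumps/mydb
```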
--pos
Specifies to import data files in the POS format. A POS file stores data with a fixed length in bytes. The file is named in the format of table name.dat. Data stored in each column of a POS file occupies a fixed number of bytes: a column value shorter than specified is padded with spaces, and a column value longer than specified is truncated, in which case the data may be garbled. This is different from formats that store data with a fixed length in characters. We recommend that you use this option together with the --table option. When it is used together with the --all option, OBLOADER imports only table data but not database object definitions.
--cut
Specifies to import data files in the CUT format. A CUT file uses a character or a character string as the separator. The file is named in the format of table name.dat. How do you distinguish the CUT format from the CSV format? A CSV file uses a single character, usually a comma (,), to separate fields, whereas a CUT file usually uses a string, such as |@|. A CSV file uses single or double quotation marks as delimiters around fields, while a CUT file does not use delimiters. We recommend that you use this option together with the --table option. When it is used together with the --all option, OBLOADER imports only table data but not database object definitions.
Notice
In a CUT file, each data record occupies an entire line. When a single character is used as the field separator, avoid special characters in the data, such as delimiters, carriage returns, and line breaks. Otherwise, OBLOADER cannot correctly parse the data.
When you specify the --cut option on the OBLOADER command line, do not use the --trail-delimiter option if no field separator or separator string exists at the end of the data lines in the file. Otherwise, a serious error occurs on OBLOADER.
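A CUT import typically pairs --cut with --column-splitter, because the separator is a string rather than a single character. A sketch with placeholder values:

```bash
# Fields in the .dat files are separated by the string |@|.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --cut --table 'T1' --column-splitter '|@|' \
    -f /data/dumps/mydb
```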
--table-group 'table_group_name[,table_group_name...]' | --table-group '*'
Specifies to import table group definitions. This option is similar to the --table option, except that this option does not support data import.
--all
Specifies to import all supported database object definitions and table data. When this option is used in combination with --ddl, all database object definition files are imported. When this option is used in combination with --csv, --sql, --cut, or --pos, data in all tables is imported in the specified format. To import all database object definitions and table data, specify --all and --ddl along with a data format option.
Notice
The --all option is mutually exclusive with the individual database object options and cannot be specified together with them. If both the --all option and a database object option are specified, the --all option takes precedence.
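To restore both definitions and data in one run, --all is combined with --ddl and a data format option, as described above. A sketch with placeholder values:

```bash
# Import every supported object definition, then all table data from CSV files.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --all --ddl --csv \
    -f /data/dumps/mydb
```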
--table 'table_name[,table_name...]' | --table '*'
Specifies to import table definitions or table data. When this option is used in combination with --ddl, only table definitions are imported. When this option is used in combination with a data format option, only table data is imported. To specify multiple tables, separate the table names with commas (,). By default, table names are in uppercase for OceanBase Database in Oracle mode and in lowercase for OceanBase Database in MySQL mode. For example, in Oracle mode, both --table 'test' and --table 'TEST' indicate the table named TEST; in MySQL mode, both indicate the table named test. If table names are case-sensitive, enclose them in brackets ([ ]). For example, --table '[test]' indicates the table named test, while --table '[TEST]' indicates the table named TEST. If the table name is specified as an asterisk (*), all table definitions and data are imported.
Notice
When you use a control file to import data, the table name specified in the --table option must be in the same letter case as the table name in the database. Otherwise, the control file fails to take effect.
--view 'view_name[, view_name...]' | --view '*'
Specifies to import view definitions. This option is similar to the --table option, except that this option does not support data import.
--trigger 'trigger_name[, trigger_name...]' | --trigger '*'
Specifies to import trigger definitions. This option is similar to the --table option, except that this option does not support data import.
--sequence 'sequence_name[, sequence_name...]' | --sequence '*'
Specifies to import sequence definitions. This option is similar to the --table option, except that this option does not support data import. This option is supported only for OceanBase Database in Oracle mode.
--synonym 'synonym_name[, synonym_name...]' | --synonym '*'
Specifies to import synonym definitions. This option is similar to the --table option, except that this option does not support data import. This option is supported only for OceanBase Database in Oracle mode.
--type 'type_name[, type_name...]' | --type '*'
Specifies to import type definitions. This option is similar to the --table option, except that this option does not support data import. This option is supported only for OceanBase Database in Oracle mode.
--type-body 'typebody_name[, typebody_name...]' | --type-body '*'
Specifies to import type body definitions. This option is similar to the --table option, except that this option does not support data import. This option is supported only for OceanBase Database in Oracle mode.
--package 'package_name[, package_name...]' | --package '*'
Specifies to import package definitions. This option is similar to the --table option, except that this option does not support data import. This option is supported only for OceanBase Database in Oracle mode.
--package-body 'packagebody_name[, packagebody_name...]' | --package-body '*'
Specifies to import package body definitions. This option is similar to the --table option, except that this option does not support data import. This option is supported only for OceanBase Database in Oracle mode.
--function 'function_name[, function_name...]' | --function '*'
Specifies to import function definitions. This option is similar to the --table option, except that this option does not support data import.
--procedure 'procedure_name[, procedure_name...]' | --procedure '*'
Specifies to import stored procedure definitions. This option is similar to the --table option, except that this option does not support data import.
--replace-object
Specifies to replace existing database object definitions during the import. For tables and synonyms, existing definitions are deleted and then new ones are created. For functions and stored procedures, existing definitions are replaced by using the CREATE OR REPLACE statement. This option can be used only in combination with the --ddl or --mix option, and does not take effect for --csv, --sql, and other data format options.
Notice
- If an object already exists in the destination database, the object definition is forcibly replaced with the object definition stored in the import file.
- If you do not need to replace objects in the destination database, do not use this option. Otherwise, business may be affected.
--retry
Resumes the import task from the breakpoint. We recommend that you specify this option to resume an import task that has more than 80% of data imported, so that you do not have to start a new import. Duplicate data errors may occur during the resumed import. If only a small part of the data has been imported, you can clear tables and start a new import, which is more efficient.
Notice
The CHECKPOINT.bin file is a savepoint file generated when the tool runs, and is located in the directory specified in the -f option. You cannot use this option if the CHECKPOINT.bin file does not exist.
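To resume, you re-run the original import command with --retry appended, so that OBLOADER can pick up the CHECKPOINT.bin savepoint from the -f directory. A sketch with placeholder values:

```bash
# Same command as the interrupted run, plus --retry.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --csv --table '*' -f /data/dumps/mydb \
    --retry
```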
--external-data
Specifies that the data files to be imported were exported by a third-party tool. OBDUMPER generates a MANIFEST.bin file, which stores metadata information, in the directory specified in the -f option during data export. When OBLOADER imports data, it parses this metadata file by default. If the metadata file is missing because the data was exported by a third-party tool, you can specify this option to skip metadata parsing during the import.
--nls-date-format 'date-format-string'
The format of dates in database connections for OceanBase Database in Oracle mode. Default value: YYYY-MM-DD HH24:MI:SS.
--nls-timestamp-format 'timestamp-format-string'
The format of timestamps in database connections for OceanBase Database in Oracle mode. Default value: YYYY-MM-DD HH24:MI:SS:FF9.
--nls-timestamp-tz-format 'timestamp-tz-format-string'
The format of timestamps with a time zone in database connections for OceanBase Database in Oracle mode. Default value: YYYY-MM-DD HH24:MI:SS:FF9 TZR.
--skip-header
Specifies to skip the headers of CSV/CUT files when the files are imported. Headers are the data in the first line. This option can be used only in combination with the --cut or --csv option. The first line of a CUT file can be skipped only in OBLOADER V3.3.0 and later.
--skip-footer
Specifies to skip the footers of CUT files when the files are imported. Footers are the data in the last line. This option can be used only in combination with the --cut option.
--null-string 'null_replacer_string'
Specifies to replace the specified character with NULL. This option can be used only in combination with the --cut or --csv option. Default value: \\N.
--empty-string 'empty_replacer_string'
Specifies to replace the specified character with an empty string (''). This option can be used only in combination with the --csv option. Default value: \\E.
--line-separator 'line_separator_string'
The line separator. Custom line separators are supported for the import of CUT files. The default value depends on the operating system. Valid values: \r, \n, and \r\n.
Note
This option can be used only in combination with the --cut or --csv option.
--column-separator 'column_separator_char'
The column separator. This option can be used only in combination with the --csv option and supports a single character only. Default value: comma (,).
--escape-character 'escape_char'
The escape character. This option can be used only in combination with the --cut or --csv option and supports a single character only. Default value: backslash (\\).
--column-delimiter 'column_delimiter_char'
The column delimiter. This option can be used only in combination with the --csv option and supports a single character only. Default value: single quotation mark (').
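The separator, delimiter, and escape options are often set together for non-default CSV dialects. A sketch with placeholder values:

```bash
# CSV files that use ';' between columns, '"' around values,
# and '#' as the escape character.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --csv --table 'T1' \
    --column-separator ';' --column-delimiter '"' --escape-character '#' \
    -f /data/dumps/mydb
```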
--with-trim
Specifies to delete the space characters on the left and right sides of the data. This option can be used only in combination with the --cut or --csv option.
--trail-delimiter
Specifies to truncate the last column separator in a line. This option can be used only in combination with the --cut or --csv option.
--ignore-unhex
Specifies to ignore hexadecimal strings in decoding. This option applies only to binary data types.
--exclude-table 'table_name[, table_name...]'
Specifies to exclude the specified tables from the import of table definitions or table data. Fuzzy match on table names is supported. Example: --exclude-table 'test1,test*,*test,te*st'. The preceding example excludes the following tables from the import of table definitions or table data:
- test1
- All tables whose names start with test
- All tables whose names end with test
- All tables whose names start with te and end with st
--exclude-data-types 'datatype[, datatype...]'
Specifies to exclude the specified data types during the data import.
--column-splitter 'split_string'
The column separator string. This option can be used only in combination with the --cut option.
--storage-uri 'storage_uri_string'
The URI of the storage space. OBLOADER 4.2.0 and later support importing database object definitions and table data from Alibaba Cloud Object Storage Service (OSS) or Amazon Simple Storage Service (S3). OBLOADER 4.2.1 and later also support importing database object definitions and table data from Hadoop.
Syntax of 'storage_uri_string':
[scheme://host]path[?parameters]
parameters: key[=value],...
Parameters:
| Parameter | Description |
|---|---|
| scheme | The storage scheme. Alibaba Cloud OSS, Amazon S3, and Hadoop are supported. If the scheme is not Alibaba Cloud OSS, Amazon S3, or Hadoop, an error is returned. |
| host | The name of the storage space. When you import data from Alibaba Cloud OSS or Amazon S3, the host parameter specifies a bucket. For more information, see OSS Bucket. When you import data from Apache Hadoop, the host parameter specifies a Hadoop node, in the <ip>:<port> or <cluster_name> format. |
| path | The data storage path in the storage space. The path must start with a slash (/). |
| parameters | The parameters required for the request. The value can be a single key or multiple key-value pairs. |

Example: Import data from Amazon S3
--storage-uri 's3://bucket/path?region={region}&access-key={accessKey}&secret-key={secretKey}'
- s3: indicates that the storage scheme is Amazon S3.
- bucket: the name of the bucket in Amazon S3.
- path: the data storage path in the Amazon S3 bucket.
- ?region={region}&access-key={accessKey}&secret-key={secretKey}: the region, AccessKey ID, and AccessKey secret used to access the bucket.
Supported parameters:
| Parameter | Value required? | Description | Supported storage scheme | Supported version |
|---|---|---|---|---|
| endpoint | Yes | The endpoint of the region where the OSS host resides. | OSS | V4.2.0 |
| region | Yes | The region where the S3 bucket resides. | S3 | V4.2.0 |
| storage-class | Yes | The storage class of Amazon S3. | S3 | V4.2.0 |
| access-key | Yes | The AccessKey ID used to access the bucket. | OSS/S3 | V4.2.0 |
| secret-key | Yes | The AccessKey secret used to access the bucket. | OSS/S3 | V4.2.0 |
| hdfs-site-file | Yes | The hdfsSiteFile configuration file, which contains the configuration information of Apache Hadoop, such as the block size and number of replicas. Storage and access rules are set for Apache Hadoop based on this configuration information. | Apache Hadoop | V4.2.1 |
| core-site-file | Yes | The coreSiteFile configuration file, which contains the core configuration information of the Hadoop cluster, such as the URI of the file system and the default file system of Apache Hadoop. | Apache Hadoop | V4.2.1 |
| principal | Yes | The identifier for identity authentication in Kerberos. | Apache Hadoop | V4.2.1 |
| keytab-file | Yes | The absolute path of the Keytab file, which authorizes users or services to access system resources. | Apache Hadoop | V4.2.1 |
| krb5-conf-file | Yes | The path where the Kerberos configuration file resides. | Apache Hadoop | V4.2.1 |
Note
- When you import database object definitions and table data from Alibaba Cloud OSS, the `endpoint`, `access-key`, and `secret-key` parameters are required.
- When you import database object definitions and table data from Amazon S3, the `region`, `access-key`, and `secret-key` parameters are required.
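By analogy with the Amazon S3 example above, an Alibaba Cloud OSS import would pass the endpoint, access-key, and secret-key parameters in the URI. This is a sketch only: the oss scheme string and all placeholder values should be verified against your environment.

```bash
# -f still points at a local directory for the generated logs and binary files.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --csv --table '*' -f /tmp/obloader-work \
    --storage-uri 'oss://bucket/path?endpoint={endpoint}&access-key={accessKey}&secret-key={secretKey}'
```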
--max-discards int_num
The maximum amount of duplicate data allowed in a single table during the import. If the duplicate data in a table exceeds the specified maximum, data import for this table stops. The import failure on the table is logged. Data import for other tables is not affected. Default value: -1, which indicates that data import does not stop because of duplicate data.
Note
This option takes effect only when the table contains a primary key or a unique key and duplicate data exists in the table.
--max-errors int_num
The maximum number of errors allowed for an import. If the number of import errors of a table exceeds the specified maximum, data import for this table stops. The import failure on the table is logged. This option can be set to
0, -1, or any positive integer N. If you set this option to -1, errors are ignored and the import continues. Default value: 1000.
--exclude-column-names 'column_name[, column_name...]'
Specifies to exclude the specified columns from the import of data.
Notice
- The letter case of the specified column name must be the same as that of the column name in the table structure.
- In the imported data file, the excluded columns must have no corresponding data, and the imported columns must be in the same order as the columns in the table.
--replace-data
Specifies to replace duplicate data in the table. This option is only applicable to tables that have primary keys or unique keys with the NOT NULL constraint. If a file has a large amount of duplicate data, for example, more than 30% of the total data volume, we recommend that you clear the table and import it again. The performance of data replacement is lower than that of import after table clearing. This option can be used only in combination with the
--cut, --csv, and --sql options, and does not take effect for the --ddl option.
Notice
- When the file and the table have duplicate data, the data in the table is replaced with the data in the file.
- For tables without primary keys or unique keys, this option appends data to the tables.
- If you do not need to replace duplicate data, do not specify this option. Otherwise, business may be affected.
--truncate-table
Specifies to truncate tables in the destination database. This option can be used only in combination with a data format option. When used with the --all or --table '*' option, this option truncates all tables in the destination database. If you want to truncate only some of the tables, explicitly specify them in the format of --table 'test1,test2,[....]'. When used with the --with-data-files option, this option truncates only the tables that have corresponding data files.
Notice
- If the --all or --table '*' option is used in combination with the --truncate-table option, the tool truncates all tables in the destination database, even if no corresponding data files exist in the directory specified by the -f option.
- Do not use this option to truncate the destination database or destination tables; we recommend that you truncate tables manually to avoid impacts on business.
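If you accept the risks described in the notice, a more guarded use of this option pairs it with --with-data-files, so that only tables that actually have data files are truncated. A sketch with placeholder values:

```bash
# Truncate T1 and T2 only if their data files exist under -f, then reload them.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --csv --table 'T1,T2' \
    --truncate-table --with-data-files \
    -f /data/dumps/mydb
```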
--with-data-files
When used in combination with the --truncate-table or --delete-from-table option, this option specifies to truncate or clear the tables that have corresponding data files before the data import. If it is not used with these options, this option does not take effect.
--delete-from-table
Specifies to clear tables in the destination database before the import. This option can be used only in combination with a data format option. When used with the --all or --table '*' option, this option clears all tables in the destination database. If you want to clear only some of the tables, explicitly specify them in the format of --table 'test1,test2,[....]'. When used with the --with-data-files option, this option clears only the tables that have corresponding data files.
Notice
- If the --all or --table '*' option is used in combination with the --delete-from-table option, the tool clears all tables in the destination database, even if no corresponding data files exist in the directory specified by the -f option.
- Do not use this option to clear the destination database or destination tables, especially tables with large data volumes. We strongly recommend that you delete table data manually based on your business requirements to avoid impacts on your business.
--file-regular-expression
The regular expression that is used to match the file name during an import. This option applies only to the import of a single table.
--ignore-escape
Specifies to ignore escape operations on characters for the import of CUT files. By default, escape operations are not ignored.
--strict='strict_string'
Controls the impact of dirty data on the exit status of the tool. Default value: true, which indicates that the tool exits in the failure state (System exit 1) when the imported data contains bad record or discard record errors. The value false indicates that the exit status (System exit 0) of the tool is not affected by bad record or discard record errors in the imported data.
Note
This option can be used in combination with the --max-discards or --max-errors option to specify that, when the amount of duplicate data or the number of errors is within the specified range, the tool skips the errors and continues. For more information, see Error handling.
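For example, the following sketch (placeholder values) tolerates a bounded number of bad and duplicate records per table without failing the run:

```bash
# Skip up to 100 bad records and 1000 duplicates per table,
# and still exit with status 0 if those limits are not exceeded.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --csv --table '*' \
    --max-errors 100 --max-discards 1000 --strict='false' \
    -f /data/dumps/mydb
```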
--character-set 'character_set_string'
The character set used when you create a database connection. Default value: the value of the jdbc.url.character.encoding session variable in the session.properties file. The character set specified by the --character-set option overwrites the value of jdbc.url.character.encoding. Valid values: binary, gbk, gb18030, utf16, and utf8mb4.
Performance options
--rw float_num
The proportion of data file parsing threads. Default value: 0.2. You can use this option in combination with the --thread option to calculate the number of file parsing threads. Formula: Number of file parsing threads = Value of --thread × Value of --rw.
--slow float_num
The threshold for triggering a slow import. When the memory usage of OceanBase Database reaches this threshold, OBLOADER slows down to prevent excessively high memory usage on the database. Default value: 0.75.
--pause float_num
The threshold for triggering an import pause. When the memory usage of OceanBase Database reaches this threshold, OBLOADER pauses the data import to prevent issues caused by excessively high memory usage on the database. Default value: 0.85.
--batch int_num
The number of records in a batch of transactions. We recommend that you set this option to a value inversely proportional to the table width. Do not set this option to an excessively large value, which may cause database memory overflow. Default value: 200.
--thread int_num
The number of concurrent import threads allowed. You can use this option together with the --rw option to calculate the number of file parsing threads. Formula: Number of file parsing threads = Value of --thread × Value of --rw. Default value: Number of CPU cores × 2. OceanBase Database executes DDL statements in sequence, so you do not need to specify this option when importing database object definitions.
--block-size int_num
The block size of a file to be imported. You do not need to explicitly specify a unit; the default unit is MB. By default, OBLOADER automatically splits a large file into multiple logical subfiles (blocks) of 64 MB each. The logical subfiles do not occupy additional storage space. Default value: 64.
--max-tps int_num
The maximum import speed. Default unit: lines/second. You can specify this option to ensure import stability.
--max-wait-timeout int_num
The timeout period for waiting for a database major compaction to complete. You do not need to explicitly specify a unit; the default unit is hour. When the database is undergoing a major compaction, OBLOADER stops the data import and waits up to the specified period. Default value: 3.
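As an illustration of how these options interact (placeholder values; tune them against your own hardware), the sketch below runs 16 threads, of which 16 × 0.25 = 4 parse files, uses smaller transaction batches for a wide table, and caps throughput:

```bash
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant \
    -u importer -p '******' -D mydb \
    --csv --table '*' \
    --thread 16 --rw 0.25 --batch 100 --max-tps 10000 \
    -f /data/dumps/mydb
```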
Other options
-H, --help
Specifies to show the help information of the tool.
-V, --version
Specifies to show the version of the tool.