OBLOADER allows you to specify the information required for an import through command-line options. For more information about the options and their usage scenarios, see Options and Scenarios and examples.
Option styles
OBLOADER supports the Unix and GNU styles of command-line options.
- Unix style: An option is prefixed with a single hyphen (-) and consists of a single character, such as `ps -e`. In this style, the space between an option and its value can be omitted, as in `-p******`.
- GNU style: An option is prefixed with double hyphens (--) and consists of a single character or a string, such as `ps --version`. An option and its value must be separated with a space, as in `--table 'test'`.
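For example, the two styles can be combined in one command. The following is a sketch that assumes the tool is launched through its obloader script; the host, credentials, and paths are placeholders:

```bash
# Unix style: single-character options; the space before a value may be omitted.
obloader -h127.0.0.1 -P2883 -uroot -p'******' -Dmydb --csv --table 't1' -f '/data/import'
# GNU style: double-hyphen options; option and value are separated by a space.
obloader --host 127.0.0.1 --port 2883 --user root --password '******' \
    --database mydb --csv --table 't1' --file-path '/data/import'
```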
Options
| Option | Required? | Description | Introduced in | Deprecated? |
|---|---|---|---|---|
| -h(--host) | Yes | The host address for connecting to OceanBase Database Proxy (ODP) or a physical OceanBase Database node. | ||
| -P(--port) | Yes | The host port for connecting to ODP or a physical OceanBase Database node. | ||
| --rpc-port | No | The RPC port for connecting to an OBServer node. | V4.2.5 | |
| --direct | No | Specifies to use the bypass import mode. | V4.2.5 | |
| --parallel | No | The degree of parallelism (DOP) for loading data in bypass import mode. | V4.2.6 | |
| -c(--cluster) | No | The cluster name of the database. | ||
| -t(--tenant) | No | The tenant name of the cluster. | ||
| -u(--user) | Yes | The username that you use to log on to the database. | ||
| -p(--password) | No | The password that you use to log on to the database. | ||
| -D(--database) | No | The name of the database. | ||
| -f(--file-path) | Yes | The directory that stores the data file or the absolute path of the data file. | ||
| --sys-user | No | The username of a user under the sys tenant. | ||
| --sys-password | No | The password of the user under the sys tenant. | ||
| --public-cloud | No | Indicates that the database environment is ApsaraDB for OceanBase. | ||
| --file-suffix | No | The file name extension. | ||
| --file-encoding | No | The file character set, which is different from the database character set. | ||
| --ctl-path | No | The directory of the control files. | ||
| --log-path | No | The directory to which log files are exported. | ||
| --ddl | No | Imports DDL files. | ||
| --csv | No | Imports data files in the CSV format (recommended). | ||
| --sql | No | Imports data files in the SQL format, which is different from DDL files. | ||
| --orc | No | Imports data files in the ORC format. | V4.0.0 | |
| --par | No | Imports data files in the Parquet format. | V4.0.0 | |
| --mix | No | Imports a mixed file that contains both definitions and data. | ||
| --pos | No | Imports data files in the POS format. | ||
| --cut | No | Imports data files in the CUT format. | ||
| --all | No | Imports all supported database object definitions and table data. | ||
| --table-group | No | Imports table group definitions. | V3.1.0 | |
| --table | No | Imports table definitions or table data. | ||
| --view | No | Imports view definitions. | ||
| --trigger | No | Imports trigger definitions. | ||
| --sequence | No | Imports sequence definitions. | ||
| --synonym | No | Imports synonym definitions. | ||
| --type | No | Imports type definitions. | V4.0.0 | |
| --type-body | No | Imports type body definitions. | ||
| --package | No | Imports package definitions. | ||
| --package-body | No | Imports package body definitions. | ||
| --function | No | Imports function definitions. | ||
| --procedure | No | Imports stored procedure definitions. | ||
| --replace-object | No | Replaces existing object definitions. We recommend that you replace object definitions manually, rather than using this option. | ||
| --rw | No | The proportion of data file parsing threads. | ||
| --slow | No | The threshold for triggering a slow import. | ||
| --pause | No | The threshold for triggering an import pause. | ||
| --batch | No | The number of records for a batch of transactions. | ||
| --thread | No | The number of concurrent import threads allowed. | ||
| --block-size | No | The block size for a file to be imported. | ||
| --retry | No | Reimports data from the last savepoint. | ||
| --external-data | No | Specifies that the data files were exported by a third-party tool. The check on the metadata file is skipped. | ||
| --max-tps | No | The maximum import speed. Unit: lines/second. | ||
| --max-wait-timeout | No | The timeout period of waiting for a database major compaction to complete. | ||
| --nls-date-format | No | The session-level datetime format, which is supported only for OceanBase Database in Oracle mode. | ||
| --nls-timestamp-format | No | The session-level timestamp format, which is supported only for OceanBase Database in Oracle mode. | ||
| --nls-timestamp-tz-format | No | The session-level timestamp format with a time zone, which is supported only for OceanBase Database in Oracle mode. | ||
| --trail-delimiter | No | Truncates the last column separator in a line. | ||
| --with-trim | No | Deletes the space characters on the left and right sides of the data. | ||
| --skip-header | No | Skips the first line of data of CSV/CUT files when the files are imported. Only OBLOADER V3.3.0 and later support skipping the first line of data in a CUT file. | ||
| --skip-footer | No | Skips the last line of data in a CUT file during import. | V3.3.0 | |
| --null-string | No | Replaces the specified character with NULL. Default value: \N. | ||
| --empty-string | No | Replaces the specified character with an empty string (' '). Default value: \E. | ||
| --line-separator | No | The line separator. Custom line separators are supported for the import of CUT files. Default value: \n. | ||
| --column-separator | No | The column separator, which is different from the column separator string in the CUT format. | ||
| --escape-character | No | The escape character. Default value: \. | ||
| --column-delimiter | No | The column delimiter. Default value: '. | ||
| --ignore-unhex | No | Ignores hexadecimal strings in decoding. | ||
| --exclude-table | No | Excludes the specified tables from the import of table definitions and table data. | ||
| --exclude-data-types | No | Excludes the specified data types from the import of data. | ||
| --column-splitter | No | The column separator string, which is different from the column separator in the CSV format. | ||
| --max-discards | No | The maximum number of duplicate records allowed for a single table during import. Default value: -1. | ||
| --max-errors | No | The maximum number of errors allowed for a single table during data import. Default value: 1000. | ||
| --exclude-column-names | No | Excludes the specified columns from the import of data. | ||
| --replace-data | No | Replaces duplicate data in the table. This option is only applicable to tables that have primary keys or unique keys with the NOT NULL constraint. | ||
| --truncate-table | No | Truncates all tables in the destination database. We recommend that you truncate tables manually, rather than using this option. | ||
| --with-data-files | No | Truncates or clears tables with specified data files. | V3.1.0 | |
| --delete-from-table | No | Clears tables in the destination database before the import. We recommend that you clear tables manually, rather than using this option. | ||
| -V(--version) | No | Shows the version of OBLOADER. | ||
| --no-sys | No | Specifies that the password of the sys tenant cannot be provided in OceanBase Database. | V3.3.0 | |
| --logical-database | No | Specifies to import data by using ODP (Sharding). | V3.3.0 | |
| --file-regular-expression | No | The regular expression that is used to match the file name during a single-table import. | V3.3.0 | |
| --ignore-escape | No | Ignores escape operations on characters for the import of CUT format files. | V3.3.0 | |
| --storage-uri | No | The uniform resource identifier (URI) of the storage space. | V4.2.0 | |
| --character-set | No | The character set used when the database connection is created. | V4.2.4 | |
| --strict | No | Controls the impact of dirty data on the process ending status during data import. | V4.2.4 | |
| -H(--help) | No | Shows the help information of the OBLOADER command-line tool. |
Connection options
OBLOADER can read data from and write data to an OceanBase database only after connecting to the database. You can connect to an OceanBase database by specifying the following options:
-h host_name, --host=host_name
The host address for connecting to ODP or a physical OceanBase Database node.
-P port_num, --port=port_num
The host port for connecting to ODP or a physical OceanBase Database node.
-c cluster_name, --cluster=cluster_name
The OceanBase cluster to connect to. If this option is not specified, OBLOADER connects to a physical node of the database, and the related options `-h` and `-P` specify the host address and port of that node. If this option is specified, OBLOADER connects to ODP, and `-h` and `-P` specify the host address and port of ODP.

-t tenant_name, --tenant=tenant_name
The OceanBase Database tenant to connect to. For more information about tenants, see the official OceanBase documentation.
-u user_name, --user=user_name
The username for connecting to OceanBase Database. If an incorrect username is specified, OBLOADER cannot connect to OceanBase Database.
-p 'password', --password='password'
The user password for connecting to OceanBase Database. If no password is set for the current database account, you do not need to specify this option. When you specify this option on the command line, enclose the value in single quotation marks (' '), for example, `-p'******'`.

Note
If you are using Windows OS, enclose the value in double quotation marks (" "). This rule also applies to string values of other options.
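Putting the connection options together, a typical invocation might look like the following sketch; the addresses, names, and paths are placeholders:

```bash
# Connect through ODP (-c is specified), then import CSV data for all tables.
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant -u importuser -p '******' \
    -D mydb --csv --table '*' -f '/data/import'
```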
--rpc-port=rpc_port_num

The RPC port for connecting to an OBServer node. This option is used in combination with the `--direct` and `--parallel` options to connect to the RPC port of an OBServer node and import data in bypass import mode. You can query the `DBA_OB_SERVERS` system table in the sys tenant for the RPC port of the OBServer node.

Note
This option applies only to OceanBase Database V4.2.0 RC2 and later.
--direct
Specifies to use the bypass import mode. This option is used in combination with the `--rpc-port` and `--parallel` options.

Note
- The bypass import mode in OBLOADER V4.2.5 does not support binary data types.
- This option applies only to OceanBase Database V4.2.0 RC2 and later.
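A sketch of a bypass import that combines the three options; the ports and other values are placeholders (the RPC port can be looked up in DBA_OB_SERVERS):

```bash
# Bypass import: write through the OBServer RPC port with a DOP of 8.
obloader -h 10.0.0.1 -P 2881 -t mytenant -u importuser -p '******' -D mydb \
    --csv --table 'test' -f '/data/import' \
    --direct --rpc-port=2882 --parallel=8
```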
--parallel=parallel_num

The DOP for loading data in bypass import mode. This option is used in combination with the `--rpc-port` and `--direct` options.

--sys-user sys_username
The username of a user with the required privileges under the sys tenant, for example, `root` or `proxyro`. OBLOADER requires a special user under the sys tenant to query the metadata in system tables. The default value is `root`. You do not need to specify this option for OceanBase Database V4.0.0 and later.

--sys-password 'sys_password'
The password of a user with the required privileges under the sys tenant. This option is used in combination with the `--sys-user` option. By default, the password of the root user under the sys tenant is empty. When you specify this option on the command line, enclose the value in single quotation marks (' '), for example, `--sys-password '******'`. You do not need to specify this option for OceanBase Database V4.0.0 and later.

Note
If this option is not specified, OBLOADER cannot query metadata in system tables, and the import features and performance may be greatly affected.
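For versions earlier than V4.0.0, the sys credentials are passed alongside the regular connection options, as in this placeholder sketch:

```bash
obloader -h 10.0.0.1 -P 2883 -c mycluster -t mytenant -u importuser -p '******' \
    -D mydb --csv --table '*' -f '/data/import' \
    --sys-user root --sys-password '******'
```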
--public-cloud
Imports database object definitions or table data into an ApsaraDB for OceanBase cluster. If you specify this option on the command line, you do not need to specify the `-t` or `-c` connection option. OBLOADER turns on the `--no-sys` option by default; for more information, see the description of the `--no-sys` option. The use of the `--public-cloud` or `--no-sys` option affects import features, performance, and stability. OceanBase Database V2.2.30 and later support throttling on the server. Therefore, before you use the `--public-cloud` or `--no-sys` option, you can run the following statements on the server to set throttling thresholds as required and ensure the stability of the import:

alter system set freeze_trigger_percentage=50;
alter system set minor_merge_concurrence=64;
alter system set writing_throttling_trigger_percentage=80 tenant='xxx';

--no-sys
Imports database object definitions or table data into an OceanBase cluster when the password of the sys tenant cannot be provided. Unlike with the `--public-cloud` option, when you use the `--no-sys` option, you must specify the `-t` connection option on the command line and also add the `-c` option to connect to the ODP service. In OceanBase Database V4.0.0 and earlier, if neither the `--public-cloud` nor the `--no-sys` option is specified, you must specify the `--sys-user` and `--sys-password` options for OBLOADER.

--logical-database
Specifies to import data by using ODP (Sharding). When you specify the `--logical-database` option on the command line, note that the exported definition is that of a random physical database shard, which cannot be directly imported into the database. You must manually convert the exported physical shard definition to a logical one before you import it into the database for business use.
Feature options
-f 'file_path', --file-path='file_path'

The absolute path on a local disk that stores the data files. When you import data files from Alibaba Cloud Object Storage Service (OSS), you must specify the `-f` option to save the generated logs and binary files.

--file-suffix 'suffix_name'
The file name extension of the data files to be imported. Generally, the file name extension matches the file format; for example, a CSV file is usually named xxx.csv. If you do not strictly follow the naming conventions, a CSV file may carry any extension, such as .txt, in which case OBLOADER cannot identify it as a CSV file. This option is optional, and each data format has a default extension: .csv for CSV files, .sql for SQL files, and .dat for CUT or POS files. When you specify this option on the command line, enclose the value in single quotation marks (' '), for example, `--file-suffix '.txt'`.

--file-encoding 'encode_name'
The character set used when OBLOADER reads the data files, which is not the database character set. When you specify this option on the command line, enclose the value in single quotation marks (' '), for example, `--file-encoding 'GBK'`. The default value is UTF-8.

--ctl-path 'control_path'
The absolute path on the local disk that stores the control files. You can configure built-in preprocessing functions in a control file; data is preprocessed by these functions before being imported. For example, the functions can perform case conversion or check data for empty values. For the use of control files, see Data processing. When you specify this option on the command line, enclose the value in single quotation marks (' '), for example, `--ctl-path '/home/controls/'`.

--log-path 'log_path'
The output directory for the operational logs of OBLOADER. If this option is not specified, the logs are stored in the directory specified by the `-f` option. Redirection is not required for log output unless otherwise specified.

--ddl
Imports DDL files. A DDL file stores the database object definitions. The file is named in the format of object name-schema.sql. When this option is specified, only database object definitions are imported, and table data is not imported.
Notice
Avoid including comments or statements that enable or disable features in the file. If database objects depend on each other, the import may fail and require manual intervention.
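For example, the following sketch imports all object definitions from DDL files; the connection values and path are placeholders:

```bash
# --ddl with --all imports definitions only; no table data is loaded.
obloader -h 10.0.0.1 -P 2883 -t mytenant -u importuser -p '******' \
    -D mydb --ddl --all -f '/data/dump'
```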
--sql
Imports data files in the SQL format. In an SQL file, data is stored as INSERT statements, and the file is named in the format table_name.sql. Each line of table data corresponds to an executable INSERT statement in the file. An SQL file differs from a DDL file in content format. We recommend that you use this option in combination with the `--table` option. When it is used in combination with the `--all` option, OBLOADER imports only table data, not database object definitions.

Notice
The data cannot contain SQL functions, special characters, line breaks, and so on. Otherwise, the file may not be correctly parsed.
--orc
Imports data files in the ORC format. An ORC file stores data in the column-oriented format. The file is named in the format of table name.orc. For more information about ORC format definitions, see Apache ORC.
--par
Imports data files in the Parquet format. A Parquet file stores data in the column-oriented format. The file is named in the format of table name.parquet. For more information about Parquet format definitions, see Apache Parquet.
Note
When you use OBLOADER V4.2.5 or earlier to import Parquet files, the DECIMAL, DATE, TIME, and TIMESTAMP data types are not supported.
--mix
Imports MIX files. A MIX file stores a mix of DDL and DML statements, and does not have strict naming conventions.
Notice
MIX files do not have a strict format, are complex to process, and deliver poor performance. Therefore, we recommend that you do not use MIX files.
--csv
Imports data files in the CSV format. In a CSV file, data is stored in the standard CSV format, and the file is named in the format table_name.csv. For CSV format specifications, see the definitions in RFC 4180. Delimiter errors are the most common errors in CSV files: single or double quotation marks are usually used as the delimiter, and if the data itself contains the delimiter, you must specify an escape character; otherwise, OBLOADER fails to parse the data due to its incorrect format. We strongly recommend that you use the CSV format. We recommend that you use this option in combination with the `--table` option. When it is used in combination with the `--all` option, OBLOADER imports only table data, not database object definitions.
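For example, the following sketch imports CSV data for all tables while treating .txt files as CSV; all values are placeholders:

```bash
obloader -h 10.0.0.1 -P 2883 -t mytenant -u importuser -p '******' \
    -D mydb --csv --table '*' --file-suffix '.txt' -f '/data/import'
```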
--pos

Imports data files in the POS format. A POS file stores data with a fixed length in bytes (as opposed to a fixed length in characters), and the file is named in the format table_name.dat. Each column in a POS file occupies a fixed number of bytes: a column value shorter than specified is padded with spaces, and a column value longer than specified is truncated, in which case the data may be garbled. We recommend that you use this option in combination with the `--table` option. When it is used in combination with the `--all` option, OBLOADER imports only table data, not database object definitions.

--cut
Imports data files in the CUT format. A CUT file uses a character or a character string as the field separator and is named in the format table_name.dat. To distinguish the CUT format from the CSV format: a CSV file uses a single character, usually a comma (,), to separate fields, while a CUT file usually uses a string, such as `|@|`. A CSV file uses single or double quotation marks as delimiters around fields, while a CUT file has no delimiters. We recommend that you use this option in combination with the `--table` option. When it is used in combination with the `--all` option, OBLOADER imports only table data, not database object definitions.

Notice
In a CUT file, each data record is stored in an entire line. When a single character is used as the field separator, avoid special characters in the data, such as separators, carriage returns, and line breaks. Otherwise, OBLOADER cannot correctly parse the data.
When you specify the `--cut` option on the command line to import data, do not use the `--trail-delimiter` option if no field separator or separator string exists at the end of the data lines in the file. Otherwise, a serious error occurs in OBLOADER.

--table-group 'table_group_name [, table_group_name...]' | --table-group '*'
Imports table group definitions. This option is similar to the `--table` option, except that it does not support data import.

--all
Imports all supported database object definitions and table data. When this option is used in combination with `--ddl`, all database object definition files are imported. When it is used in combination with `--csv`, `--sql`, `--cut`, or `--pos`, data in all tables is imported in the specified format. To import all database object definitions and table data, specify `--all` and `--ddl` along with a data format option.

Notice

The `--all` option is mutually exclusive with the individual database object options and cannot be specified together with them. If both the `--all` option and a database object option are specified, the `--all` option is executed first.

--table 'table_name [, table_name...]' | --table '*'
Imports table definitions or table data. When this option is used in combination with `--ddl`, only table definitions are imported. When it is used in combination with a data format option, only table data is imported. To specify multiple tables, separate the table names with commas (,). By default, table names are in uppercase for OceanBase Database in Oracle mode and in lowercase for OceanBase Database in MySQL mode. For example, in Oracle mode, both `--table 'test'` and `--table 'TEST'` refer to the table named TEST; in MySQL mode, both refer to the table named test. If table names are case-sensitive, enclose them in brackets ([ ]). For example, `--table '[test]'` refers to the table named test, while `--table '[TEST]'` refers to the table named TEST. If the table name is specified as an asterisk (*), all table definitions or table data is imported.

Notice

When you use a control file to import data, the table name specified in the `--table` option must be in the same letter case as in the database. Otherwise, the control file does not take effect.
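For example, the following sketch imports CSV data only for two case-sensitive table names; the connection values are placeholders:

```bash
# Brackets preserve the letter case of the table names.
obloader -h 10.0.0.1 -P 2883 -t mytenant -u importuser -p '******' -D mydb \
    --csv --table '[Orders],[Items]' -f '/data/import'
```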
--view 'view_name [, view_name...]' | --view '*'

Imports view definitions. This option is similar to the `--table` option, except that it does not support data import.

--trigger 'trigger_name [, trigger_name...]' | --trigger '*'
Imports trigger definitions. This option is similar to the `--table` option, except that it does not support data import.

--sequence 'sequence_name [, sequence_name...]' | --sequence '*'
Imports sequence definitions. This option is similar to the `--table` option, except that it does not support data import. This option is supported only for OceanBase Database in Oracle mode.

--synonym 'synonym_name [, synonym_name...]' | --synonym '*'
Imports synonym definitions. This option is similar to the `--table` option, except that it does not support data import. This option is supported only for OceanBase Database in Oracle mode.

--type 'type_name [, type_name...]' | --type '*'
Imports type definitions. This option is similar to the `--table` option, except that it does not support data import. This option is supported only for OceanBase Database in Oracle mode.

--type-body 'typebody_name [, typebody_name...]' | --type-body '*'
Imports type body definitions. This option is similar to the `--table` option, except that it does not support data import. This option is supported only for OceanBase Database in Oracle mode.

--package 'package_name [, package_name...]' | --package '*'
Imports package definitions. This option is similar to the `--table` option, except that it does not support data import. This option is supported only for OceanBase Database in Oracle mode.

--package-body 'packagebody_name [, packagebody_name...]' | --package-body '*'
Imports package body definitions. This option is similar to the `--table` option, except that it does not support data import. This option is supported only for OceanBase Database in Oracle mode.

--function 'function_name [, function_name...]' | --function '*'
Imports function definitions. This option is similar to the `--table` option, except that it does not support data import.

--procedure 'procedure_name [, procedure_name...]' | --procedure '*'
Imports stored procedure definitions. This option is similar to the `--table` option, except that it does not support data import.

--replace-object
Replaces existing database object definitions during the import. For tables and synonyms, existing definitions are deleted and new ones are created. For functions and stored procedures, existing definitions are replaced by using the CREATE OR REPLACE statement. This option can be used only in combination with the `--ddl` or `--mix` option, and does not take effect for `--csv`, `--sql`, and other data format options.

Notice
- If an object already exists in the destination database, the object definition is forcibly replaced with the object definition stored in the import file.
- If you do not need to replace objects in the destination database, do not use this option. Otherwise, business may be affected.
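A placeholder sketch that re-imports definitions and forcibly replaces existing objects:

```bash
obloader -h 10.0.0.1 -P 2883 -t mytenant -u importuser -p '******' -D mydb \
    --ddl --all --replace-object -f '/data/dump'
```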
--retry
Resumes the import task from the last savepoint. We recommend that you specify this option to resume an import task that has already imported more than 80% of the data, so that you do not have to start over. Duplicate data errors may occur during the resumed import. If only a small part of the data has been imported, clearing the tables and starting a new import is more efficient.
Notice
The CHECKPOINT.bin file is a savepoint file generated while the tool runs, and is located in the directory specified by the `-f` option. You cannot use this option if the CHECKPOINT.bin file does not exist.

--external-data
Specifies that the data files to be imported were exported by a third-party tool. During export, OBDUMPER generates a MANIFEST.bin file that stores metadata in the directory specified by the `-f` option, and OBLOADER parses this metadata file by default during import. If the metadata file is missing, or the data was exported by a third-party tool that produces no metadata file, specify this option to skip metadata parsing during the import.

--nls-date-format 'date-format-string'
The format of dates in database connections in Oracle mode of OceanBase Database. The default value is YYYY-MM-DD HH24:MI:SS.

--nls-timestamp-format 'timestamp-format-string'
The format of timestamps in database connections in Oracle mode of OceanBase Database. The default value is YYYY-MM-DD HH24:MI:SS:FF9.

--nls-timestamp-tz-format 'timestamp-tz-format-string'
The format of timestamps that contain a time zone in database connections in Oracle mode of OceanBase Database. The default value is YYYY-MM-DD HH24:MI:SS:FF9 TZR.
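For example, in Oracle mode, the session formats can be overridden for a single import, as in this placeholder sketch:

```bash
obloader -h 10.0.0.1 -P 2883 -t oracle_tenant -u importuser -p '******' -D MYSCHEMA \
    --csv --table 'T1' -f '/data/import' \
    --nls-date-format 'YYYY-MM-DD' --nls-timestamp-format 'YYYY-MM-DD HH24:MI:SS:FF9'
```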
--skip-header

Skips the headers of CSV/CUT files when the files are imported. A header is the data in the first line. This option can be used only in combination with the `--cut` or `--csv` option. The first line of a CUT file can be skipped only in OBLOADER V3.3.0 and later.

--skip-footer
Skips the footers of CUT files when the files are imported. A footer is the data in the last line. This option can be used only in combination with the `--cut` option.

--null-string 'null_replacer_string'
Replaces the specified character with NULL. This option can be used only in combination with the `--cut` or `--csv` option. The default value is \N.

--empty-string 'empty_replacer_string'
Replaces the specified character with an empty string (' '). This option can be used only in combination with the `--csv` option. The default value is \E.

--line-separator 'line_separator_string'
The line separator. Custom line separators are supported for the import of CUT files. The default value depends on the operating system. Only \r, \n, and \r\n are supported.
Note
This option can be used only in combination with the `--cut` or `--csv` option.

--column-separator 'column_separator_char'
The column separator. This option can be used only in combination with the `--csv` option and supports a single character only. The default value is a comma (,).

--escape-character 'escape_char'
The escape character. This option can be used only in combination with the `--cut` or `--csv` option and supports a single character only. The default value is a backslash (\).

--column-delimiter 'column_delimiter_char'
The column delimiter. This option can be used only in combination with the `--csv` option and supports a single character only. The default value is a single quotation mark (').

--with-trim
Deletes the space characters on the left and right sides of the data. This option can be used only in combination with the `--cut` or `--csv` option.

--trail-delimiter
Truncates the last column separator in a line. This option can be used only in combination with the `--cut` or `--csv` option.

--ignore-unhex
Ignores hexadecimal strings in decoding. This option applies only to binary data types.
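Tying the parsing options together, a CUT import with a string separator might look like this sketch; all values are placeholders:

```bash
# |@| separates columns; the first line (header) is skipped.
# --trail-delimiter assumes every data line ends with the separator string.
obloader -h 10.0.0.1 -P 2883 -t mytenant -u importuser -p '******' -D mydb \
    --cut --table 'test' -f '/data/import' \
    --column-splitter '|@|' --skip-header --trail-delimiter
```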
--exclude-table 'table_name [, table_name...]'

Excludes the specified tables from the import of table definitions or table data. Fuzzy match on table names is supported. For example, `--exclude-table 'test1,test*,*test,te*st'` excludes the following tables from the import:

- test1
- All tables whose names start with test
- All tables whose names end with test
- All tables whose names start with te and end with st
--exclude-data-types 'datatype [, datatype...]'
Excludes the specified data types from the import of table data.
--column-splitter 'split_string'

The column separator string. This option can be used only in combination with the `--cut` option.

--storage-uri 'storage_uri_string'
The URI of the storage space. OBLOADER V4.2.0 and later support importing database object definitions and table data from Alibaba Cloud OSS or Amazon Simple Storage Service (S3). OBLOADER V4.2.1 and later also support importing database object definitions and table data from Apache Hadoop.
The syntax of 'storage_uri_string' is as follows:
[scheme://host]path[?parameters]
parameters: key[=value],...

The following table describes the components of the URI.
| Component | Description |
|---|---|
| scheme | The storage scheme. Alibaba Cloud OSS, Amazon S3, and Apache Hadoop are supported. If the scheme is not one of these, an error is returned. |
| host | The name of the storage space. When you import data from Alibaba Cloud OSS or Amazon S3, this component specifies a bucket. For more information, see OSS Bucket. When you import data from Apache Hadoop, this component specifies an Apache Hadoop node, in the `<ip>:<port>` or `<cluster_name>` format. |
| path | The data storage path in the storage space. The path must start with a slash (/). |
| parameters | The parameters required for the request. The value can be single keys or key-value pairs. |

Example: Import data from Amazon S3
--storage-uri 's3://bucket/path?region={region}&access-key={accessKey}&secret-key={secretKey}'

- s3: indicates that the storage scheme is Amazon S3.
- bucket: the name of the bucket in Amazon S3.
- path: the data storage path in the Amazon S3 bucket.
- ?region={region}&access-key={accessKey}&secret-key={secretKey}: the region, AccessKey ID, and AccessKey secret.
The following table describes the supported parameters.
| Parameter | Value required? | Description | Supported storage scheme | Supported version |
|---|---|---|---|---|
| endpoint | Yes | For OSS, the endpoint of the region where the OSS host resides; for S3, the endpoint used to access the destination S3 bucket. | OSS/S3 | V4.2.0 (OSS) / V4.2.5 (S3) |
| region | Yes | The region where the S3 bucket resides. | S3 | V4.2.0 |
| storage-class | Yes | The storage class of Amazon S3. | S3 | V4.2.0 |
| access-key | Yes | The AccessKey ID used to access the bucket. | OSS/S3 | V4.2.0 |
| secret-key | Yes | The AccessKey secret used to access the bucket. | OSS/S3 | V4.2.0 |
| hdfs-site-file | Yes | The hdfs-site configuration file, which contains the configuration information of Apache Hadoop, such as the block size and number of replicas. Storage and access rules are set for Apache Hadoop based on this configuration. | Apache Hadoop | V4.2.1 |
| core-site-file | Yes | The core-site configuration file, which contains the core configuration information of the Apache Hadoop cluster, such as the URI of the file system and the default file system of Apache Hadoop. | Apache Hadoop | V4.2.1 |
| principal | Yes | The identifier for identity authentication in Kerberos. | Apache Hadoop | V4.2.1 |
| keytab-file | Yes | The absolute path of the keytab file, which authorizes users or services to access system resources. | Apache Hadoop | V4.2.1 |
| krb5-conf-file | Yes | The path of the Kerberos configuration file. | Apache Hadoop | V4.2.1 |

Note
- When you import database object definitions and table data from Alibaba Cloud OSS, the endpoint, access-key, and secret-key parameters are required.
- When you import database object definitions and table data from Amazon S3, the region, access-key, and secret-key parameters are required.
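By analogy with the Amazon S3 example above, an Alibaba Cloud OSS import could be specified as follows; the bucket, endpoint, and keys are placeholders:

```bash
# -f is still required to save the generated logs and binary files.
obloader -h 10.0.0.1 -P 2883 -t mytenant -u importuser -p '******' -D mydb \
    --csv --table '*' -f '/data/work' \
    --storage-uri 'oss://mybucket/backup?endpoint={endpoint}&access-key={accessKey}&secret-key={secretKey}'
```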
--max-discards int_num
The maximum number of duplicate records allowed for a single table during import. If the duplicate data in a table exceeds the specified maximum, the import for that table stops and the failure is logged. Imports for other tables are not affected. The default value is -1, which indicates that the import does not stop because of duplicate data.
Note
This option takes effect only when the table contains a primary key or a unique key and duplicate data exists in the table.
--max-errors int_num
The maximum number of errors allowed for an import. If the number of import errors of a table exceeds the specified maximum, data import for this table stops. The import failure on the table is logged. This option can be set to 0, -1, or any positive integer N. If you set this option to -1, the errors are ignored and the import continues. The default value is 1000.
--exclude-column-names 'column_name [, column_name...]'

Excludes the specified columns from the import of data.

Notice
- The letter case of the specified column name must be the same as that of the column name in the table structure.
- In the imported data file, the excluded columns must have no corresponding data, and the imported columns must be in the same order as the columns in the table.
--replace-data
Replaces duplicate data in the table. This option applies only to tables that have primary keys or unique keys with the NOT NULL constraint. If a file contains a large amount of duplicate data, for example, more than 30% of the total data volume, we recommend that you clear the table and import it again, because replacing data is slower than importing into a cleared table. This option can be used only in combination with the `--cut`, `--csv`, and `--sql` options, and does not take effect for the `--ddl` option.

Notice
- When the file and the table have duplicate data, the data in the table is replaced with the data in the file.
- For tables without primary keys or unique keys, this option appends data to the tables.
- If you do not need to replace duplicate data, do not specify this option. Otherwise, business may be affected.
--truncate-table
Truncates tables in the destination database. This option can be used only in combination with a data format option. When it is used in combination with the `--all` or `--table '*'` option, it truncates all tables in the destination database. To truncate only some of the tables, explicitly specify them in the format `--table 'test1,test2,[....]'`. When it is used in combination with the `--with-data-files` option, it truncates only the tables that have corresponding data files.

Notice
- If the `--all` or `--table '*'` option is used in combination with the `--truncate-table` option, the tool truncates all tables in the destination database, even if no corresponding data files exist for some tables in the directory specified by the `-f` option.
- Do not use this option to truncate the destination database or destination tables. We recommend that you manually truncate tables to avoid impacts on business.
--with-data-files
When used in combination with the `--truncate-table` or `--delete-from-table` option, this option specifies to truncate or clear only the tables that have corresponding data files before data import. This option does not take effect on its own.

--delete-from-table
Clears tables in the destination database before the import. This option can be used only in combination with a data format option. When it is used in combination with the `--all` or `--table '*'` option, it clears all tables in the destination database. To clear only some of the tables, explicitly specify them in the format `--table 'test1,test2,[....]'`. When it is used in combination with the `--with-data-files` option, it clears only the tables that have corresponding data files.

Notice
- If the `--all` or `--table '*'` option is used in combination with the `--delete-from-table` option, the tool clears all tables in the destination database, even if no corresponding data files exist for some tables in the directory specified by the `-f` option.
- Do not use this option to clear the destination database or destination tables, especially tables with large data volumes. We strongly recommend that you delete table data manually based on your business requirements to avoid impacts on your business.
--file-regular-expression
The regular expression that is used to match the file name during an import. This option applies only to single-table import.
--ignore-escape
Ignores escape operations on characters for the import of CUT format files. By default, escape operations are not ignored.
--strict='strict_string'

Controls how dirty data affects the exit status of the process during data import. The default value is true, which indicates that the process ends with a failure (System exit 1) when the imported data contains a Bad Record or Discard Record error. If the value is set to false, Bad Record and Discard Record errors do not affect the exit status of the process (System exit 0).

Note
This option can be used in combination with the `--max-discards` or `--max-errors` option to skip errors and continue the process when the amount of duplicate data or the number of errors is within the specified range. For more information, see Error handling.
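For example, the following placeholder sketch tolerates up to 100 errors per table and still exits with success:

```bash
obloader -h 10.0.0.1 -P 2883 -t mytenant -u importuser -p '******' -D mydb \
    --csv --table '*' -f '/data/import' \
    --strict='false' --max-errors 100
```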
--character-set 'character_set_string'

The character set used when the database connection is created. The default value is the value of the session variable `jdbc.url.character.encoding` in the `session.properties` file. The character set specified by the `--character-set` option takes precedence over the one specified by `jdbc.url.character.encoding`. Supported character sets are binary, gbk, gb18030, utf16, and utf8mb4.
Performance options
--rw float_num
The proportion of data file parsing threads. The default value is 0.2. You can use this option in combination with the `--thread` option to calculate the number of file parsing threads: Number of file parsing threads = value of `--thread` × value of `--rw`.

--slow float_num
The threshold for triggering a slow import. When the memory usage of OceanBase Database reaches 75%, OBLOADER slows down to prevent an excessively high memory usage on the database. The default value is 0.75.
--pause float_num
The threshold for triggering an import pause. When the memory usage of OceanBase Database reaches 85%, OBLOADER pauses data import to prevent issues caused by an excessively high memory usage on the database. The default value is 0.85.
--batch int_num
The number of records for a batch of transactions. We recommend that you set this option to a value inversely proportional to the table width. Do not set this option to an excessively high value, which may cause database memory overflow. The default value is 200.
Note
In OBLOADER V4.2.0 and later, the default value of the `--batch` option is adapted based on the Java virtual machine (JVM) memory.

--thread int_num
The number of concurrent threads allowed. This option corresponds to the number of import threads. You can use it in combination with the `--rw` option to calculate the number of file parsing threads: Number of file parsing threads = value of `--thread` × value of `--rw`. The default value is twice the number of CPU cores. OceanBase Database executes DDL statements serially, so you do not need to specify this option when you import database object definitions.

--block-size int_num
The block size for a file to be imported. When specifying this option, you do not need to explicitly specify the unit. The default unit is MB. By default, OBLOADER automatically splits a large file into multiple logical subfiles (or blocks) sized 64 MB. The logical subfiles do not occupy additional storage space. The default value is 64.
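As a combined illustration of the performance options (the figures are arbitrary placeholders), the following sketch uses 16 threads, of which 16 × 0.25 = 4 parse files, batches 200 rows per transaction, and splits files into 64 MB blocks:

```bash
# 16 threads total; 4 of them (16 x 0.25) parse files, the rest import data.
obloader -h 10.0.0.1 -P 2883 -t mytenant -u importuser -p '******' -D mydb \
    --csv --table '*' -f '/data/import' \
    --thread 16 --rw 0.25 --batch 200 --block-size 64
```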
--max-tps int_num
The maximum import speed. You can specify this option to ensure import stability.
--max-wait-timeout int_num
The timeout period of waiting for a database major compaction to complete. When specifying this option, you do not need to explicitly specify the unit. The default unit is hour. When the database is under a major compaction, OBLOADER stops data import and waits up to the specified period. The default value is 3.
Other options
-H, --help
Shows the help information of the tool.
-V, --version
Shows the version of the tool.