Notice
All parameters of a store belong to the deliver2store section and are prefixed with logminer. unless otherwise specified. For example, logminer.oracle.url corresponds to oracle.url. If the LogMiner Reader plug-in is used, you do not need to be concerned with the section in which a parameter is located or prefix the parameter with logminer..
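As a sketch, a deliver2store configuration that follows the prefixing rule above might look like the following. All values here are illustrative, not taken from a real deployment, and the exact file syntax depends on how your store is configured:

```properties
# deliver2store section of a store configuration (illustrative values)
logminer.oracle.url=192.168.0.10:1521
logminer.oracle.user=sync_user
logminer.oracle.password=******
logminer.full_table_name_white_list=db1.*|db2.table1
logminer.enable_stage_queue_compression=true
# master.timestamp is an exception: it is not prefixed with logminer.,
# and its unit is seconds in a store
master.timestamp=1690000000
```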
| Parameter | Required? | Default value | Description |
|---|---|---|---|
| back_query_retry_times | No | 1 | The number of retries allowed when no record is found in a flashback query. |
| connect_task_queue_size | No | 20000 | The size of the connector task queue. |
| connect_timeout_ms | No | 30000 | The timeout period for connecting to the Oracle database. |
| convert_to_source_record_thread_num | No | 6 | The number of concurrent converters when incremental records are converted. |
| enable_check_ddl_cause_row_move | No | true | Specifies whether to check if DDL statements have caused row movement. |
| enable_flashback_query | No | true | Specifies whether to enable the flashback query feature. If this feature is enabled, the value of a historical moment is queried based on the system change number (SCN) of the log. Otherwise, the value of the current moment is queried. |
| enable_nopk_table_update_change_to_delete_plus_insert | No | true | Specifies whether to convert an UPDATE statement into a DELETE statement and an INSERT statement for a table without a primary key. |
| enable_replace_null | No | true | Specifies whether to replace a null value with the value specified by replace_null_string for columns with a no-null constraint. This parameter is used together with replace_null_string. |
| enable_skip_revise_valid_rowid_exception | No | false | Specifies whether to skip exceptions caused by row ID correction. |
| enable_stage_queue_compression | No | false | Specifies whether to compress prefetched archive files that are temporarily stored on the local disk. We recommend that you set this parameter to true. If the files are not compressed, they are almost the same size as the archived logs and occupy a large amount of disk space. |
| fetch_arch_logs_max_parallelism_per_instance | No | 1 | The maximum number of archived logs that can be fetched concurrently from an instance (thread). The default value is 1. When the value is greater than 1, archived logs are prefetched to improve the fetch performance. Prefetched archived logs are temporarily stored in the local disk. |
| fetch_log_size | No | 10000 | The value for setFetchSize in PreparedStatement for fetching logs. The value affects the fetch speed. |
| fetch_online_log_interval_seconds | No | 5 | The interval, in seconds, for analyzing online logs. |
| full_table_name_black_list | Yes (nullable) | None | The blacklist. The table names in the list must contain the full paths. The two-segment format database.table and three-segment format tenant.database.table are supported, but the format used must be consistent. You can specify the full name or use an asterisk (*) for each segment. Regular expressions are not supported. Multiple tables are joined with vertical lines (|). The whitelist allows *.*, but the blacklist does not. If the first section is an asterisk (*), the second section must be a dot plus an asterisk (.*). For example, *.table1 is not allowed. |
| full_table_name_white_list | Yes | None | The whitelist. The table names in the list must contain the full paths. The two-segment format database.table and three-segment format tenant.database.table are supported, but the format used must be consistent. You can specify the full name or use an asterisk (*) for each segment. Regular expressions are not supported. Multiple tables are joined with vertical lines (|). The whitelist allows *.*, but the blacklist does not. If the first section is an asterisk (*), the second section must be a dot plus an asterisk (.*). For example, *.table1 is not allowed. |
| ignore_error_code | No | Empty | The Oracle error codes to be ignored during a record query. Separate multiple error codes with vertical lines (|). |
| init_connection_size | No | 10 | The number of initial connections. |
| load_dictionary_thread_num | No | 16 | The number of concurrent threads for obtaining metadata during startup. Each schema corresponds to one task. |
| log_after_select_queue_size | No | 20000 | The queue size after the flashback query phase is completed. This parameter is used to check for bottlenecks in the pipeline. |
| log_aggregator_queue_size | No | 20000 | The queue size during the aggregation phase. |
| log_analyse_queue_size | No | 20000 | The queue size during the analysis phase. |
| log_converter_queue_size | No | 20000 | The queue size during the conversion phase. |
| log_entry_queue_size | No | 20000 | The queue size during the fetch phase. |
| master.timestamp | Yes | None | The start timestamp for pulling logs. In a store, the unit is seconds and this parameter is not prefixed with logminer.. If the LogMiner Reader plug-in is used, the unit is milliseconds. This value is used as the start checkpoint for a fresh start or a restart. When the incremental data of the LogMiner Reader plug-in is consumed, the commit timestamp of the last consumed transaction must be checkpointed, and the last checkpointed value is used as the start point during recovery after a restart. |
| max_actively_staged_arch_logs_per_instance | No | Integer.MAX_VALUE | The maximum number of archive files fetched in parallel from one instance (thread) that can be temporarily stored in the local disk. When the specified value is reached, prefetch is paused. This parameter is used to control the usage of disk space. The actual number of temporarily stored files is subject to Math.max(fetch_arch_logs_max_parallelism_per_instance, max_actively_staged_arch_logs_per_instance). |
| max_connection_size | No | None (nullable) | If this parameter is empty, the value is automatically calculated by using the following formula: max(load_dictionary_thread_num, selector_thread_num) + 4. |
| max_num_in_memory_one_transaction | No | 1000 | The maximum number of log records that can be stored in the memory for a transaction. If the specified value is exceeded, the log records will be temporarily stored on the disk. This parameter is used to resolve the problem of large transactions. |
| max_wait_ms | No | 180000 | The maximum time, in milliseconds, to wait for a connection to the Oracle database to succeed. |
| only_fetch_archived_log | No | false | Specifies whether to pull only archived logs. |
| oracle.password | Yes | None | The password of the account for fetching logs. |
| oracle.url | Yes | None | The value of ip:port or service_name of the Oracle database from which logs are to be fetched. For a pluggable database (PDB), enter the value of service_name of the PDB. |
| oracle.user | Yes | None | The username of the account for fetching logs. For a pluggable database (PDB), enter the username of a regular user. |
| output_rowid | No | false | Specifies whether to output the row ID. For tables without primary keys, you can perform deduplication based on the obtained row IDs. |
| print_data | No | false | Specifies whether to print pulled logs for troubleshooting. |
| read_timeout_ms | No | 180000 | The read timeout period of Oracle connections. |
| read_timeout_retry_times | No | 10 | The number of retries upon query exceptions in the Oracle database. |
| replace_invalid_date | No | false | Specifies whether to replace an unparseable DATE type with the time when the log was generated, so that the LogMiner Reader plug-in can keep running. |
| replace_null_string | No | " " | The value for replacing a null value. The default value is a space. |
| replace_rowids | No | Empty | Replaces specified row IDs with other values. The value is in the format of before1:after1|before2:after2. |
| selector_thread_num | No | 32 | The number of flashback query threads. Flashback query is required for INSERT LOB and UPDATE statements. You can adjust the value of this parameter to increase the speed of processing the INSERT LOB and UPDATE statements. Each flashback query thread requires one connection. |
| session_timezone | No | Asia/Shanghai | The time zone based on which strings are output for the timestamp with local time zone type. The value of this parameter must match that of the timezone parameter of the JDBCWriter. |
| skip_records | No | Empty | Skips records that contain the specified strings. Separate multiple strings with vertical lines (|). |
| skip_transactions | No | Empty | Skips transactions with the specified transaction IDs. Separate multiple transaction IDs with vertical lines (|). |
| sql_in_clause_max_parameter_num | No | 200 | The maximum number of tables that can be queried when the IN clause is used to query metadata. |
| stage_queue_directory | No | The current working path of the process. | The directory where prefetched archive files are temporarily stored. By default, it is the current working directory of the process. |
| start_timestamp_backoff_seconds | No | 300 | The number of seconds by which the fetch start point is moved back from the checkpoint time when fetching starts. |
| use_independent_fetcher_per_instance | No | false | Specifies whether to use an independent fetcher module to fetch the transaction logs generated by each instance (thread) in the database. |
| use_system_exit | No | true | Specifies whether to use the System.exit method. |
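The naming rules shared by full_table_name_white_list and full_table_name_black_list can be sketched as a small validator. This is a hypothetical helper for illustration only, not part of the store itself; it assumes the rules exactly as described above: two or three dot-separated segments, each segment a literal name or an asterisk (*), a leading asterisk only followed by asterisks (so *.table1 is rejected), and *.* permitted only in the whitelist:

```python
def is_valid_entry(entry: str, allow_full_wildcard: bool) -> bool:
    """Check one table-name entry against the whitelist/blacklist rules.

    allow_full_wildcard is True for the whitelist (which allows *.*)
    and False for the blacklist (which does not).
    """
    segments = entry.split(".")
    # Only two-segment (database.table) and three-segment
    # (tenant.database.table) formats are supported.
    if len(segments) not in (2, 3) or any(not s for s in segments):
        return False
    # *.* (all segments wildcarded) is only allowed in the whitelist.
    if all(s == "*" for s in segments) and not allow_full_wildcard:
        return False
    # If the first segment is an asterisk, every later segment must
    # also be an asterisk; e.g. *.table1 is not allowed.
    if segments[0] == "*" and any(s != "*" for s in segments[1:]):
        return False
    return True


def is_valid_list(value: str, allow_full_wildcard: bool) -> bool:
    """Validate a full parameter value; entries are joined with |."""
    return all(is_valid_entry(e, allow_full_wildcard)
               for e in value.split("|"))
```

For example, is_valid_list("db1.*|db2.table1", False) accepts a typical blacklist value, while is_valid_entry("*.table1", True) is rejected even for the whitelist.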