fiddle settings
dialect clickhouse 0 Which dialect will be used to parse query \N \N 0 Dialect clickhouse 0 Production | |
min_compress_block_size 65536 0 For [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables. In order to reduce latency when processing queries, a block is compressed when writing the next mark if its size is at least `min_compress_block_size`. By default, 65,536.\n\nThe actual size of the block, if the uncompressed data is less than `max_compress_block_size`, is no less than this value and no less than the volume of data for one mark.\n\nLet\'s look at an example. Assume that `index_granularity` was set to 8192 during table creation.\n\nWe are writing a UInt32-type column (4 bytes per value). When writing 8192 rows, the total will be 32 KB of data. Since min_compress_block_size = 65,536, a compressed block will be formed for every two marks.\n\nWe are writing a URL column with the String type (average size of 60 bytes per value). When writing 8192 rows, the average will be slightly less than 500 KB of data. Since this is more than 65,536, a compressed block will be formed for each mark. In this case, when reading data from the disk in the range of a single mark, extra data won\'t be decompressed.\n\n:::note\nThis is an expert-level setting, and you shouldn\'t change it if you\'re just getting started with ClickHouse.\n::: \N \N 0 UInt64 65536 0 Production | |
max_compress_block_size 1048576 0 The maximum size of blocks of uncompressed data before compressing for writing to a table. By default, 1,048,576 (1 MiB). Specifying a smaller block size generally leads to slightly reduced compression ratio, the compression and decompression speed increases slightly due to cache locality, and memory consumption is reduced.\n\n:::note\nThis is an expert-level setting, and you shouldn\'t change it if you\'re just getting started with ClickHouse.\n:::\n\nDon\'t confuse blocks for compression (a chunk of memory consisting of bytes) with blocks for query processing (a set of rows from a table). \N \N 0 UInt64 1048576 0 Production | |
max_block_size 65409 0 In ClickHouse, data is processed by blocks, which are sets of column parts. The internal processing cycles for a single block are efficient but there are noticeable costs when processing each block.\n\nThe `max_block_size` setting indicates the recommended maximum number of rows to include in a single block when loading data from tables. Blocks the size of `max_block_size` are not always loaded from the table: if ClickHouse determines that less data needs to be retrieved, a smaller block is processed.\n\nThe block size should not be too small to avoid noticeable costs when processing each block. It should also not be too large to ensure that queries with a LIMIT clause execute quickly after processing the first block. When setting `max_block_size`, the goal should be to avoid consuming too much memory when extracting a large number of columns in multiple threads and to preserve at least some cache locality. \N \N 0 UInt64 65409 0 Production | |
max_insert_block_size 1048449 0 The size of blocks (in number of rows) to form for insertion into a table.\nThis setting only applies in cases when the server forms the blocks.\nFor example, for an INSERT via the HTTP interface, the server parses the data format and forms blocks of the specified size.\nBut when using clickhouse-client, the client parses the data itself, and the \'max_insert_block_size\' setting on the server does not affect the size of the inserted blocks.\nThe setting also does not have a purpose when using INSERT SELECT, since data is inserted using the same blocks that are formed after SELECT.\n\nThe default is slightly more than `max_block_size`. The reason for this is that certain table engines (`*MergeTree`) form a data part on the disk for each inserted block, which is a fairly large entity. Similarly, `*MergeTree` tables sort data during insertion, and a large enough block size allows sorting more data in RAM. \N \N 0 UInt64 1048449 0 Production | |
min_insert_block_size_rows 1048449 0 Sets the minimum number of rows in the block that can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones.\n\nPossible values:\n\n- Positive integer.\n- 0 — Squashing disabled. \N \N 0 UInt64 1048449 0 Production | |
min_insert_block_size_bytes 268402944 0 Sets the minimum number of bytes in the block which can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones.\n\nPossible values:\n\n- Positive integer.\n- 0 — Squashing disabled. \N \N 0 UInt64 268402944 0 Production | |
min_insert_block_size_rows_for_materialized_views 0 0 Sets the minimum number of rows in the block which can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones. This setting is applied only for blocks inserted into [materialized view](../../sql-reference/statements/create/view.md). By adjusting this setting, you control blocks squashing while pushing to materialized view and avoid excessive memory usage.\n\nPossible values:\n\n- Any positive integer.\n- 0 — Squashing disabled.\n\n**See Also**\n\n- [min_insert_block_size_rows](#min_insert_block_size_rows) \N \N 0 UInt64 0 0 Production | |
min_insert_block_size_bytes_for_materialized_views 0 0 Sets the minimum number of bytes in the block which can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones. This setting is applied only for blocks inserted into [materialized view](../../sql-reference/statements/create/view.md). By adjusting this setting, you control blocks squashing while pushing to materialized view and avoid excessive memory usage.\n\nPossible values:\n\n- Any positive integer.\n- 0 — Squashing disabled.\n\n**See also**\n\n- [min_insert_block_size_bytes](#min_insert_block_size_bytes) \N \N 0 UInt64 0 0 Production | |
min_external_table_block_size_rows 1048449 0 Squash blocks passed to external table to specified size in rows, if blocks are not big enough. \N \N 0 UInt64 1048449 0 Production | |
min_external_table_block_size_bytes 268402944 0 Squash blocks passed to the external table to a specified size in bytes, if blocks are not big enough. \N \N 0 UInt64 268402944 0 Production | |
max_joined_block_size_rows 65409 0 Maximum block size for JOIN result (if join algorithm supports it). 0 means unlimited. \N \N 0 UInt64 65409 0 Production | |
min_joined_block_size_bytes 524288 0 Minimum block size for JOIN result (if join algorithm supports it). 0 means unlimited. \N \N 0 UInt64 524288 0 Production | |
max_insert_threads 0 0 The maximum number of threads to execute the `INSERT SELECT` query.\n\nPossible values:\n\n- 0 (or 1) — no parallel execution of `INSERT SELECT`.\n- Positive integer greater than 1.\n\nCloud default value: from `2` to `4`, depending on the service size.\n\nParallel `INSERT SELECT` takes effect only if the `SELECT` part is executed in parallel, see the [max_threads](#max_threads) setting.\nHigher values will lead to higher memory usage. \N \N 0 UInt64 0 0 Production | |
max_insert_delayed_streams_for_parallel_write 0 0 The maximum number of streams (columns) to delay the final part flush. Default is auto (100 if the underlying storage supports parallel write, for example S3; disabled otherwise) \N \N 0 UInt64 0 0 Production | |
max_final_threads \'auto(8)\' 0 Sets the maximum number of parallel threads for the `SELECT` query data read phase with the [FINAL](/sql-reference/statements/select/from#final-modifier) modifier.\n\nPossible values:\n\n- Positive integer.\n- 0 or 1 — Disabled. `SELECT` queries are executed in a single thread. \N \N 0 MaxThreads \'auto(8)\' 0 Production | |
max_threads_for_indexes 0 0 The maximum number of threads used to process indices. \N \N 0 UInt64 0 0 Production | |
max_threads \'auto(8)\' 0 The maximum number of query processing threads, excluding threads for retrieving data from remote servers (see the \'max_distributed_connections\' parameter).\n\nThis parameter applies to threads that perform the same stages of the query processing pipeline in parallel.\nFor example, when reading from a table, if it is possible to evaluate expressions with functions, filter with WHERE and pre-aggregate for GROUP BY in parallel using at least \'max_threads\' number of threads, then \'max_threads\' are used.\n\nFor queries that are completed quickly because of a LIMIT, you can set a lower \'max_threads\'. For example, if the necessary number of entries are located in every block and max_threads = 8, then 8 blocks are retrieved, although it would have been enough to read just one.\n\nThe smaller the `max_threads` value, the less memory is consumed. \N \N 0 MaxThreads \'auto(8)\' 0 Production | |
use_concurrency_control 1 0 Respect the server\'s concurrency control (see the `concurrent_threads_soft_limit_num` and `concurrent_threads_soft_limit_ratio_to_cores` global server settings). If disabled, it allows using a larger number of threads even if the server is overloaded (not recommended for normal usage, and needed mostly for tests). \N \N 0 Bool 1 0 Production | |
max_download_threads 4 0 The maximum number of threads to download data (e.g. for URL engine). \N \N 0 MaxThreads 4 0 Production | |
max_parsing_threads \'auto(8)\' 0 The maximum number of threads to parse data in input formats that support parallel parsing. By default, it is determined automatically \N \N 0 MaxThreads \'auto(8)\' 0 Production | |
max_download_buffer_size 10485760 0 The maximum size of the buffer for parallel downloading (e.g. for URL engine) per thread. \N \N 0 UInt64 10485760 0 Production | |
max_read_buffer_size 1048576 0 The maximum size of the buffer to read from the filesystem. \N \N 0 UInt64 1048576 0 Production | |
max_read_buffer_size_local_fs 131072 0 The maximum size of the buffer to read from local filesystem. If set to 0 then max_read_buffer_size will be used. \N \N 0 UInt64 131072 0 Production | |
max_read_buffer_size_remote_fs 0 0 The maximum size of the buffer to read from remote filesystem. If set to 0 then max_read_buffer_size will be used. \N \N 0 UInt64 0 0 Production | |
max_distributed_connections 1024 0 The maximum number of simultaneous connections with remote servers for distributed processing of a single query to a single Distributed table. We recommend setting a value no less than the number of servers in the cluster.\n\nThe following parameters are only used when creating Distributed tables (and when launching a server), so there is no reason to change them at runtime. \N \N 0 UInt64 1024 0 Production | |
max_query_size 262144 0 The maximum number of bytes of a query string parsed by the SQL parser.\nData in the VALUES clause of INSERT queries is processed by a separate stream parser (that consumes O(1) RAM) and not affected by this restriction.\n\n:::note\n`max_query_size` cannot be set within an SQL query (e.g., `SELECT now() SETTINGS max_query_size=10000`) because ClickHouse needs to allocate a buffer to parse the query, and this buffer size is determined by the `max_query_size` setting, which must be configured before the query is executed.\n::: \N \N 0 UInt64 262144 0 Production | |
interactive_delay 100000 0 The interval in microseconds for checking whether request execution has been canceled and sending the progress. \N \N 0 UInt64 100000 0 Production | |
connect_timeout 10 0 Connection timeout if there are no replicas. \N \N 0 Seconds 10 0 Production | |
handshake_timeout_ms 10000 0 Timeout in milliseconds for receiving Hello packet from replicas during handshake. \N \N 0 Milliseconds 10000 0 Production | |
connect_timeout_with_failover_ms 1000 0 The timeout in milliseconds for connecting to a remote server for a Distributed table engine, if the \'shard\' and \'replica\' sections are used in the cluster definition.\nIf unsuccessful, several attempts are made to connect to various replicas. \N \N 0 Milliseconds 1000 0 Production | |
connect_timeout_with_failover_secure_ms 1000 0 Connection timeout for selecting first healthy replica (for secure connections). \N \N 0 Milliseconds 1000 0 Production | |
receive_timeout 300 0 Timeout for receiving data from the network, in seconds. If no bytes were received in this interval, the exception is thrown. If you set this setting on the client, the \'send_timeout\' for the socket will also be set on the corresponding connection end on the server. \N \N 0 Seconds 300 0 Production | |
send_timeout 300 0 Timeout for sending data to the network, in seconds. If a client needs to send some data but is not able to send any bytes in this interval, the exception is thrown. If you set this setting on the client, the \'receive_timeout\' for the socket will also be set on the corresponding connection end on the server. \N \N 0 Seconds 300 0 Production | |
tcp_keep_alive_timeout 290 0 The time in seconds the connection needs to remain idle before TCP starts sending keepalive probes \N \N 0 Seconds 290 0 Production | |
hedged_connection_timeout_ms 50 0 Connection timeout for establishing connection with replica for Hedged requests \N \N 0 Milliseconds 50 0 Production | |
receive_data_timeout_ms 2000 0 Connection timeout for receiving first packet of data or packet with positive progress from replica \N \N 0 Milliseconds 2000 0 Production | |
use_hedged_requests 1 0 Enables hedged requests logic for remote queries. It allows establishing multiple connections with different replicas for a query.\nA new connection is opened in case the existing connection(s) with the replica(s) were not established within `hedged_connection_timeout`\nor no data was received within `receive_data_timeout`. The query uses the first connection which sends a non-empty progress packet (or data packet, if `allow_changing_replica_until_first_data_packet`);\nother connections are cancelled. Queries with `max_parallel_replicas > 1` are supported.\n\nEnabled by default.\n\nDisabled by default on Cloud. \N \N 0 Bool 1 0 Production | |
allow_changing_replica_until_first_data_packet 0 0 If it\'s enabled, in hedged requests we can start a new connection until the first data packet is received, even if we have already made some progress\n(but progress hasn\'t been updated for the `receive_data_timeout` timeout); otherwise, changing the replica is disabled after the first time we made progress. \N \N 0 Bool 0 0 Production | |
queue_max_wait_ms 0 0 The wait time in the request queue, if the number of concurrent requests exceeds the maximum. \N \N 0 Milliseconds 0 0 Production | |
connection_pool_max_wait_ms 0 0 The wait time in milliseconds for a connection when the connection pool is full.\n\nPossible values:\n\n- Positive integer.\n- 0 — Infinite timeout. \N \N 0 Milliseconds 0 0 Production | |
replace_running_query_max_wait_ms 5000 0 The wait time for the running query with the same `query_id` to finish, when the [replace_running_query](#replace_running_query) setting is active.\n\nPossible values:\n\n- Positive integer.\n- 0 — Throw an exception that does not allow running a new query if the server is already executing a query with the same `query_id`. \N \N 0 Milliseconds 5000 0 Production | |
kafka_max_wait_ms 5000 0 The wait time in milliseconds for reading messages from [Kafka](/engines/table-engines/integrations/kafka) before retry.\n\nPossible values:\n\n- Positive integer.\n- 0 — Infinite timeout.\n\nSee also:\n\n- [Apache Kafka](https://kafka.apache.org/) \N \N 0 Milliseconds 5000 0 Production | |
rabbitmq_max_wait_ms 5000 0 The wait time for reading from RabbitMQ before retry. \N \N 0 Milliseconds 5000 0 Production | |
poll_interval 10 0 Block at the query wait loop on the server for the specified number of seconds. \N \N 0 UInt64 10 0 Production | |
idle_connection_timeout 3600 0 Timeout to close idle TCP connections after specified number of seconds.\n\nPossible values:\n\n- Positive integer (0 - close immediately, after 0 seconds). \N \N 0 UInt64 3600 0 Production | |
distributed_connections_pool_size 1024 0 The maximum number of simultaneous connections with remote servers for distributed processing of all queries to a single Distributed table. We recommend setting a value no less than the number of servers in the cluster. \N \N 0 UInt64 1024 0 Production | |
connections_with_failover_max_tries 3 0 The maximum number of connection attempts with each replica for the Distributed table engine. \N \N 0 UInt64 3 0 Production | |
s3_strict_upload_part_size 0 0 The exact size of part to upload during multipart upload to S3 (some implementations do not support variable-size parts). \N \N 0 UInt64 0 0 Production | |
azure_strict_upload_part_size 0 0 The exact size of part to upload during multipart upload to Azure blob storage. \N \N 0 UInt64 0 0 Production | |
azure_max_blocks_in_multipart_upload 50000 0 Maximum number of blocks in multipart upload for Azure. \N \N 0 UInt64 50000 0 Production | |
s3_min_upload_part_size 16777216 0 The minimum size of part to upload during multipart upload to S3. \N \N 0 UInt64 16777216 0 Production | |
s3_max_upload_part_size 5368709120 0 The maximum size of part to upload during multipart upload to S3. \N \N 0 UInt64 5368709120 0 Production | |
azure_min_upload_part_size 16777216 0 The minimum size of part to upload during multipart upload to Azure blob storage. \N \N 0 UInt64 16777216 0 Production | |
azure_max_upload_part_size 5368709120 0 The maximum size of part to upload during multipart upload to Azure blob storage. \N \N 0 UInt64 5368709120 0 Production | |
s3_upload_part_size_multiply_factor 2 0 Multiply s3_min_upload_part_size by this factor each time s3_multiply_parts_count_threshold parts were uploaded from a single write to S3. \N \N 0 UInt64 2 0 Production | |
s3_upload_part_size_multiply_parts_count_threshold 500 0 Each time this number of parts was uploaded to S3, s3_min_upload_part_size is multiplied by s3_upload_part_size_multiply_factor. \N \N 0 UInt64 500 0 Production | |
s3_max_part_number 10000 0 Maximum part number for S3 multipart upload. \N \N 0 UInt64 10000 0 Production | |
s3_allow_multipart_copy 1 0 Allow multipart copy in S3. \N \N 0 Bool 1 0 Production | |
s3_max_single_operation_copy_size 33554432 0 Maximum size for single-operation copy in s3. This setting is used only if s3_allow_multipart_copy is true. \N \N 0 UInt64 33554432 0 Production | |
azure_upload_part_size_multiply_factor 2 0 Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage. \N \N 0 UInt64 2 0 Production | |
azure_upload_part_size_multiply_parts_count_threshold 500 0 Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor. \N \N 0 UInt64 500 0 Production | |
s3_max_inflight_parts_for_one_file 20 0 The maximum number of concurrently loaded parts in a multipart upload request. 0 means unlimited. \N \N 0 UInt64 20 0 Production | |
azure_max_inflight_parts_for_one_file 20 0 The maximum number of concurrently loaded parts in a multipart upload request. 0 means unlimited. \N \N 0 UInt64 20 0 Production | |
s3_max_single_part_upload_size 33554432 0 The maximum size of object to upload using singlepart upload to S3. \N \N 0 UInt64 33554432 0 Production | |
azure_max_single_part_upload_size 104857600 0 The maximum size of object to upload using singlepart upload to Azure blob storage. \N \N 0 UInt64 104857600 0 Production | |
azure_max_single_part_copy_size 268435456 0 The maximum size of object to copy using single part copy to Azure blob storage. \N \N 0 UInt64 268435456 0 Production | |
s3_max_single_read_retries 4 0 The maximum number of retries during single S3 read. \N \N 0 UInt64 4 0 Production | |
azure_max_single_read_retries 4 0 The maximum number of retries during single Azure blob storage read. \N \N 0 UInt64 4 0 Production | |
azure_max_unexpected_write_error_retries 4 0 The maximum number of retries in case of unexpected errors during Azure blob storage write \N \N 0 UInt64 4 0 Production | |
s3_max_unexpected_write_error_retries 4 0 The maximum number of retries in case of unexpected errors during S3 write. \N \N 0 UInt64 4 0 Production | |
s3_max_redirects 10 0 Max number of S3 redirects hops allowed. \N \N 0 UInt64 10 0 Production | |
s3_max_connections 1024 0 The maximum number of connections per server. \N \N 0 UInt64 1024 0 Production | |
s3_max_get_rps 0 0 Limit on S3 GET request per second rate before throttling. Zero means unlimited. \N \N 0 UInt64 0 0 Production | |
s3_max_get_burst 0 0 Max number of requests that can be issued simultaneously before hitting request per second limit. By default (0) equals to `s3_max_get_rps` \N \N 0 UInt64 0 0 Production | |
s3_max_put_rps 0 0 Limit on S3 PUT request per second rate before throttling. Zero means unlimited. \N \N 0 UInt64 0 0 Production | |
s3_max_put_burst 0 0 Max number of requests that can be issued simultaneously before hitting request per second limit. By default (0) equals to `s3_max_put_rps` \N \N 0 UInt64 0 0 Production | |
s3_list_object_keys_size 1000 0 Maximum number of files that could be returned in batch by ListObject request \N \N 0 UInt64 1000 0 Production | |
s3_use_adaptive_timeouts 1 0 When set to `true`, the first two attempts of every S3 request are made with low send and receive timeouts.\nWhen set to `false`, all attempts are made with identical timeouts. \N \N 0 Bool 1 0 Production | |
azure_list_object_keys_size 1000 0 Maximum number of files that could be returned in batch by ListObject request \N \N 0 UInt64 1000 0 Production | |
s3_truncate_on_insert 0 0 Enables or disables truncation before inserts in s3 engine tables. If disabled, an exception will be thrown on insert attempts if an S3 object already exists.\n\nPossible values:\n- 0 — `INSERT` query creates a new file or fails if the file exists and s3_create_new_file_on_insert is not set.\n- 1 — `INSERT` query replaces existing content of the file with the new data.\n\nSee more details [here](/integrations/s3#inserting-data). \N \N 0 Bool 0 0 Production | |
azure_truncate_on_insert 0 0 Enables or disables truncate before insert in azure engine tables. \N \N 0 Bool 0 0 Production | |
s3_create_new_file_on_insert 0 0 Enables or disables creating a new file on each insert in s3 engine tables. If enabled, on each insert a new S3 object will be created with a key similar to this pattern:\n\ninitial: `data.Parquet.gz` -> `data.1.Parquet.gz` -> `data.2.Parquet.gz`, etc.\n\nPossible values:\n- 0 — `INSERT` query creates a new file or fails if the file exists and s3_truncate_on_insert is not set.\n- 1 — `INSERT` query creates a new file on each insert using a suffix (from the second one) if s3_truncate_on_insert is not set.\n\nSee more details [here](/integrations/s3#inserting-data). \N \N 0 Bool 0 0 Production | |
s3_skip_empty_files 1 0 Enables or disables skipping empty files in [S3](../../engines/table-engines/integrations/s3.md) engine tables.\n\nPossible values:\n- 0 — `SELECT` throws an exception if empty file is not compatible with requested format.\n- 1 — `SELECT` returns empty result for empty file. \N \N 0 Bool 1 0 Production | |
azure_create_new_file_on_insert 0 0 Enables or disables creating a new file on each insert in azure engine tables \N \N 0 Bool 0 0 Production | |
s3_check_objects_after_upload 0 0 Check each uploaded object to s3 with head request to be sure that upload was successful \N \N 0 Bool 0 0 Production | |
azure_check_objects_after_upload 0 0 Check each uploaded object in azure blob storage to be sure that upload was successful \N \N 0 Bool 0 0 Production | |
s3_allow_parallel_part_upload 1 0 Use multiple threads for s3 multipart upload. It may lead to slightly higher memory usage \N \N 0 Bool 1 0 Production | |
azure_allow_parallel_part_upload 1 0 Use multiple threads for azure multipart upload. \N \N 0 Bool 1 0 Production | |
s3_throw_on_zero_files_match 0 0 Throw an error when the ListObjects request cannot match any files \N \N 0 Bool 0 0 Production | |
hdfs_throw_on_zero_files_match 0 0 Throw an error if matched zero files according to glob expansion rules.\n\nPossible values:\n- 1 — `SELECT` throws an exception.\n- 0 — `SELECT` returns empty result. \N \N 0 Bool 0 0 Production | |
azure_throw_on_zero_files_match 0 0 Throw an error if matched zero files according to glob expansion rules.\n\nPossible values:\n- 1 — `SELECT` throws an exception.\n- 0 — `SELECT` returns empty result. \N \N 0 Bool 0 0 Production | |
s3_ignore_file_doesnt_exist 0 0 Ignore absence of file if it does not exist when reading certain keys.\n\nPossible values:\n- 1 — `SELECT` returns empty result.\n- 0 — `SELECT` throws an exception. \N \N 0 Bool 0 0 Production | |
hdfs_ignore_file_doesnt_exist 0 0 Ignore absence of file if it does not exist when reading certain keys.\n\nPossible values:\n- 1 — `SELECT` returns empty result.\n- 0 — `SELECT` throws an exception. \N \N 0 Bool 0 0 Production | |
azure_ignore_file_doesnt_exist 0 0 Ignore absence of file if it does not exist when reading certain keys.\n\nPossible values:\n- 1 — `SELECT` returns empty result.\n- 0 — `SELECT` throws an exception. \N \N 0 Bool 0 0 Production | |
azure_sdk_max_retries 10 0 Maximum number of retries in azure sdk \N \N 0 UInt64 10 0 Production | |
azure_sdk_retry_initial_backoff_ms 10 0 Minimal backoff between retries in azure sdk \N \N 0 UInt64 10 0 Production | |
azure_sdk_retry_max_backoff_ms 1000 0 Maximal backoff between retries in azure sdk \N \N 0 UInt64 1000 0 Production | |
s3_validate_request_settings 1 0 Enables s3 request settings validation.\n\nPossible values:\n- 1 — validate settings.\n- 0 — do not validate settings. \N \N 0 Bool 1 0 Production | |
s3_disable_checksum 0 0 Do not calculate a checksum when sending a file to S3. This speeds up writes by avoiding excessive processing passes on a file. It is mostly safe as the data of MergeTree tables is checksummed by ClickHouse anyway, and when S3 is accessed with HTTPS, the TLS layer already provides integrity while transferring through the network. Additional checksums on S3 do, however, provide defense in depth. \N \N 0 Bool 0 0 Production | |
s3_retry_attempts 100 0 Setting for Aws::Client::RetryStrategy; Aws::Client does retries itself. 0 means no retries \N \N 0 UInt64 100 0 Production | |
s3_request_timeout_ms 30000 0 Idleness timeout for sending and receiving data to/from S3. Fail if a single TCP read or write call blocks for this long. \N \N 0 UInt64 30000 0 Production | |
s3_connect_timeout_ms 1000 0 Connection timeout for host from s3 disks. \N \N 0 UInt64 1000 0 Production | |
enable_s3_requests_logging 0 0 Enable very explicit logging of S3 requests. Makes sense for debug only. \N \N 0 Bool 0 0 Production | |
s3queue_default_zookeeper_path /clickhouse/s3queue/ 0 Default zookeeper path prefix for S3Queue engine \N \N 0 String /clickhouse/s3queue/ 0 Production | |
s3queue_migrate_old_metadata_to_buckets 0 0 Migrate old metadata structure of S3Queue table to a new one \N \N 0 Bool 0 0 Production | |
s3queue_enable_logging_to_s3queue_log 0 0 Enable writing to system.s3queue_log. The value can be overwritten per table with table settings \N \N 0 Bool 0 0 Production | |
hdfs_replication 0 0 The actual number of replications can be specified when the hdfs file is created. \N \N 0 UInt64 0 0 Production | |
hdfs_truncate_on_insert 0 0 Enables or disables truncation before an insert in hdfs engine tables. If disabled, an exception will be thrown on an attempt to insert if a file in HDFS already exists.\n\nPossible values:\n- 0 — `INSERT` query appends new data to the end of the file.\n- 1 — `INSERT` query replaces existing content of the file with the new data. \N \N 0 Bool 0 0 Production | |
hdfs_create_new_file_on_insert 0 0 Enables or disables creating a new file on each insert in HDFS engine tables. If enabled, on each insert a new HDFS file will be created with the name, similar to this pattern:\n\ninitial: `data.Parquet.gz` -> `data.1.Parquet.gz` -> `data.2.Parquet.gz`, etc.\n\nPossible values:\n- 0 — `INSERT` query appends new data to the end of the file.\n- 1 — `INSERT` query creates a new file. \N \N 0 Bool 0 0 Production | |
hdfs_skip_empty_files 0 0 Enables or disables skipping empty files in [HDFS](../../engines/table-engines/integrations/hdfs.md) engine tables.\n\nPossible values:\n- 0 — `SELECT` throws an exception if empty file is not compatible with requested format.\n- 1 — `SELECT` returns empty result for empty file. \N \N 0 Bool 0 0 Production | |
enable_hdfs_pread 1 0 Enables or disables pread for HDFS files. By default, `hdfsPread` is used. If disabled, `hdfsRead` and `hdfsSeek` will be used to read hdfs files. \N \N 0 Bool 1 0 Production | |
azure_skip_empty_files 0 0 Enables or disables skipping empty files in the Azure engine.\n\nPossible values:\n- 0 — `SELECT` throws an exception if empty file is not compatible with requested format.\n- 1 — `SELECT` returns empty result for empty file. \N \N 0 Bool 0 0 Production | |
hsts_max_age 0 0 Expiration time for HSTS. 0 means HSTS is disabled. \N \N 0 UInt64 0 0 Production | |
extremes 0 0 Whether to count extreme values (the minimums and maximums in columns of a query result). Accepts 0 or 1. By default, 0 (disabled).\nFor more information, see the section "Extreme values". \N \N 0 Bool 0 0 Production | |
use_uncompressed_cache 0 0 Whether to use a cache of uncompressed blocks. Accepts 0 or 1. By default, 0 (disabled).\nUsing the uncompressed cache (only for tables in the MergeTree family) can significantly reduce latency and increase throughput when working with a large number of short queries. Enable this setting for users who send frequent short requests. Also pay attention to the [uncompressed_cache_size](/operations/server-configuration-parameters/settings#uncompressed_cache_size) configuration parameter (only set in the config file) – the size of uncompressed cache blocks. By default, it is 8 GiB. The uncompressed cache is filled in as needed and the least-used data is automatically deleted.\n\nFor queries that read at least a somewhat large volume of data (one million rows or more), the uncompressed cache is disabled automatically to save space for truly small queries. This means that you can keep the \'use_uncompressed_cache\' setting always set to 1. \N \N 0 Bool 0 0 Production | |
replace_running_query 0 0 When using the HTTP interface, the \'query_id\' parameter can be passed. This is any string that serves as the query identifier.\nIf a query from the same user with the same \'query_id\' already exists at this time, the behaviour depends on the \'replace_running_query\' parameter.\n\n`0` (default) – Throw an exception (do not allow the query to run if a query with the same \'query_id\' is already running).\n\n`1` – Cancel the old query and start running the new one.\n\nSet this parameter to 1 for implementing suggestions for segmentation conditions. After entering the next character, if the old query hasn\'t finished yet, it should be cancelled. \N \N 0 Bool 0 0 Production | |
max_remote_read_network_bandwidth 0 0 The maximum speed of data exchange over the network in bytes per second for read. \N \N 0 UInt64 0 0 Production | |
max_remote_write_network_bandwidth 0 0 The maximum speed of data exchange over the network in bytes per second for write. \N \N 0 UInt64 0 0 Production | |
max_local_read_bandwidth 0 0 The maximum speed of local reads in bytes per second. \N \N 0 UInt64 0 0 Production | |
max_local_write_bandwidth 0 0 The maximum speed of local writes in bytes per second. \N \N 0 UInt64 0 0 Production | |
stream_like_engine_allow_direct_select 0 0 Allow direct SELECT query for Kafka, RabbitMQ, FileLog, Redis Streams, and NATS engines. In case there are attached materialized views, SELECT query is not allowed even if this setting is enabled. \N \N 0 Bool 0 0 Production | |
stream_like_engine_insert_queue 0 When stream-like engine reads from multiple queues, the user will need to select one queue to insert into when writing. Used by Redis Streams and NATS. \N \N 0 String 0 Production | |
dictionary_validate_primary_key_type 0 0 Validate primary key type for dictionaries. By default id type for simple layouts will be implicitly converted to UInt64. \N \N 0 Bool 0 0 Production | |
distributed_insert_skip_read_only_replicas 0 0 Enables skipping read-only replicas for INSERT queries into Distributed.\n\nPossible values:\n\n- 0 — INSERT works as usual; if it goes to a read-only replica, it will fail.\n- 1 — The initiator will skip read-only replicas before sending data to shards. \N \N 0 Bool 0 0 Production | |
distributed_foreground_insert 0 0 Enables or disables synchronous data insertion into a [Distributed](/engines/table-engines/special/distributed) table.\n\nBy default, when inserting data into a `Distributed` table, the ClickHouse server sends data to cluster nodes in background mode. When `distributed_foreground_insert=1`, the data is processed synchronously, and the `INSERT` operation succeeds only after all the data is saved on all shards (at least one replica for each shard if `internal_replication` is true).\n\nPossible values:\n\n- 0 — Data is inserted in background mode.\n- 1 — Data is inserted in synchronous mode.\n\nCloud default value: `1`.\n\n**See Also**\n\n- [Distributed Table Engine](/engines/table-engines/special/distributed)\n- [Managing Distributed Tables](/sql-reference/statements/system#managing-distributed-tables) \N \N 0 Bool 0 0 Production | |
insert_distributed_sync 0 0 Enables or disables synchronous data insertion into a [Distributed](/engines/table-engines/special/distributed) table.\n\nBy default, when inserting data into a `Distributed` table, the ClickHouse server sends data to cluster nodes in background mode. When `distributed_foreground_insert=1`, the data is processed synchronously, and the `INSERT` operation succeeds only after all the data is saved on all shards (at least one replica for each shard if `internal_replication` is true).\n\nPossible values:\n\n- 0 — Data is inserted in background mode.\n- 1 — Data is inserted in synchronous mode.\n\nCloud default value: `1`.\n\n**See Also**\n\n- [Distributed Table Engine](/engines/table-engines/special/distributed)\n- [Managing Distributed Tables](/sql-reference/statements/system#managing-distributed-tables) \N \N 0 Bool 0 distributed_foreground_insert 0 Production | |
distributed_background_insert_timeout 0 0 Timeout for insert query into distributed. Setting is used only with insert_distributed_sync enabled. Zero value means no timeout. \N \N 0 UInt64 0 0 Production | |
insert_distributed_timeout 0 0 Timeout for insert query into distributed. Setting is used only with insert_distributed_sync enabled. Zero value means no timeout. \N \N 0 UInt64 0 distributed_background_insert_timeout 0 Production | |
distributed_background_insert_sleep_time_ms 100 0 Base interval for the [Distributed](../../engines/table-engines/special/distributed.md) table engine to send data. The actual interval grows exponentially in the event of errors.\n\nPossible values:\n\n- A positive integer number of milliseconds. \N \N 0 Milliseconds 100 0 Production | |
distributed_directory_monitor_sleep_time_ms 100 0 Base interval for the [Distributed](../../engines/table-engines/special/distributed.md) table engine to send data. The actual interval grows exponentially in the event of errors.\n\nPossible values:\n\n- A positive integer number of milliseconds. \N \N 0 Milliseconds 100 distributed_background_insert_sleep_time_ms 0 Production | |
distributed_background_insert_max_sleep_time_ms 30000 0 Maximum interval for the [Distributed](../../engines/table-engines/special/distributed.md) table engine to send data. Limits exponential growth of the interval set in the [distributed_background_insert_sleep_time_ms](#distributed_background_insert_sleep_time_ms) setting.\n\nPossible values:\n\n- A positive integer number of milliseconds. \N \N 0 Milliseconds 30000 0 Production | |
distributed_directory_monitor_max_sleep_time_ms 30000 0 Maximum interval for the [Distributed](../../engines/table-engines/special/distributed.md) table engine to send data. Limits exponential growth of the interval set in the [distributed_background_insert_sleep_time_ms](#distributed_background_insert_sleep_time_ms) setting.\n\nPossible values:\n\n- A positive integer number of milliseconds. \N \N 0 Milliseconds 30000 distributed_background_insert_max_sleep_time_ms 0 Production | |
distributed_background_insert_batch 0 0 Enables/disables inserted data sending in batches.\n\nWhen batch sending is enabled, the [Distributed](../../engines/table-engines/special/distributed.md) table engine tries to send multiple files of inserted data in one operation instead of sending them separately. Batch sending improves cluster performance by better-utilizing server and network resources.\n\nPossible values:\n\n- 1 — Enabled.\n- 0 — Disabled. \N \N 0 Bool 0 0 Production | |
distributed_directory_monitor_batch_inserts 0 0 Enables/disables inserted data sending in batches.\n\nWhen batch sending is enabled, the [Distributed](../../engines/table-engines/special/distributed.md) table engine tries to send multiple files of inserted data in one operation instead of sending them separately. Batch sending improves cluster performance by better-utilizing server and network resources.\n\nPossible values:\n\n- 1 — Enabled.\n- 0 — Disabled. \N \N 0 Bool 0 distributed_background_insert_batch 0 Production | |
distributed_background_insert_split_batch_on_failure 0 0 Enables/disables splitting batches on failures.\n\nSometimes sending a particular batch to the remote shard may fail because of some complex pipeline afterwards (e.g. `MATERIALIZED VIEW` with `GROUP BY`) due to `Memory limit exceeded` or similar errors. In this case, retrying will not help (and this will get distributed sends for the table stuck), but sending the files from that batch one by one may allow the INSERT to succeed.\n\nSo setting this to `1` will disable batching for such batches (i.e. it temporarily disables `distributed_background_insert_batch` for failed batches).\n\nPossible values:\n\n- 1 — Enabled.\n- 0 — Disabled.\n\n:::note\nThis setting also affects broken batches (that may appear because of abnormal server (machine) termination and no `fsync_after_insert`/`fsync_directories` for [Distributed](../../engines/table-engines/special/distributed.md) table engine).\n:::\n\n:::note\nYou should not rely on automatic batch splitting, since this may hurt performance.\n::: \N \N 0 Bool 0 0 Production | |
distributed_directory_monitor_split_batch_on_failure 0 0 Enables/disables splitting batches on failures.\n\nSometimes sending a particular batch to the remote shard may fail because of some complex pipeline afterwards (e.g. `MATERIALIZED VIEW` with `GROUP BY`) due to `Memory limit exceeded` or similar errors. In this case, retrying will not help (and this will get distributed sends for the table stuck), but sending the files from that batch one by one may allow the INSERT to succeed.\n\nSo setting this to `1` will disable batching for such batches (i.e. it temporarily disables `distributed_background_insert_batch` for failed batches).\n\nPossible values:\n\n- 1 — Enabled.\n- 0 — Disabled.\n\n:::note\nThis setting also affects broken batches (that may appear because of abnormal server (machine) termination and no `fsync_after_insert`/`fsync_directories` for [Distributed](../../engines/table-engines/special/distributed.md) table engine).\n:::\n\n:::note\nYou should not rely on automatic batch splitting, since this may hurt performance.\n::: \N \N 0 Bool 0 distributed_background_insert_split_batch_on_failure 0 Production | |
optimize_move_to_prewhere 1 0 Enables or disables automatic [PREWHERE](../../sql-reference/statements/select/prewhere.md) optimization in [SELECT](../../sql-reference/statements/select/index.md) queries.\n\nWorks only for [*MergeTree](../../engines/table-engines/mergetree-family/index.md) tables.\n\nPossible values:\n\n- 0 — Automatic `PREWHERE` optimization is disabled.\n- 1 — Automatic `PREWHERE` optimization is enabled. \N \N 0 Bool 1 0 Production | |
optimize_move_to_prewhere_if_final 0 0 Enables or disables automatic [PREWHERE](../../sql-reference/statements/select/prewhere.md) optimization in [SELECT](../../sql-reference/statements/select/index.md) queries with [FINAL](/sql-reference/statements/select/from#final-modifier) modifier.\n\nWorks only for [*MergeTree](../../engines/table-engines/mergetree-family/index.md) tables.\n\nPossible values:\n\n- 0 — Automatic `PREWHERE` optimization in `SELECT` queries with `FINAL` modifier is disabled.\n- 1 — Automatic `PREWHERE` optimization in `SELECT` queries with `FINAL` modifier is enabled.\n\n**See Also**\n\n- [optimize_move_to_prewhere](#optimize_move_to_prewhere) setting \N \N 0 Bool 0 0 Production | |
move_all_conditions_to_prewhere 1 0 Move all viable conditions from WHERE to PREWHERE \N \N 0 Bool 1 0 Production | |
enable_multiple_prewhere_read_steps 1 0 Move more conditions from WHERE to PREWHERE and do reads from disk and filtering in multiple steps if there are multiple conditions combined with AND \N \N 0 Bool 1 0 Production | |
move_primary_key_columns_to_end_of_prewhere 1 0 Move PREWHERE conditions containing primary key columns to the end of AND chain. It is likely that these conditions are taken into account during primary key analysis and thus will not contribute a lot to PREWHERE filtering. \N \N 0 Bool 1 0 Production | |
allow_reorder_prewhere_conditions 1 0 When moving conditions from WHERE to PREWHERE, allow reordering them to optimize filtering \N \N 0 Bool 1 0 Production | |
alter_sync 1 0 Allows setting up waiting for actions to be executed on replicas by [ALTER](../../sql-reference/statements/alter/index.md), [OPTIMIZE](../../sql-reference/statements/optimize.md) or [TRUNCATE](../../sql-reference/statements/truncate.md) queries.\n\nPossible values:\n\n- 0 — Do not wait.\n- 1 — Wait for own execution.\n- 2 — Wait for everyone.\n\nCloud default value: `0`.\n\n:::note\n`alter_sync` is applicable only to `Replicated` tables; it does nothing for ALTERs of non-`Replicated` tables.\n::: \N \N 0 UInt64 1 0 Production | |
replication_alter_partitions_sync 1 0 Allows setting up waiting for actions to be executed on replicas by [ALTER](../../sql-reference/statements/alter/index.md), [OPTIMIZE](../../sql-reference/statements/optimize.md) or [TRUNCATE](../../sql-reference/statements/truncate.md) queries.\n\nPossible values:\n\n- 0 — Do not wait.\n- 1 — Wait for own execution.\n- 2 — Wait for everyone.\n\nCloud default value: `0`.\n\n:::note\n`alter_sync` is applicable only to `Replicated` tables; it does nothing for ALTERs of non-`Replicated` tables.\n::: \N \N 0 UInt64 1 alter_sync 0 Production | |
replication_wait_for_inactive_replica_timeout 120 0 Specifies how long (in seconds) to wait for inactive replicas to execute [ALTER](../../sql-reference/statements/alter/index.md), [OPTIMIZE](../../sql-reference/statements/optimize.md) or [TRUNCATE](../../sql-reference/statements/truncate.md) queries.\n\nPossible values:\n\n- 0 — Do not wait.\n- Negative integer — Wait for unlimited time.\n- Positive integer — The number of seconds to wait. \N \N 0 Int64 120 0 Production | |
alter_move_to_space_execute_async 0 0 Execute ALTER TABLE MOVE ... TO [DISK|VOLUME] asynchronously \N \N 0 Bool 0 0 Production | |
load_balancing random 0 Specifies the algorithm of replicas selection that is used for distributed query processing.\n\nClickHouse supports the following algorithms of choosing replicas:\n\n- [Random](#load_balancing-random) (by default)\n- [Nearest hostname](#load_balancing-nearest_hostname)\n- [Hostname levenshtein distance](#load_balancing-hostname_levenshtein_distance)\n- [In order](#load_balancing-in_order)\n- [First or random](#load_balancing-first_or_random)\n- [Round robin](#load_balancing-round_robin)\n\nSee also:\n\n- [distributed_replica_max_ignored_errors](#distributed_replica_max_ignored_errors)\n\n### Random (by Default) {#load_balancing-random}\n\n```sql\nload_balancing = random\n```\n\nThe number of errors is counted for each replica. The query is sent to the replica with the fewest errors, and if there are several of these, to any one of them.\nDisadvantages: Server proximity is not accounted for; if the replicas have different data, you will also get different data.\n\n### Nearest Hostname {#load_balancing-nearest_hostname}\n\n```sql\nload_balancing = nearest_hostname\n```\n\nThe number of errors is counted for each replica. Every 5 minutes, the number of errors is integrally divided by 2. Thus, the number of errors is calculated for a recent time with exponential smoothing. If there is one replica with a minimal number of errors (i.e. errors occurred recently on the other replicas), the query is sent to it. If there are multiple replicas with the same minimal number of errors, the query is sent to the replica with a hostname that is most similar to the server\'s hostname in the config file (for the number of different characters in identical positions, up to the minimum length of both hostnames).\n\nFor instance, example01-01-1 and example01-01-2 are different in one position, while example01-01-1 and example01-02-2 differ in two places.\nThis method might seem primitive, but it does not require external data about network topology, and it does not compare IP addresses, which would be complicated for our IPv6 addresses.\n\nThus, if there are equivalent replicas, the closest one by name is preferred.\nWe can also assume that when sending a query to the same server, in the absence of failures, a distributed query will also go to the same servers. So even if different data is placed on the replicas, the query will return mostly the same results.\n\n### Hostname levenshtein distance {#load_balancing-hostname_levenshtein_distance}\n\n```sql\nload_balancing = hostname_levenshtein_distance\n```\n\nJust like `nearest_hostname`, but it compares hostnames in a [levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) manner. For example:\n\n```text\nexample-clickhouse-0-0 ample-clickhouse-0-0\n1\n\nexample-clickhouse-0-0 example-clickhouse-1-10\n2\n\nexample-clickhouse-0-0 example-clickhouse-12-0\n3\n```\n\n### In Order {#load_balancing-in_order}\n\n```sql\nload_balancing = in_order\n```\n\nReplicas with the same number of errors are accessed in the same order as they are specified in the configuration.\nThis method is appropriate when you know exactly which replica is preferable.\n\n### First or Random {#load_balancing-first_or_random}\n\n```sql\nload_balancing = first_or_random\n```\n\nThis algorithm chooses the first replica in the set or a random replica if the first is unavailable. It\'s effective in cross-replication topology setups, but useless in other configurations.\n\nThe `first_or_random` algorithm solves the problem of the `in_order` algorithm. With `in_order`, if one replica goes down, the next one gets a double load while the remaining replicas handle the usual amount of traffic. When using the `first_or_random` algorithm, the load is evenly distributed among replicas that are still available.\n\nIt\'s possible to explicitly define what the first replica is by using the setting `load_balancing_first_offset`. This gives more control to rebalance query workloads among replicas.\n\n### Round Robin {#load_balancing-round_robin}\n\n```sql\nload_balancing = round_robin\n```\n\nThis algorithm uses a round-robin policy across replicas with the same number of errors (only the queries with `round_robin` policy are accounted). \N \N 0 LoadBalancing random 0 Production | |
load_balancing_first_offset 0 0 Which replica to preferably send a query to when the FIRST_OR_RANDOM load balancing strategy is used. \N \N 0 UInt64 0 0 Production | |
totals_mode after_having_exclusive 0 How to calculate TOTALS when HAVING is present, as well as when max_rows_to_group_by and group_by_overflow_mode = \'any\' are present.\nSee the section "WITH TOTALS modifier". \N \N 0 TotalsMode after_having_exclusive 0 Production | |
totals_auto_threshold 0.5 0 The threshold for `totals_mode = \'auto\'`.\nSee the section "WITH TOTALS modifier". \N \N 0 Float 0.5 0 Production | |
allow_suspicious_low_cardinality_types 0 0 Allows or restricts using [LowCardinality](../../sql-reference/data-types/lowcardinality.md) with data types with fixed size of 8 bytes or less: numeric data types and `FixedString(8_bytes_or_less)`.\n\nFor small fixed values, using `LowCardinality` is usually inefficient, because ClickHouse stores a numeric index for each row. As a result:\n\n- Disk space usage can rise.\n- RAM consumption can be higher, depending on a dictionary size.\n- Some functions can work slower due to extra coding/encoding operations.\n\nMerge times in [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md)-engine tables can grow due to all the reasons described above.\n\nPossible values:\n\n- 1 — Usage of `LowCardinality` is not restricted.\n- 0 — Usage of `LowCardinality` is restricted. \N \N 0 Bool 0 0 Production | |
allow_suspicious_fixed_string_types 0 0 In CREATE TABLE statement allows creating columns of type FixedString(n) with n > 256. FixedString with length >= 256 is suspicious and most likely indicates a misuse \N \N 0 Bool 0 0 Production | |
allow_suspicious_indices 0 0 Reject primary/secondary indexes and sorting keys with identical expressions \N \N 0 Bool 0 0 Production | |
allow_suspicious_ttl_expressions 0 0 Reject TTL expressions that don\'t depend on any of the table\'s columns. It indicates a user error most of the time. \N \N 0 Bool 0 0 Production | |
allow_suspicious_variant_types 0 0 In CREATE TABLE statement allows specifying Variant type with similar variant types (for example, with different numeric or date types). Enabling this setting may introduce some ambiguity when working with values with similar types. \N \N 0 Bool 0 0 Production | |
allow_suspicious_primary_key 0 0 Allow suspicious `PRIMARY KEY`/`ORDER BY` for MergeTree (i.e. SimpleAggregateFunction). \N \N 0 Bool 0 0 Production | |
allow_suspicious_types_in_group_by 0 0 Allows or restricts using [Variant](../../sql-reference/data-types/variant.md) and [Dynamic](../../sql-reference/data-types/dynamic.md) types in GROUP BY keys. \N \N 0 Bool 0 0 Production | |
allow_suspicious_types_in_order_by 0 0 Allows or restricts using [Variant](../../sql-reference/data-types/variant.md) and [Dynamic](../../sql-reference/data-types/dynamic.md) types in ORDER BY keys. \N \N 0 Bool 0 0 Production | |
allow_not_comparable_types_in_order_by 0 0 Allows or restricts using not comparable types (like JSON/Object/AggregateFunction) in ORDER BY keys. \N \N 0 Bool 0 0 Production | |
allow_not_comparable_types_in_comparison_functions 0 0 Allows or restricts using not comparable types (like JSON/Object/AggregateFunction) in comparison functions `equal/less/greater/etc`. \N \N 0 Bool 0 0 Production | |
compile_expressions 0 0 Compile some scalar functions and operators to native code. Due to a bug in the LLVM compiler infrastructure, on AArch64 machines, it is known to lead to a nullptr dereference and, consequently, server crash. Do not enable this setting. \N \N 0 Bool 0 0 Production | |
min_count_to_compile_expression 3 0 Minimum count of executions of the same expression before it gets compiled. \N \N 0 UInt64 3 0 Production | |
compile_aggregate_expressions 1 0 Enables or disables JIT-compilation of aggregate functions to native code. Enabling this setting can improve the performance.\n\nPossible values:\n\n- 0 — Aggregation is done without JIT compilation.\n- 1 — Aggregation is done using JIT compilation.\n\n**See Also**\n\n- [min_count_to_compile_aggregate_expression](#min_count_to_compile_aggregate_expression) \N \N 0 Bool 1 0 Production | |
min_count_to_compile_aggregate_expression 3 0 The minimum number of identical aggregate expressions to start JIT-compilation. Works only if the [compile_aggregate_expressions](#compile_aggregate_expressions) setting is enabled.\n\nPossible values:\n\n- Positive integer.\n- 0 — Identical aggregate expressions are always JIT-compiled. \N \N 0 UInt64 3 0 Production | |
compile_sort_description 1 0 Compile sort description to native code. \N \N 0 Bool 1 0 Production | |
min_count_to_compile_sort_description 3 0 The number of identical sort descriptions before they are JIT-compiled \N \N 0 UInt64 3 0 Production | |
group_by_two_level_threshold 100000 0 The number of keys from which two-level aggregation starts. 0 - the threshold is not set. \N \N 0 UInt64 100000 0 Production | |
group_by_two_level_threshold_bytes 50000000 0 The size of the aggregation state in bytes from which two-level aggregation begins to be used. 0 - the threshold is not set. Two-level aggregation is used when at least one of the thresholds is triggered. \N \N 0 UInt64 50000000 0 Production | |
distributed_aggregation_memory_efficient 1 0 Whether the memory-saving mode of distributed aggregation is enabled. \N \N 0 Bool 1 0 Production | |
aggregation_memory_efficient_merge_threads 0 0 Number of threads to use for merging intermediate aggregation results in memory-efficient mode. The larger it is, the more memory is consumed. 0 means the same as \'max_threads\'. \N \N 0 UInt64 0 0 Production | |
enable_memory_bound_merging_of_aggregation_results 1 0 Enable memory bound merging strategy for aggregation. \N \N 0 Bool 1 0 Production | |
enable_positional_arguments 1 0 Enables or disables supporting positional arguments for [GROUP BY](/sql-reference/statements/select/group-by), [LIMIT BY](../../sql-reference/statements/select/limit-by.md), [ORDER BY](../../sql-reference/statements/select/order-by.md) statements.\n\nPossible values:\n\n- 0 — Positional arguments aren\'t supported.\n- 1 — Positional arguments are supported: column numbers can be used instead of column names.\n\n**Example**\n\nQuery:\n\n```sql\nCREATE TABLE positional_arguments(one Int, two Int, three Int) ENGINE=Memory();\n\nINSERT INTO positional_arguments VALUES (10, 20, 30), (20, 20, 10), (30, 10, 20);\n\nSELECT * FROM positional_arguments ORDER BY 2,3;\n```\n\nResult:\n\n```text\n┌─one─┬─two─┬─three─┐\n│ 30 │ 10 │ 20 │\n│ 20 │ 20 │ 10 │\n│ 10 │ 20 │ 30 │\n└─────┴─────┴───────┘\n``` \N \N 0 Bool 1 0 Production | |
enable_extended_results_for_datetime_functions 0 0 Enables or disables returning results of type:\n- `Date32` with extended range (compared to type `Date`) for functions [toStartOfYear](../../sql-reference/functions/date-time-functions.md/#tostartofyear), [toStartOfISOYear](../../sql-reference/functions/date-time-functions.md/#tostartofisoyear), [toStartOfQuarter](../../sql-reference/functions/date-time-functions.md/#tostartofquarter), [toStartOfMonth](../../sql-reference/functions/date-time-functions.md/#tostartofmonth), [toLastDayOfMonth](../../sql-reference/functions/date-time-functions.md/#tolastdayofmonth), [toStartOfWeek](../../sql-reference/functions/date-time-functions.md/#tostartofweek), [toLastDayOfWeek](../../sql-reference/functions/date-time-functions.md/#tolastdayofweek) and [toMonday](../../sql-reference/functions/date-time-functions.md/#tomonday).\n- `DateTime64` with extended range (compared to type `DateTime`) for functions [toStartOfDay](../../sql-reference/functions/date-time-functions.md/#tostartofday), [toStartOfHour](../../sql-reference/functions/date-time-functions.md/#tostartofhour), [toStartOfMinute](../../sql-reference/functions/date-time-functions.md/#tostartofminute), [toStartOfFiveMinutes](../../sql-reference/functions/date-time-functions.md/#tostartoffiveminutes), [toStartOfTenMinutes](../../sql-reference/functions/date-time-functions.md/#tostartoftenminutes), [toStartOfFifteenMinutes](../../sql-reference/functions/date-time-functions.md/#tostartoffifteenminutes) and [timeSlot](../../sql-reference/functions/date-time-functions.md/#timeslot).\n\nPossible values:\n\n- 0 — Functions return `Date` or `DateTime` for all types of arguments.\n- 1 — Functions return `Date32` or `DateTime64` for `Date32` or `DateTime64` arguments and `Date` or `DateTime` otherwise. \N \N 0 Bool 0 0 Production | |
allow_nonconst_timezone_arguments 0 0 Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*() \N \N 0 Bool 0 0 Production | |
function_locate_has_mysql_compatible_argument_order 1 0 Controls the order of arguments in function [locate](../../sql-reference/functions/string-search-functions.md/#locate).\n\nPossible values:\n\n- 0 — Function `locate` accepts arguments `(haystack, needle[, start_pos])`.\n- 1 — Function `locate` accepts arguments `(needle, haystack[, start_pos])` (MySQL-compatible behavior). \N \N 0 Bool 1 0 Production | |
group_by_use_nulls 0 0 Changes the way the [GROUP BY clause](/sql-reference/statements/select/group-by) treats the types of aggregation keys.\nWhen the `ROLLUP`, `CUBE`, or `GROUPING SETS` specifiers are used, some aggregation keys may not be used to produce some result rows.\nColumns for these keys are filled with either default value or `NULL` in corresponding rows depending on this setting.\n\nPossible values:\n\n- 0 — The default value for the aggregation key type is used to produce missing values.\n- 1 — ClickHouse executes `GROUP BY` the same way as the SQL standard says. The types of aggregation keys are converted to [Nullable](/sql-reference/data-types/nullable). Columns for corresponding aggregation keys are filled with [NULL](/sql-reference/syntax#null) for rows that didn\'t use it.\n\nSee also:\n\n- [GROUP BY clause](/sql-reference/statements/select/group-by) \N \N 0 Bool 0 0 Production | |
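A minimal sketch of the difference described above for `group_by_use_nulls`, using a hypothetical in-memory table: with the setting enabled, the extra rows produced by `ROLLUP` carry `NULL` in the aggregation key instead of the type's default value.

```sql
-- Hypothetical table; names are illustrative.
CREATE TABLE rollup_demo (category String, amount UInt32) ENGINE = Memory;
INSERT INTO rollup_demo VALUES ('a', 1), ('b', 2);

-- Default behaviour: the total row gets the key type's default ('' for String).
SELECT category, sum(amount) FROM rollup_demo GROUP BY category WITH ROLLUP
SETTINGS group_by_use_nulls = 0;

-- Standard-SQL behaviour: the key becomes Nullable and the total row gets NULL.
SELECT category, sum(amount) FROM rollup_demo GROUP BY category WITH ROLLUP
SETTINGS group_by_use_nulls = 1;
```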
skip_unavailable_shards 0 0 Enables or disables silently skipping of unavailable shards.\n\nShard is considered unavailable if all its replicas are unavailable. A replica is unavailable in the following cases:\n\n- ClickHouse can\'t connect to replica for any reason.\n\n When connecting to a replica, ClickHouse performs several attempts. If all these attempts fail, the replica is considered unavailable.\n\n- Replica can\'t be resolved through DNS.\n\n If replica\'s hostname can\'t be resolved through DNS, it can indicate the following situations:\n\n - Replica\'s host has no DNS record. It can occur in systems with dynamic DNS, for example, [Kubernetes](https://kubernetes.io), where nodes can be unresolvable during downtime, and this is not an error.\n\n - Configuration error. ClickHouse configuration file contains a wrong hostname.\n\nPossible values:\n\n- 1 — skipping enabled.\n\n If a shard is unavailable, ClickHouse returns a result based on partial data and does not report node availability issues.\n\n- 0 — skipping disabled.\n\n If a shard is unavailable, ClickHouse throws an exception. \N \N 0 Bool 0 0 Production | |
parallel_distributed_insert_select 0 0 Enables parallel distributed `INSERT ... SELECT` query.\n\nIf we execute `INSERT INTO distributed_table_a SELECT ... FROM distributed_table_b` queries and both tables use the same cluster, and both tables are either [replicated](../../engines/table-engines/mergetree-family/replication.md) or non-replicated, then this query is processed locally on every shard.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — `SELECT` will be executed on each shard from the underlying table of the distributed engine.\n- 2 — `SELECT` and `INSERT` will be executed on each shard from/to the underlying table of the distributed engine. \N \N 0 UInt64 0 0 Production | |
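A hedged sketch of how `parallel_distributed_insert_select` is typically applied; `dist_a` and `dist_b` are hypothetical Distributed tables over the same cluster.

```sql
-- With value 2, both the SELECT and the INSERT run on each shard against the
-- underlying local tables, avoiding pulling all data through the initiator.
INSERT INTO dist_a
SELECT * FROM dist_b
SETTINGS parallel_distributed_insert_select = 2;
```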
distributed_group_by_no_merge 0 0 Do not merge aggregation states from different servers for distributed query processing. You can use this when it is certain that there are different keys on different shards.\n\nPossible values:\n\n- `0` — Disabled (final query processing is done on the initiator node).\n- `1` - Do not merge aggregation states from different servers for distributed query processing (the query is processed completely on the shard, the initiator only proxies the data); can be used when it is certain that there are different keys on different shards.\n- `2` - Same as `1` but applies `ORDER BY` and `LIMIT` on the initiator (which is not possible when the query is processed completely on the remote node, as with `distributed_group_by_no_merge=1`); can be used for queries with `ORDER BY` and/or `LIMIT`.\n\n**Example**\n\n```sql\nSELECT *\nFROM remote(\'127.0.0.{2,3}\', system.one)\nGROUP BY dummy\nLIMIT 1\nSETTINGS distributed_group_by_no_merge = 1\nFORMAT PrettyCompactMonoBlock\n\n┌─dummy─┐\n│ 0 │\n│ 0 │\n└───────┘\n```\n\n```sql\nSELECT *\nFROM remote(\'127.0.0.{2,3}\', system.one)\nGROUP BY dummy\nLIMIT 1\nSETTINGS distributed_group_by_no_merge = 2\nFORMAT PrettyCompactMonoBlock\n\n┌─dummy─┐\n│ 0 │\n└───────┘\n``` \N \N 0 UInt64 0 0 Production | |
distributed_push_down_limit 1 0 Enables or disables applying [LIMIT](#limit) on each shard separately.\n\nThis allows avoiding:\n- Sending extra rows over the network;\n- Processing rows behind the limit on the initiator.\n\nStarting from version 21.9 you cannot get inaccurate results anymore, since `distributed_push_down_limit` changes query execution only if at least one of the conditions is met:\n- [distributed_group_by_no_merge](#distributed_group_by_no_merge) > 0.\n- Query **does not have** `GROUP BY`/`DISTINCT`/`LIMIT BY`, but it has `ORDER BY`/`LIMIT`.\n- Query **has** `GROUP BY`/`DISTINCT`/`LIMIT BY` with `ORDER BY`/`LIMIT` and:\n - [optimize_skip_unused_shards](#optimize_skip_unused_shards) is enabled.\n - [optimize_distributed_group_by_sharding_key](#optimize_distributed_group_by_sharding_key) is enabled.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\nSee also:\n\n- [distributed_group_by_no_merge](#distributed_group_by_no_merge)\n- [optimize_skip_unused_shards](#optimize_skip_unused_shards)\n- [optimize_distributed_group_by_sharding_key](#optimize_distributed_group_by_sharding_key) \N \N 0 UInt64 1 0 Production | |
optimize_distributed_group_by_sharding_key 1 0 Optimize `GROUP BY sharding_key` queries, by avoiding costly aggregation on the initiator server (which will reduce memory usage for the query on the initiator server).\n\nThe following types of queries are supported (and all combinations of them):\n\n- `SELECT DISTINCT [..., ]sharding_key[, ...] FROM dist`\n- `SELECT ... FROM dist GROUP BY sharding_key[, ...]`\n- `SELECT ... FROM dist GROUP BY sharding_key[, ...] ORDER BY x`\n- `SELECT ... FROM dist GROUP BY sharding_key[, ...] LIMIT 1`\n- `SELECT ... FROM dist GROUP BY sharding_key[, ...] LIMIT 1 BY x`\n\nThe following types of queries are not supported (support for some of them may be added later):\n\n- `SELECT ... GROUP BY sharding_key[, ...] WITH TOTALS`\n- `SELECT ... GROUP BY sharding_key[, ...] WITH ROLLUP`\n- `SELECT ... GROUP BY sharding_key[, ...] WITH CUBE`\n- `SELECT ... GROUP BY sharding_key[, ...] SETTINGS extremes=1`\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\nSee also:\n\n- [distributed_group_by_no_merge](#distributed_group_by_no_merge)\n- [distributed_push_down_limit](#distributed_push_down_limit)\n- [optimize_skip_unused_shards](#optimize_skip_unused_shards)\n\n:::note\nRight now it requires `optimize_skip_unused_shards` (the reason behind this is that one day it may be enabled by default, and it will work correctly only if data was inserted via Distributed table, i.e. data is distributed according to sharding_key).\n::: \N \N 0 Bool 1 0 Production | |
optimize_skip_unused_shards_limit 1000 0 Limit for the number of sharding key values; turns off `optimize_skip_unused_shards` if the limit is reached.\n\nToo many values may require a significant amount of time for processing, while the benefit is doubtful, since if you have a huge number of values in `IN (...)`, then most likely the query will be sent to all shards anyway. \N \N 0 UInt64 1000 0 Production | |
optimize_skip_unused_shards 0 0 Enables or disables skipping of unused shards for [SELECT](../../sql-reference/statements/select/index.md) queries that have sharding key condition in `WHERE/PREWHERE` (assuming that the data is distributed by sharding key, otherwise a query yields incorrect result).\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 0 0 Production | |
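A minimal sketch for `optimize_skip_unused_shards`, assuming a hypothetical Distributed table `dist_events` sharded by `user_id`; with the setting enabled, only the shard that can contain `user_id = 42` receives the query.

```sql
-- dist_events is assumed to be Distributed(cluster, default, events, user_id).
SELECT count()
FROM dist_events
WHERE user_id = 42
SETTINGS optimize_skip_unused_shards = 1;
```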
optimize_skip_unused_shards_rewrite_in 1 0 Rewrite IN in the query for remote shards to exclude values that do not belong to the shard (requires optimize_skip_unused_shards).\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 1 0 Production | |
allow_nondeterministic_optimize_skip_unused_shards 0 0 Allow nondeterministic functions (like `rand` or `dictGet`, since the latter has some caveats with updates) in the sharding key.\n\nPossible values:\n\n- 0 — Disallowed.\n- 1 — Allowed. \N \N 0 Bool 0 0 Production | |
force_optimize_skip_unused_shards 0 0 Enables or disables query execution if [optimize_skip_unused_shards](#optimize_skip_unused_shards) is enabled and skipping of unused shards is not possible. If the skipping is not possible and the setting is enabled, an exception will be thrown.\n\nPossible values:\n\n- 0 — Disabled. ClickHouse does not throw an exception.\n- 1 — Enabled. Query execution is disabled only if the table has a sharding key.\n- 2 — Enabled. Query execution is disabled regardless of whether a sharding key is defined for the table. \N \N 0 UInt64 0 0 Production | |
optimize_skip_unused_shards_nesting 0 0 Controls [`optimize_skip_unused_shards`](#optimize_skip_unused_shards) (hence still requires [`optimize_skip_unused_shards`](#optimize_skip_unused_shards)) depending on the nesting level of the distributed query (the case when you have a `Distributed` table that looks into another `Distributed` table).\n\nPossible values:\n\n- 0 — Disabled, `optimize_skip_unused_shards` always works.\n- 1 — Enables `optimize_skip_unused_shards` only for the first level.\n- 2 — Enables `optimize_skip_unused_shards` up to the second level. \N \N 0 UInt64 0 0 Production | |
force_optimize_skip_unused_shards_nesting 0 0 Controls [`force_optimize_skip_unused_shards`](#force_optimize_skip_unused_shards) (hence still requires [`force_optimize_skip_unused_shards`](#force_optimize_skip_unused_shards)) depending on the nesting level of the distributed query (the case when you have a `Distributed` table that looks into another `Distributed` table).\n\nPossible values:\n\n- 0 — Disabled, `force_optimize_skip_unused_shards` always works.\n- 1 — Enables `force_optimize_skip_unused_shards` only for the first level.\n- 2 — Enables `force_optimize_skip_unused_shards` up to the second level. \N \N 0 UInt64 0 0 Production | |
input_format_parallel_parsing 1 0 Enables or disables order-preserving parallel parsing of data formats. Supported only for [TSV](../../interfaces/formats.md/#tabseparated), [TSKV](../../interfaces/formats.md/#tskv), [CSV](../../interfaces/formats.md/#csv) and [JSONEachRow](../../interfaces/formats.md/#jsoneachrow) formats.\n\nPossible values:\n\n- 1 — Enabled.\n- 0 — Disabled. \N \N 0 Bool 1 0 Production | |
min_chunk_bytes_for_parallel_parsing 10485760 0 - Type: unsigned int\n- Default value: 10 MiB\n\nThe minimum chunk size in bytes that each thread will parse in parallel. \N \N 0 NonZeroUInt64 10485760 0 Production | |
output_format_parallel_formatting 1 0 Enables or disables parallel formatting of data formats. Supported only for [TSV](../../interfaces/formats.md/#tabseparated), [TSKV](../../interfaces/formats.md/#tskv), [CSV](../../interfaces/formats.md/#csv) and [JSONEachRow](../../interfaces/formats.md/#jsoneachrow) formats.\n\nPossible values:\n\n- 1 — Enabled.\n- 0 — Disabled. \N \N 0 Bool 1 0 Production | |
output_format_compression_level 3 0 Default compression level if query output is compressed. The setting is applied when `SELECT` query has `INTO OUTFILE` or when writing to table functions `file`, `url`, `hdfs`, `s3`, or `azureBlobStorage`.\n\nPossible values: from `1` to `22` \N \N 0 UInt64 3 0 Production | |
output_format_compression_zstd_window_log 0 0 Can be used when the output compression method is `zstd`. If greater than `0`, this setting explicitly sets compression window size (power of `2`) and enables a long-range mode for zstd compression. This can help to achieve a better compression ratio.\n\nPossible values: non-negative numbers. Note that if the value is too small or too big, `zstdlib` will throw an exception. Typical values are from `20` (window size = `1MB`) to `30` (window size = `1GB`). \N \N 0 UInt64 0 0 Production | |
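A sketch of using both output-compression settings together from `clickhouse-client`; the file name is illustrative, and the zstd output format is assumed to be picked from the `.zst` extension.

```sql
SET output_format_compression_level = 12;
SET output_format_compression_zstd_window_log = 24;  -- enables long-range zstd mode

SELECT number
FROM system.numbers
LIMIT 1000000
INTO OUTFILE 'numbers.tsv.zst'
FORMAT TSV;
```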
enable_parsing_to_custom_serialization 1 0 If true, data can be parsed directly into columns with custom serialization (e.g. Sparse) according to serialization hints obtained from the table. \N \N 0 Bool 1 0 Production | |
merge_tree_use_v1_object_and_dynamic_serialization 0 0 When enabled, the V1 serialization version of JSON and Dynamic types will be used in MergeTree instead of V2. Changing this setting takes effect only after a server restart. \N \N 0 Bool 0 0 Production | |
merge_tree_min_rows_for_concurrent_read 163840 0 If the number of rows to be read from a file of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `merge_tree_min_rows_for_concurrent_read` then ClickHouse tries to perform a concurrent reading from this file on several threads.\n\nPossible values:\n\n- Positive integer. \N \N 0 UInt64 163840 0 Production | |
merge_tree_min_bytes_for_concurrent_read 251658240 0 If the number of bytes to read from one file of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md)-engine table exceeds `merge_tree_min_bytes_for_concurrent_read`, then ClickHouse tries to concurrently read from this file in several threads.\n\nPossible value:\n\n- Positive integer. \N \N 0 UInt64 251658240 0 Production | |
merge_tree_min_rows_for_seek 0 0 If the distance between two data blocks to be read in one file is less than `merge_tree_min_rows_for_seek` rows, then ClickHouse does not seek through the file but reads the data sequentially.\n\nPossible values:\n\n- Any positive integer. \N \N 0 UInt64 0 0 Production | |
merge_tree_min_bytes_for_seek 0 0 If the distance between two data blocks to be read in one file is less than `merge_tree_min_bytes_for_seek` bytes, then ClickHouse sequentially reads a range of file that contains both blocks, thus avoiding extra seek.\n\nPossible values:\n\n- Any positive integer. \N \N 0 UInt64 0 0 Production | |
merge_tree_coarse_index_granularity 8 0 When searching for data, ClickHouse checks the data marks in the index file. If ClickHouse finds that required keys are in some range, it divides this range into `merge_tree_coarse_index_granularity` subranges and searches the required keys there recursively.\n\nPossible values:\n\n- Any positive even integer. \N \N 0 UInt64 8 0 Production | |
merge_tree_max_rows_to_use_cache 1048576 0 If ClickHouse should read more than `merge_tree_max_rows_to_use_cache` rows in one query, it does not use the cache of uncompressed blocks.\n\nThe cache of uncompressed blocks stores data extracted for queries. ClickHouse uses this cache to speed up responses to repeated small queries. This setting protects the cache from thrashing by queries that read a large amount of data. The [uncompressed_cache_size](/operations/server-configuration-parameters/settings#uncompressed_cache_size) server setting defines the size of the cache of uncompressed blocks.\n\nPossible values:\n\n- Any positive integer. \N \N 0 UInt64 1048576 0 Production | |
merge_tree_max_bytes_to_use_cache 2013265920 0 If ClickHouse should read more than `merge_tree_max_bytes_to_use_cache` bytes in one query, it does not use the cache of uncompressed blocks.\n\nThe cache of uncompressed blocks stores data extracted for queries. ClickHouse uses this cache to speed up responses to repeated small queries. This setting protects the cache from thrashing by queries that read a large amount of data. The [uncompressed_cache_size](/operations/server-configuration-parameters/settings#uncompressed_cache_size) server setting defines the size of the cache of uncompressed blocks.\n\nPossible values:\n\n- Any positive integer. \N \N 0 UInt64 2013265920 0 Production | |
merge_tree_use_deserialization_prefixes_cache 1 0 Enables caching of columns metadata from the file prefixes during reading from Wide parts in MergeTree. \N \N 0 Bool 1 0 Production | |
merge_tree_use_prefixes_deserialization_thread_pool 1 0 Enables usage of the thread pool for parallel prefixes reading in Wide parts in MergeTree. Size of that thread pool is controlled by server setting `max_prefixes_deserialization_thread_pool_size`. \N \N 0 Bool 1 0 Production | |
do_not_merge_across_partitions_select_final 0 0 Merge parts only within one partition in SELECT FINAL. \N \N 0 Bool 0 0 Production | |
split_parts_ranges_into_intersecting_and_non_intersecting_final 1 0 Split parts ranges into intersecting and non intersecting during FINAL optimization \N \N 0 Bool 1 0 Production | |
split_intersecting_parts_ranges_into_layers_final 1 0 Split intersecting parts ranges into layers during FINAL optimization \N \N 0 Bool 1 0 Production | |
mysql_max_rows_to_insert 65536 0 The maximum number of rows in MySQL batch insertion of the MySQL storage engine \N \N 0 UInt64 65536 0 Production | |
mysql_map_string_to_text_in_show_columns 1 0 When enabled, [String](../../sql-reference/data-types/string.md) ClickHouse data type will be displayed as `TEXT` in [SHOW COLUMNS](../../sql-reference/statements/show.md/#show_columns).\n\nHas an effect only when the connection is made through the MySQL wire protocol.\n\n- 0 - Use `BLOB`.\n- 1 - Use `TEXT`. \N \N 0 Bool 1 0 Production | |
mysql_map_fixed_string_to_text_in_show_columns 1 0 When enabled, [FixedString](../../sql-reference/data-types/fixedstring.md) ClickHouse data type will be displayed as `TEXT` in [SHOW COLUMNS](../../sql-reference/statements/show.md/#show_columns).\n\nHas an effect only when the connection is made through the MySQL wire protocol.\n\n- 0 - Use `BLOB`.\n- 1 - Use `TEXT`. \N \N 0 Bool 1 0 Production | |
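Both `mysql_map_*_to_text_in_show_columns` settings only matter for clients connected over the MySQL wire protocol; a hypothetical session might look like the sketch below (database and table names are illustrative).

```sql
-- Connected through the MySQL protocol (e.g. mysql --protocol tcp --port 9004 ...):
SHOW COLUMNS FROM my_table FROM my_db;
-- String and FixedString columns are reported as TEXT when the settings are 1,
-- and as BLOB when they are 0.
```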
optimize_min_equality_disjunction_chain_length 3 0 The minimum length of the expression `expr = x1 OR ... expr = xN` for optimization \N \N 0 UInt64 3 0 Production | |
optimize_min_inequality_conjunction_chain_length 3 0 The minimum length of the expression `expr <> x1 AND ... expr <> xN` for optimization \N \N 0 UInt64 3 0 Production | |
min_bytes_to_use_direct_io 0 0 The minimum data volume required for using direct I/O access to the storage disk.\n\nClickHouse uses this setting when reading data from tables. If the total storage volume of all the data to be read exceeds `min_bytes_to_use_direct_io` bytes, then ClickHouse reads the data from the storage disk with the `O_DIRECT` option.\n\nPossible values:\n\n- 0 — Direct I/O is disabled.\n- Positive integer. \N \N 0 UInt64 0 0 Production | |
min_bytes_to_use_mmap_io 0 0 This is an experimental setting. Sets the minimum amount of memory for reading large files without copying data from the kernel to userspace. The recommended threshold is about 64 MB, because [mmap/munmap](https://en.wikipedia.org/wiki/Mmap) is slow. It makes sense only for large files and helps only if data resides in the page cache.\n\nPossible values:\n\n- Positive integer.\n- 0 — Big files are read only by copying data from the kernel to userspace. \N \N 0 UInt64 0 0 Production | |
checksum_on_read 1 0 Validate checksums on reading. It is enabled by default and should always be enabled in production. Please do not expect any benefits from disabling this setting. It may only be used for experiments and benchmarks. The setting is only applicable for tables of the MergeTree family. Checksums are always validated for other table engines and when receiving data over the network. \N \N 0 Bool 1 0 Production | |
force_index_by_date 0 0 Disables query execution if the index can\'t be used by date.\n\nWorks with tables in the MergeTree family.\n\nIf `force_index_by_date=1`, ClickHouse checks whether the query has a date key condition that can be used for restricting data ranges. If there is no suitable condition, it throws an exception. However, it does not check whether the condition reduces the amount of data to read. For example, the condition `Date != \' 2000-01-01 \'` is acceptable even when it matches all the data in the table (i.e., running the query requires a full scan). For more information about ranges of data in MergeTree tables, see [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). \N \N 0 Bool 0 0 Production | |
force_primary_key 0 0 Disables query execution if indexing by the primary key is not possible.\n\nWorks with tables in the MergeTree family.\n\nIf `force_primary_key=1`, ClickHouse checks to see if the query has a primary key condition that can be used for restricting data ranges. If there is no suitable condition, it throws an exception. However, it does not check whether the condition reduces the amount of data to read. For more information about data ranges in MergeTree tables, see [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). \N \N 0 Bool 0 0 Production | |
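A minimal sketch of both `force_*` checks against a hypothetical MergeTree table; the first query passes because it restricts the date, the second throws because the primary key cannot be used.

```sql
CREATE TABLE visits_demo
(
    EventDate Date,
    UserID UInt64,
    Url String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(EventDate)
ORDER BY UserID;

-- Ok: the WHERE clause restricts the date range.
SELECT count() FROM visits_demo
WHERE EventDate = '2024-01-01'
SETTINGS force_index_by_date = 1;

-- Throws: no condition on the primary key (UserID).
SELECT count() FROM visits_demo
WHERE Url = 'https://example.com'
SETTINGS force_primary_key = 1;
```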
use_skip_indexes 1 0 Use data skipping indexes during query execution.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 1 0 Production | |
use_skip_indexes_if_final 0 0 Controls whether skipping indexes are used when executing a query with the FINAL modifier.\n\nBy default, this setting is disabled because skip indexes may exclude rows (granules) containing the latest data, which could lead to incorrect results. When enabled, skipping indexes are applied even with the FINAL modifier, potentially improving performance but with the risk of missing recent updates.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 0 0 Production | |
materialize_skip_indexes_on_insert 1 0 Whether INSERTs build and store skip indexes. If disabled, skip indexes will be built and stored during merges or by an explicit MATERIALIZE INDEX. \N \N 0 Bool 1 0 Production | |
materialize_statistics_on_insert 1 0 Whether INSERTs build and store statistics. If disabled, statistics will be built and stored during merges or by an explicit MATERIALIZE STATISTICS. \N \N 0 Bool 1 0 Production | |
ignore_data_skipping_indices 0 Ignores the skipping indexes specified if used by the query.\n\nConsider the following example:\n\n```sql\nCREATE TABLE data\n(\n key Int,\n x Int,\n y Int,\n INDEX x_idx x TYPE minmax GRANULARITY 1,\n INDEX y_idx y TYPE minmax GRANULARITY 1,\n INDEX xy_idx (x,y) TYPE minmax GRANULARITY 1\n)\nEngine=MergeTree()\nORDER BY key;\n\nINSERT INTO data VALUES (1, 2, 3);\n\nSELECT * FROM data;\nSELECT * FROM data SETTINGS ignore_data_skipping_indices=\'\'; -- query will produce CANNOT_PARSE_TEXT error.\nSELECT * FROM data SETTINGS ignore_data_skipping_indices=\'x_idx\'; -- Ok.\nSELECT * FROM data SETTINGS ignore_data_skipping_indices=\'na_idx\'; -- Ok.\n\nSELECT * FROM data WHERE x = 1 AND y = 1 SETTINGS ignore_data_skipping_indices=\'xy_idx\',force_data_skipping_indices=\'xy_idx\' ; -- query will produce INDEX_NOT_USED error, since xy_idx is explicitly ignored.\nSELECT * FROM data WHERE x = 1 AND y = 2 SETTINGS ignore_data_skipping_indices=\'xy_idx\';\n```\n\nThe query without ignoring any indexes:\n```sql\nEXPLAIN indexes = 1 SELECT * FROM data WHERE x = 1 AND y = 2;\n\nExpression ((Projection + Before ORDER BY))\n Filter (WHERE)\n ReadFromMergeTree (default.data)\n Indexes:\n PrimaryKey\n Condition: true\n Parts: 1/1\n Granules: 1/1\n Skip\n Name: x_idx\n Description: minmax GRANULARITY 1\n Parts: 0/1\n Granules: 0/1\n Skip\n Name: y_idx\n Description: minmax GRANULARITY 1\n Parts: 0/0\n Granules: 0/0\n Skip\n Name: xy_idx\n Description: minmax GRANULARITY 1\n Parts: 0/0\n Granules: 0/0\n```\n\nIgnoring the `xy_idx` index:\n```sql\nEXPLAIN indexes = 1 SELECT * FROM data WHERE x = 1 AND y = 2 SETTINGS ignore_data_skipping_indices=\'xy_idx\';\n\nExpression ((Projection + Before ORDER BY))\n Filter (WHERE)\n ReadFromMergeTree (default.data)\n Indexes:\n PrimaryKey\n Condition: true\n Parts: 1/1\n Granules: 1/1\n Skip\n Name: x_idx\n Description: minmax GRANULARITY 1\n Parts: 0/1\n Granules: 0/1\n Skip\n Name: y_idx\n Description: minmax GRANULARITY 1\n Parts: 0/0\n Granules: 0/0\n```\n\nWorks with tables in the MergeTree family. \N \N 0 String 0 Production | |
force_data_skipping_indices 0 Disables query execution if the passed data skipping indices weren\'t used.\n\nConsider the following example:\n\n```sql\nCREATE TABLE data_01515\n(\n key Int,\n d1 Int,\n d1_null Nullable(Int),\n INDEX d1_idx d1 TYPE minmax GRANULARITY 1,\n INDEX d1_null_idx assumeNotNull(d1_null) TYPE minmax GRANULARITY 1\n)\nEngine=MergeTree()\nORDER BY key;\n\nSELECT * FROM data_01515;\nSELECT * FROM data_01515 SETTINGS force_data_skipping_indices=\'\'; -- query will produce CANNOT_PARSE_TEXT error.\nSELECT * FROM data_01515 SETTINGS force_data_skipping_indices=\'d1_idx\'; -- query will produce INDEX_NOT_USED error.\nSELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=\'d1_idx\'; -- Ok.\nSELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=\'`d1_idx`\'; -- Ok (example of the full-featured parser).\nSELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=\'`d1_idx`, d1_null_idx\'; -- query will produce INDEX_NOT_USED error, since d1_null_idx is not used.\nSELECT * FROM data_01515 WHERE d1 = 0 AND assumeNotNull(d1_null) = 0 SETTINGS force_data_skipping_indices=\'`d1_idx`, d1_null_idx\'; -- Ok.\n``` \N \N 0 String 0 Production | |
max_streams_to_max_threads_ratio 1 0 Allows you to use more sources than the number of threads - to more evenly distribute work across threads. It is assumed that this is a temporary solution since it will be possible in the future to make the number of sources equal to the number of threads, but for each source to dynamically select available work for itself. \N \N 0 Float 1 0 Production | |
max_streams_multiplier_for_merge_tables 5 0 Ask more streams when reading from Merge table. Streams will be spread across tables that Merge table will use. This allows more even distribution of work across threads and is especially helpful when merged tables differ in size. \N \N 0 Float 5 0 Production | |
network_compression_method LZ4 0 Sets the method of data compression that is used for communication between servers and between server and [clickhouse-client](../../interfaces/cli.md).\n\nPossible values:\n\n- `LZ4` — sets LZ4 compression method.\n- `ZSTD` — sets ZSTD compression method.\n\n**See Also**\n\n- [network_zstd_compression_level](#network_zstd_compression_level) \N \N 0 String LZ4 0 Production | |
network_zstd_compression_level 1 0 Adjusts the level of ZSTD compression. Used only when [network_compression_method](#network_compression_method) is set to `ZSTD`.\n\nPossible values:\n\n- Positive integer from 1 to 15. \N \N 0 Int64 1 0 Production | |
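A short sketch of switching the wire compression for the current session; both settings are applied together since the level only matters when the method is `ZSTD`.

```sql
SET network_compression_method = 'ZSTD';
SET network_zstd_compression_level = 3;  -- valid range is 1 to 15
```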
zstd_window_log_max 0 0 Allows you to select the max window log of ZSTD (it will not be used for MergeTree family) \N \N 0 Int64 0 0 Production | |
priority 0 0 Priority of the query. 1 is the highest priority; a higher value means lower priority; 0 means priorities are not used. \N \N 0 UInt64 0 0 Production | |
os_thread_priority 0 0 Sets the priority ([nice](https://en.wikipedia.org/wiki/Nice_(Unix))) for threads that execute queries. The OS scheduler considers this priority when choosing the next thread to run on each available CPU core.\n\n:::note\nTo use this setting, you need to set the `CAP_SYS_NICE` capability. The `clickhouse-server` package sets it up during installation. Some virtual environments do not allow you to set the `CAP_SYS_NICE` capability. In this case, `clickhouse-server` shows a message about it at the start.\n:::\n\nPossible values:\n\n- You can set values in the range `[-20, 19]`.\n\nLower values mean higher priority. Threads with low `nice` priority values are executed more frequently than threads with high values. High values are preferable for long-running non-interactive queries because it allows them to quickly give up resources in favour of short interactive queries when they arrive. \N \N 0 Int64 0 0 Production | |
log_queries 1 0 Setting up query logging.\n\nQueries sent to ClickHouse with this setup are logged according to the rules in the [query_log](../../operations/server-configuration-parameters/settings.md/#query_log) server configuration parameter.\n\nExample:\n\n```text\nlog_queries=1\n``` \N \N 0 Bool 1 0 Production | |
log_formatted_queries 0 0 Allows logging formatted queries to the [system.query_log](../../operations/system-tables/query_log.md) system table (populates the `formatted_query` column in the [system.query_log](../../operations/system-tables/query_log.md)).\n\nPossible values:\n\n- 0 — Formatted queries are not logged in the system table.\n- 1 — Formatted queries are logged in the system table. \N \N 0 Bool 0 0 Production | |
log_queries_min_type QUERY_START 0 `query_log` minimal type to log.\n\nPossible values:\n- `QUERY_START` (`=1`)\n- `QUERY_FINISH` (`=2`)\n- `EXCEPTION_BEFORE_START` (`=3`)\n- `EXCEPTION_WHILE_PROCESSING` (`=4`)\n\nCan be used to limit which entities will go to `query_log`, say you are interested only in errors, then you can use `EXCEPTION_WHILE_PROCESSING`:\n\n```text\nlog_queries_min_type=\'EXCEPTION_WHILE_PROCESSING\'\n``` \N \N 0 LogQueriesType QUERY_START 0 Production | |
log_queries_min_query_duration_ms 0 0 If enabled (non-zero), queries faster than the value of this setting will not be logged (you can think about this as a `long_query_time` for [MySQL Slow Query Log](https://dev.mysql.com/doc/refman/5.7/slow-query-log.html)), and this basically means that you will not find them in the following tables:\n\n- `system.query_log`\n- `system.query_thread_log`\n\nOnly the queries with the following type will get to the log:\n\n- `QUERY_FINISH`\n- `EXCEPTION_WHILE_PROCESSING`\n\n- Type: milliseconds\n- Default value: 0 (any query) \N \N 0 Milliseconds 0 0 Production | |
log_queries_cut_to_length 100000 0 If query length is greater than a specified threshold (in bytes), then cut query when writing to query log. Also limit the length of printed query in ordinary text log. \N \N 0 UInt64 100000 0 Production | |
log_queries_probability 1 0 Allows a user to write to [query_log](../../operations/system-tables/query_log.md), [query_thread_log](../../operations/system-tables/query_thread_log.md), and [query_views_log](../../operations/system-tables/query_views_log.md) system tables only a sample of queries selected randomly with the specified probability. It helps to reduce the load with a large volume of queries in a second.\n\nPossible values:\n\n- 0 — Queries are not logged in the system tables.\n- Positive floating-point number in the range [0..1]. For example, if the setting value is `0.5`, about half of the queries are logged in the system tables.\n- 1 — All queries are logged in the system tables. \N \N 0 Float 1 0 Production | |
log_processors_profiles 1 0 Write time that processor spent during execution/waiting for data to `system.processors_profile_log` table.\n\nSee also:\n\n- [`system.processors_profile_log`](../../operations/system-tables/processors_profile_log.md)\n- [`EXPLAIN PIPELINE`](../../sql-reference/statements/explain.md/#explain-pipeline) \N \N 0 Bool 1 0 Production | |
distributed_product_mode deny 0 Changes the behaviour of [distributed subqueries](../../sql-reference/operators/in.md).\n\nClickHouse applies this setting when the query contains the product of distributed tables, i.e. when the query for a distributed table contains a non-GLOBAL subquery for the distributed table.\n\nRestrictions:\n\n- Only applied for IN and JOIN subqueries.\n- Only if the FROM section uses a distributed table containing more than one shard.\n- If the subquery concerns a distributed table containing more than one shard.\n- Not used for a table-valued [remote](../../sql-reference/table-functions/remote.md) function.\n\nPossible values:\n\n- `deny` — Default value. Prohibits using these types of subqueries (returns the "Double-distributed in/JOIN subqueries is denied" exception).\n- `local` — Replaces the database and table in the subquery with local ones for the destination server (shard), leaving the normal `IN`/`JOIN.`\n- `global` — Replaces the `IN`/`JOIN` query with `GLOBAL IN`/`GLOBAL JOIN.`\n- `allow` — Allows the use of these types of subqueries. \N \N 0 DistributedProductMode deny 0 Production | |
max_concurrent_queries_for_all_users 0 0 Throws an exception if the value of this setting is less than or equal to the current number of simultaneously processed queries.\n\nExample: `max_concurrent_queries_for_all_users` can be set to 99 for all users, and the database administrator can set it to 100 for themselves to run queries for investigation even when the server is overloaded.\n\nModifying the setting for one query or user does not affect other queries.\n\nPossible values:\n\n- Positive integer.\n- 0 — No limit.\n\n**Example**\n\n```xml\n<max_concurrent_queries_for_all_users>99</max_concurrent_queries_for_all_users>\n```\n\n**See Also**\n\n- [max_concurrent_queries](/operations/server-configuration-parameters/settings#max_concurrent_queries) \N \N 0 UInt64 0 0 Production | |
max_concurrent_queries_for_user 0 0 The maximum number of simultaneously processed queries per user.\n\nPossible values:\n\n- Positive integer.\n- 0 — No limit.\n\n**Example**\n\n```xml\n<max_concurrent_queries_for_user>5</max_concurrent_queries_for_user>\n``` \N \N 0 UInt64 0 0 Production | |
insert_deduplicate 1 0 Enables or disables block deduplication of `INSERT` (for Replicated\\* tables).\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\nBy default, blocks inserted into replicated tables by the `INSERT` statement are deduplicated (see [Data Replication](../../engines/table-engines/mergetree-family/replication.md)).\nFor replicated tables, by default only the 100 most recent blocks for each partition are deduplicated (see [replicated_deduplication_window](merge-tree-settings.md/#replicated_deduplication_window), [replicated_deduplication_window_seconds](merge-tree-settings.md/#replicated_deduplication_window_seconds)).\nFor non-replicated tables, see [non_replicated_deduplication_window](merge-tree-settings.md/#non_replicated_deduplication_window). \N \N 0 Bool 1 0 Production | |
async_insert_deduplicate 0 0 For async INSERT queries in the replicated table, specifies that deduplication of inserting blocks should be performed \N \N 0 Bool 0 0 Production | |
insert_quorum 0 0 :::note\nThis setting is not applicable to SharedMergeTree, see [SharedMergeTree consistency](/cloud/reference/shared-merge-tree#consistency) for more information.\n:::\n\nEnables the quorum writes.\n\n- If `insert_quorum < 2`, the quorum writes are disabled.\n- If `insert_quorum >= 2`, the quorum writes are enabled.\n- If `insert_quorum = \'auto\'`, use majority number (`number_of_replicas / 2 + 1`) as quorum number.\n\nQuorum writes\n\n`INSERT` succeeds only when ClickHouse manages to correctly write data to the `insert_quorum` of replicas during the `insert_quorum_timeout`. If for any reason the number of replicas with successful writes does not reach the `insert_quorum`, the write is considered failed and ClickHouse will delete the inserted block from all the replicas where data has already been written.\n\nWhen `insert_quorum_parallel` is disabled, all replicas in the quorum are consistent, i.e. they contain data from all previous `INSERT` queries (the `INSERT` sequence is linearized). When reading data written using `insert_quorum` and `insert_quorum_parallel` is disabled, you can turn on sequential consistency for `SELECT` queries using [select_sequential_consistency](#select_sequential_consistency).\n\nClickHouse generates an exception:\n\n- If the number of available replicas at the time of the query is less than the `insert_quorum`.\n- When `insert_quorum_parallel` is disabled and an attempt to write data is made when the previous block has not yet been inserted in `insert_quorum` of replicas. This situation may occur if the user tries to perform another `INSERT` query to the same table before the previous one with `insert_quorum` is completed.\n\nSee also:\n\n- [insert_quorum_timeout](#insert_quorum_timeout)\n- [insert_quorum_parallel](#insert_quorum_parallel)\n- [select_sequential_consistency](#select_sequential_consistency) \N \N 0 UInt64Auto 0 0 Production | |
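A hedged sketch of quorum writes, assuming a hypothetical replicated table with three replicas; the INSERT only succeeds after enough replicas acknowledge the block within `insert_quorum_timeout`.

```sql
-- Require acknowledgement from at least 2 replicas.
INSERT INTO replicated_events SETTINGS insert_quorum = 2 VALUES (1, 'first');

-- 'auto' uses the majority of replicas (number_of_replicas / 2 + 1) as the quorum.
INSERT INTO replicated_events SETTINGS insert_quorum = 'auto' VALUES (2, 'second');
```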
insert_quorum_timeout 600000 0 Write to a quorum timeout in milliseconds. If the timeout has passed and no write has taken place yet, ClickHouse will generate an exception and the client must repeat the query to write the same block to the same or any other replica.\n\nSee also:\n\n- [insert_quorum](#insert_quorum)\n- [insert_quorum_parallel](#insert_quorum_parallel)\n- [select_sequential_consistency](#select_sequential_consistency) \N \N 0 Milliseconds 600000 0 Production | |
insert_quorum_parallel 1 0 :::note\nThis setting is not applicable to SharedMergeTree, see [SharedMergeTree consistency](/cloud/reference/shared-merge-tree#consistency) for more information.\n:::\n\nEnables or disables parallelism for quorum `INSERT` queries. If enabled, additional `INSERT` queries can be sent while previous queries have not yet finished. If disabled, additional writes to the same table will be rejected.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\nSee also:\n\n- [insert_quorum](#insert_quorum)\n- [insert_quorum_timeout](#insert_quorum_timeout)\n- [select_sequential_consistency](#select_sequential_consistency) \N \N 0 Bool 1 0 Production | |
select_sequential_consistency 0 0 :::note\nThis setting differs in behavior between SharedMergeTree and ReplicatedMergeTree; see [SharedMergeTree consistency](/cloud/reference/shared-merge-tree#consistency) for more information about the behavior of `select_sequential_consistency` in SharedMergeTree.\n:::\n\nEnables or disables sequential consistency for `SELECT` queries. Requires `insert_quorum_parallel` to be disabled (enabled by default).\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\nUsage\n\nWhen sequential consistency is enabled, ClickHouse allows the client to execute the `SELECT` query only for those replicas that contain data from all previous `INSERT` queries executed with `insert_quorum`. If the client refers to a partial replica, ClickHouse will generate an exception. The SELECT query will not include data that has not yet been written to the quorum of replicas.\n\nWhen `insert_quorum_parallel` is enabled (the default), then `select_sequential_consistency` does not work. This is because parallel `INSERT` queries can be written to different sets of quorum replicas, so there is no guarantee that a single replica will have received all writes.\n\nSee also:\n\n- [insert_quorum](#insert_quorum)\n- [insert_quorum_timeout](#insert_quorum_timeout)\n- [insert_quorum_parallel](#insert_quorum_parallel) \N \N 0 UInt64 0 0 Production | |
table_function_remote_max_addresses 1000 0 Sets the maximum number of addresses generated from patterns for the [remote](../../sql-reference/table-functions/remote.md) function.\n\nPossible values:\n\n- Positive integer. \N \N 0 UInt64 1000 0 Production | |
read_backoff_min_latency_ms 1000 0 Setting to reduce the number of threads in case of slow reads. Pay attention only to reads that took at least that much time. \N \N 0 Milliseconds 1000 0 Production | |
read_backoff_max_throughput 1048576 0 Settings to reduce the number of threads in case of slow reads. Count events when the read bandwidth is less than that many bytes per second. \N \N 0 UInt64 1048576 0 Production | |
read_backoff_min_interval_between_events_ms 1000 0 Settings to reduce the number of threads in case of slow reads. Ignore the event if less than a certain amount of time has passed since the previous one. \N \N 0 Milliseconds 1000 0 Production | |
read_backoff_min_events 2 0 Settings to reduce the number of threads in case of slow reads. The number of events after which the number of threads will be reduced. \N \N 0 UInt64 2 0 Production | |
read_backoff_min_concurrency 1 0 Settings to try keeping the minimal number of threads in case of slow reads. \N \N 0 UInt64 1 0 Production | |
memory_tracker_fault_probability 0 0 For testing of `exception safety` - throw an exception every time you allocate memory with the specified probability. \N \N 0 Float 0 0 Production | |
merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability 0 0 For testing of `PartsSplitter` - split read ranges into intersecting and non intersecting every time you read from MergeTree with the specified probability. \N \N 0 Float 0 0 Production | |
enable_http_compression 0 0 Enables or disables data compression in the response to an HTTP request.\n\nFor more information, read the [HTTP interface description](../../interfaces/http.md).\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 0 0 Production | |
http_zlib_compression_level 3 0 Sets the level of data compression in the response to an HTTP request if [enable_http_compression = 1](#enable_http_compression).\n\nPossible values: Numbers from 1 to 9. \N \N 0 Int64 3 0 Production | |
http_native_compression_disable_checksumming_on_decompress 0 0 Enables or disables checksum verification when decompressing the HTTP POST data from the client. Used only for ClickHouse native compression format (not used with `gzip` or `deflate`).\n\nFor more information, read the [HTTP interface description](../../interfaces/http.md).\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 0 0 Production | |
http_response_headers {} 0 Allows adding or overriding HTTP headers which the server will return in the response with a successful query result.\nThis only affects the HTTP interface.\n\nIf the header is already set by default, the provided value will override it.\nIf the header was not set by default, it will be added to the list of headers.\nHeaders that are set by the server by default and not overridden by this setting will remain.\n\nThe setting allows you to set a header to a constant value. Currently there is no way to set a header to a dynamically calculated value.\n\nNeither names nor values can contain ASCII control characters.\n\nIf you implement a UI application which allows users to modify settings but at the same time makes decisions based on the returned headers, it is recommended to restrict this setting to readonly.\n\nExample: `SET http_response_headers = \'{"Content-Type": "image/png"}\'` \N \N 0 Map {} 0 Production | |
count_distinct_implementation uniqExact 0 Specifies which of the `uniq*` functions should be used to perform the [COUNT(DISTINCT ...)](/sql-reference/aggregate-functions/reference/count) construction.\n\nPossible values:\n\n- [uniq](/sql-reference/aggregate-functions/reference/uniq)\n- [uniqCombined](/sql-reference/aggregate-functions/reference/uniqcombined)\n- [uniqCombined64](/sql-reference/aggregate-functions/reference/uniqcombined64)\n- [uniqHLL12](/sql-reference/aggregate-functions/reference/uniqhll12)\n- [uniqExact](/sql-reference/aggregate-functions/reference/uniqexact) \N \N 0 String uniqExact 0 Production | |
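A minimal example of swapping the function behind `COUNT(DISTINCT ...)`; `numbers(1000)` is just a built-in source of test data.

```sql
-- Rewrites count(DISTINCT ...) to the approximate uniqCombined instead of uniqExact.
SELECT count(DISTINCT number % 10)
FROM numbers(1000)
SETTINGS count_distinct_implementation = 'uniqCombined';
```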
add_http_cors_header 0 0 Add the HTTP CORS header to responses. \N \N 0 Bool 0 0 Production | |
max_http_get_redirects 0 0 Max number of HTTP GET redirect hops allowed. Ensures additional security measures are in place to prevent a malicious server from redirecting your requests to unexpected services.\n\nThis matters when an external server redirects to another address, but that address appears to be internal to the company\'s infrastructure, and by sending an HTTP request to an internal server, you could request an internal API from the internal network, bypassing the auth, or even query other services, such as Redis or Memcached. When you don\'t have an internal infrastructure (including something running on your localhost), or you trust the server, it is safe to allow redirects. Keep in mind, though, that if the URL uses HTTP instead of HTTPS, you will have to trust not only the remote server but also your ISP and every network in the middle. \N \N 0 UInt64 0 0 Production | |
use_client_time_zone 0 0 Use client timezone for interpreting DateTime string values, instead of adopting server timezone. \N \N 0 Bool 0 0 Production | |
send_progress_in_http_headers 0 0 Enables or disables `X-ClickHouse-Progress` HTTP response headers in `clickhouse-server` responses.\n\nFor more information, read the [HTTP interface description](../../interfaces/http.md).\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 0 0 Production | |
http_headers_progress_interval_ms 100 0 Do not send HTTP headers X-ClickHouse-Progress more frequently than at each specified interval. \N \N 0 UInt64 100 0 Production | |
http_wait_end_of_query 0 0 Enable HTTP response buffering on the server-side. \N \N 0 Bool 0 0 Production | |
http_write_exception_in_output_format 1 0 Write exception in output format to produce valid output. Works with JSON and XML formats. \N \N 0 Bool 1 0 Production | |
http_response_buffer_size 0 0 The number of bytes to buffer in the server memory before sending an HTTP response to the client or flushing to disk (when http_wait_end_of_query is enabled). \N \N 0 UInt64 0 0 Production | |
fsync_metadata 1 0 Enables or disables [fsync](http://pubs.opengroup.org/onlinepubs/9699919799/functions/fsync.html) when writing `.sql` files. Enabled by default.\n\nIt makes sense to disable it if the server has millions of tiny tables that are constantly being created and destroyed. \N \N 0 Bool 1 0 Production | |
join_use_nulls 0 0 Sets the type of [JOIN](../../sql-reference/statements/select/join.md) behaviour. When merging tables, empty cells may appear. ClickHouse fills them differently based on this setting.\n\nPossible values:\n\n- 0 — The empty cells are filled with the default value of the corresponding field type.\n- 1 — `JOIN` behaves the same way as in standard SQL. The type of the corresponding field is converted to [Nullable](/sql-reference/data-types/nullable), and empty cells are filled with [NULL](/sql-reference/syntax). \N \N 0 Bool 0 0 Production | |
join_output_by_rowlist_perkey_rows_threshold 5 0 The lower limit of per-key average rows in the right table to determine whether to output by row list in hash join. \N \N 0 UInt64 5 0 Production | |
join_default_strictness ALL 0 Sets default strictness for [JOIN clauses](/sql-reference/statements/select/join).\n\nPossible values:\n\n- `ALL` — If the right table has several matching rows, ClickHouse creates a [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) from matching rows. This is the normal `JOIN` behaviour from standard SQL.\n- `ANY` — If the right table has several matching rows, only the first one found is joined. If the right table has only one matching row, the results of `ANY` and `ALL` are the same.\n- `ASOF` — For joining sequences with an uncertain match.\n- `Empty string` — If `ALL` or `ANY` is not specified in the query, ClickHouse throws an exception. \N \N 0 JoinStrictness ALL 0 Production | |
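A short sketch of how the default strictness kicks in when the query itself does not spell out `ALL`/`ANY`; `t_left` and `t_right` are hypothetical tables.

```sql
SET join_default_strictness = 'ANY';

-- With no strictness written in the query, this behaves like ANY LEFT JOIN:
-- only the first matching right-hand row is joined per key.
SELECT *
FROM t_left
LEFT JOIN t_right USING (id);
```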
any_join_distinct_right_table_keys 0 0 Enables legacy ClickHouse server behaviour in `ANY INNER|LEFT JOIN` operations.\n\n:::note\nUse this setting only for backward compatibility if your use cases depend on legacy `JOIN` behaviour.\n:::\n\nWhen the legacy behaviour is enabled:\n\n- Results of `t1 ANY LEFT JOIN t2` and `t2 ANY RIGHT JOIN t1` operations are not equal because ClickHouse uses the logic with many-to-one left-to-right table keys mapping.\n- Results of `ANY INNER JOIN` operations contain all rows from the left table like the `SEMI LEFT JOIN` operations do.\n\nWhen the legacy behaviour is disabled:\n\n- Results of `t1 ANY LEFT JOIN t2` and `t2 ANY RIGHT JOIN t1` operations are equal because ClickHouse uses the logic which provides one-to-many keys mapping in `ANY RIGHT JOIN` operations.\n- Results of `ANY INNER JOIN` operations contain one row per key from both the left and right tables.\n\nPossible values:\n\n- 0 — Legacy behaviour is disabled.\n- 1 — Legacy behaviour is enabled.\n\nSee also:\n\n- [JOIN strictness](/sql-reference/statements/select/join#settings) \N \N 0 Bool 0 0 Production | |
single_join_prefer_left_table 1 0 For a single JOIN, prefer the left table in case of identifier ambiguity. \N \N 0 Bool 1 0 Production | |
query_plan_join_swap_table auto 0 Determine which side of the join should be the build table (also called inner, the one inserted into the hash table for a hash join) in the query plan. This setting is supported only for `ALL` join strictness with the `JOIN ON` clause. Possible values are:\n - \'auto\': Let the planner decide which table to use as the build table.\n - \'false\': Never swap tables (the right table is the build table).\n - \'true\': Always swap tables (the left table is the build table). \N \N 0 BoolAuto auto 0 Production | |
query_plan_join_shard_by_pk_ranges 0 0 Apply sharding for JOIN if join keys contain a prefix of PRIMARY KEY for both tables. Supported for hash, parallel_hash and full_sorting_merge algorithms\n \N \N 0 Bool 0 0 Production | |
preferred_block_size_bytes 1000000 0 This setting adjusts the data block size for query processing and represents additional fine-tuning to the more rough \'max_block_size\' setting. If the columns are large and with \'max_block_size\' rows the block size is likely to be larger than the specified amount of bytes, its size will be lowered for better CPU cache locality. \N \N 0 UInt64 1000000 0 Production | |
max_replica_delay_for_distributed_queries 300 0 Disables lagging replicas for distributed queries. See [Replication](../../engines/table-engines/mergetree-family/replication.md).\n\nSets the time in seconds. If a replica\'s lag is greater than or equal to the set value, this replica is not used.\n\nPossible values:\n\n- Positive integer.\n- 0 — Replica lags are not checked.\n\nTo prevent the use of any replica with a non-zero lag, set this parameter to 1.\n\nUsed when performing `SELECT` from a distributed table that points to replicated tables. \N \N 0 UInt64 300 0 Production | |
fallback_to_stale_replicas_for_distributed_queries 1 0 Forces a query to an out-of-date replica if updated data is not available. See [Replication](../../engines/table-engines/mergetree-family/replication.md).\n\nClickHouse selects the most relevant from the outdated replicas of the table.\n\nUsed when performing `SELECT` from a distributed table that points to replicated tables.\n\nBy default, 1 (enabled). \N \N 0 Bool 1 0 Production | |
preferred_max_column_in_block_size_bytes 0 0 Limit on max column size in block while reading. Helps to decrease cache misses count. Should be close to L2 cache size. \N \N 0 UInt64 0 0 Production | |
parts_to_delay_insert 0 0 If the destination table contains at least that many active parts in a single partition, artificially slow down insert into table. \N \N 0 UInt64 0 0 Production | |
parts_to_throw_insert 0 0 If the destination table contains more than this number of active parts in a single partition, throw the \'Too many parts ...\' exception. \N \N 0 UInt64 0 0 Production | |
number_of_mutations_to_delay 0 0 If the mutated table contains at least that many unfinished mutations, artificially slow down mutations of table. 0 - disabled \N \N 0 UInt64 0 0 Production | |
number_of_mutations_to_throw 0 0 If the mutated table contains at least that many unfinished mutations, throw \'Too many mutations ...\' exception. 0 - disabled \N \N 0 UInt64 0 0 Production | |
distributed_ddl_task_timeout 180 0 Sets timeout for DDL query responses from all hosts in cluster. If a DDL request has not been performed on all hosts, a response will contain a timeout error and a request will be executed in an async mode. Negative value means infinite.\n\nPossible values:\n\n- Positive integer.\n- 0 — Async mode.\n- Negative integer — infinite timeout. \N \N 0 Int64 180 0 Production | |
stream_flush_interval_ms 7500 0 Works for tables with streaming in the case of a timeout, or when a thread generates [max_insert_block_size](#max_insert_block_size) rows.\n\nThe default value is 7500.\n\nThe smaller the value, the more often data is flushed into the table. Setting the value too low leads to poor performance. \N \N 0 Milliseconds 7500 0 Production | |
stream_poll_timeout_ms 500 0 Timeout for polling data from/to streaming storages. \N \N 0 Milliseconds 500 0 Production | |
min_free_disk_bytes_to_perform_insert 0 0 Minimum free disk space bytes to perform an insert. \N \N 0 UInt64 0 0 Production | |
min_free_disk_ratio_to_perform_insert 0 0 Minimum free disk space ratio to perform an insert. \N \N 0 Float 0 0 Production | |
final 0 0 Automatically applies [FINAL](../../sql-reference/statements/select/from.md/#final-modifier) modifier to all tables in a query, to tables where [FINAL](../../sql-reference/statements/select/from.md/#final-modifier) is applicable, including joined tables and tables in sub-queries, and\ndistributed tables.\n\nPossible values:\n\n- 0 - disabled\n- 1 - enabled\n\nExample:\n\n```sql\nCREATE TABLE test\n(\n key Int64,\n some String\n)\nENGINE = ReplacingMergeTree\nORDER BY key;\n\nINSERT INTO test FORMAT Values (1, \'first\');\nINSERT INTO test FORMAT Values (1, \'second\');\n\nSELECT * FROM test;\n┌─key─┬─some───┐\n│ 1 │ second │\n└─────┴────────┘\n┌─key─┬─some──┐\n│ 1 │ first │\n└─────┴───────┘\n\nSELECT * FROM test SETTINGS final = 1;\n┌─key─┬─some───┐\n│ 1 │ second │\n└─────┴────────┘\n\nSET final = 1;\nSELECT * FROM test;\n┌─key─┬─some───┐\n│ 1 │ second │\n└─────┴────────┘\n``` \N \N 0 Bool 0 0 Production | |
partial_result_on_first_cancel 0 0 Allows query to return a partial result after cancel. \N \N 0 Bool 0 0 Production | |
ignore_on_cluster_for_replicated_udf_queries 0 0 Ignore ON CLUSTER clause for replicated UDF management queries. \N \N 0 Bool 0 0 Production | |
ignore_on_cluster_for_replicated_access_entities_queries 0 0 Ignore ON CLUSTER clause for replicated access entities management queries. \N \N 0 Bool 0 0 Production | |
ignore_on_cluster_for_replicated_named_collections_queries 0 0 Ignore ON CLUSTER clause for replicated named collections management queries. \N \N 0 Bool 0 0 Production | |
sleep_in_send_tables_status_ms 0 0 Time to sleep in sending tables status response in TCPHandler \N \N 0 Milliseconds 0 0 Production | |
sleep_in_send_data_ms 0 0 Time to sleep in sending data in TCPHandler \N \N 0 Milliseconds 0 0 Production | |
sleep_after_receiving_query_ms 0 0 Time to sleep after receiving query in TCPHandler \N \N 0 Milliseconds 0 0 Production | |
unknown_packet_in_send_data 0 0 Send an unknown packet instead of the Nth data packet \N \N 0 UInt64 0 0 Production | |
insert_allow_materialized_columns 0 0 If the setting is enabled, allow materialized columns in INSERT. \N \N 0 Bool 0 0 Production | |
http_connection_timeout 1 0 HTTP connection timeout (in seconds).\n\nPossible values:\n\n- Any positive integer.\n- 0 - Disabled (infinite timeout). \N \N 0 Seconds 1 0 Production | |
http_send_timeout 30 0 HTTP send timeout (in seconds).\n\nPossible values:\n\n- Any positive integer.\n- 0 - Disabled (infinite timeout).\n\n:::note\nIt\'s applicable only to the default profile. A server reboot is required for the changes to take effect.\n::: \N \N 0 Seconds 30 0 Production | |
http_receive_timeout 30 0 HTTP receive timeout (in seconds).\n\nPossible values:\n\n- Any positive integer.\n- 0 - Disabled (infinite timeout). \N \N 0 Seconds 30 0 Production | |
http_max_uri_size 1048576 0 Sets the maximum URI length of an HTTP request.\n\nPossible values:\n\n- Positive integer. \N \N 0 UInt64 1048576 0 Production | |
http_max_fields 1000000 0 Maximum number of fields in HTTP header \N \N 0 UInt64 1000000 0 Production | |
http_max_field_name_size 131072 0 Maximum length of field name in HTTP header \N \N 0 UInt64 131072 0 Production | |
http_max_field_value_size 131072 0 Maximum length of field value in HTTP header \N \N 0 UInt64 131072 0 Production | |
http_skip_not_found_url_for_globs 1 0 Skip URLs for globs with HTTP_NOT_FOUND error \N \N 0 Bool 1 0 Production | |
http_make_head_request 1 0 The `http_make_head_request` setting allows the execution of a `HEAD` request while reading data from HTTP to retrieve information about the file to be read, such as its size. Since it\'s enabled by default, it may be desirable to disable this setting in cases where the server does not support `HEAD` requests. \N \N 0 Bool 1 0 Production | |
optimize_throw_if_noop 0 0 Enables or disables throwing an exception if an [OPTIMIZE](../../sql-reference/statements/optimize.md) query didn\'t perform a merge.\n\nBy default, `OPTIMIZE` returns successfully even if it didn\'t do anything. This setting lets you differentiate these situations and get the reason in an exception message.\n\nPossible values:\n\n- 1 — Throwing an exception is enabled.\n- 0 — Throwing an exception is disabled. \N \N 0 Bool 0 0 Production | |
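A hedged sketch; `events` is a hypothetical MergeTree table. With the setting enabled, a no-op `OPTIMIZE` raises an exception whose message explains why no merge was performed.

```sql
SET optimize_throw_if_noop = 1;
OPTIMIZE TABLE events FINAL;  -- throws instead of silently doing nothing
```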
use_index_for_in_with_subqueries 1 0 Try using an index if there is a subquery or a table expression on the right side of the IN operator. \N \N 0 Bool 1 0 Production | |
use_index_for_in_with_subqueries_max_values 0 0 The maximum size of the set in the right-hand side of the IN operator to use table index for filtering. It helps avoid performance degradation and higher memory usage due to the preparation of additional data structures for large queries. Zero means no limit. \N \N 0 UInt64 0 0 Production | |
analyze_index_with_space_filling_curves 1 0 If a table has a space-filling curve in its index, e.g. `ORDER BY mortonEncode(x, y)` or `ORDER BY hilbertEncode(x, y)`, and the query has conditions on its arguments, e.g. `x >= 10 AND x <= 20 AND y >= 20 AND y <= 30`, use the space-filling curve for index analysis. \N \N 0 Bool 1 0 Production | |
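A minimal sketch of the space-filling-curve case mentioned above (table and column names are hypothetical):

```sql
-- Table ordered by a Morton curve over (x, y).
CREATE TABLE points (x UInt32, y UInt32) ENGINE = MergeTree ORDER BY mortonEncode(x, y);

-- With analyze_index_with_space_filling_curves = 1 (the default), range
-- conditions on x and y can be mapped to ranges of the curve during index
-- analysis, so fewer granules need to be read.
SELECT count() FROM points WHERE x >= 10 AND x <= 20 AND y >= 20 AND y <= 30;
```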
joined_subquery_requires_alias 1 0 Force joined subqueries and table functions to have aliases for correct name qualification. \N \N 0 Bool 1 0 Production | |
empty_result_for_aggregation_by_empty_set 0 0 Return empty result when aggregating without keys on empty set. \N \N 0 Bool 0 0 Production | |
empty_result_for_aggregation_by_constant_keys_on_empty_set 1 0 Return empty result when aggregating by constant keys on empty set. \N \N 0 Bool 1 0 Production | |
allow_distributed_ddl 1 0 If it is set to true, then a user is allowed to execute distributed DDL queries. \N \N 0 Bool 1 0 Production | |
allow_suspicious_codecs 0 0 If it is set to true, allow specifying meaningless compression codecs. \N \N 0 Bool 0 0 Production | |
enable_zstd_qat_codec 0 0 If turned on, the ZSTD_QAT codec may be used to compress columns. \N \N 0 Bool 0 0 Production | |
enable_deflate_qpl_codec 0 0 If turned on, the DEFLATE_QPL codec may be used to compress columns. \N \N 0 Bool 0 0 Production | |
query_profiler_real_time_period_ns 1000000000 0 Sets the period for a real clock timer of the [query profiler](../../operations/optimizing-performance/sampling-query-profiler.md). Real clock timer counts wall-clock time.\n\nPossible values:\n\n- Positive integer number, in nanoseconds.\n\n Recommended values:\n\n - 10000000 (100 times a second) nanoseconds and less for single queries.\n - 1000000000 (once a second) for cluster-wide profiling.\n\n- 0 for turning off the timer.\n\n**Temporarily disabled in ClickHouse Cloud.**\n\nSee also:\n\n- System table [trace_log](/operations/system-tables/trace_log) \N \N 0 UInt64 1000000000 0 Production | |
query_profiler_cpu_time_period_ns 1000000000 0 Sets the period for a CPU clock timer of the [query profiler](../../operations/optimizing-performance/sampling-query-profiler.md). This timer counts only CPU time.\n\nPossible values:\n\n- A positive integer number of nanoseconds.\n\n Recommended values:\n\n - 10000000 (100 times a second) nanoseconds and more for single queries.\n - 1000000000 (once a second) for cluster-wide profiling.\n\n- 0 for turning off the timer.\n\n**Temporarily disabled in ClickHouse Cloud.**\n\nSee also:\n\n- System table [trace_log](/operations/system-tables/trace_log) \N \N 0 UInt64 1000000000 0 Production | |
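A hedged sketch of how the two profiler periods above are typically used together with `trace_log`:

```sql
-- Sample wall-clock stacks 10 times a second for this session only.
SET query_profiler_real_time_period_ns = 100000000;

SELECT count() FROM numbers(100000000) FORMAT Null;

SYSTEM FLUSH LOGS;
-- 'Real' rows come from the real clock timer, 'CPU' rows from the CPU timer.
SELECT trace_type, count()
FROM system.trace_log
WHERE event_date = today()
GROUP BY trace_type;
```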
metrics_perf_events_enabled 0 0 If enabled, some of the perf events will be measured throughout queries\' execution. \N \N 0 Bool 0 0 Production | |
metrics_perf_events_list 0 Comma separated list of perf metrics that will be measured throughout queries\' execution. Empty means all events. See PerfEventInfo in sources for the available events. \N \N 0 String 0 Production | |
opentelemetry_start_trace_probability 0 0 Sets the probability that ClickHouse can start a trace for executed queries (if no parent [trace context](https://www.w3.org/TR/trace-context/) is supplied).\n\nPossible values:\n\n- 0 — The trace for all executed queries is disabled (if no parent trace context is supplied).\n- Positive floating-point number in the range [0..1]. For example, if the setting value is `0.5`, ClickHouse can start a trace on average for half of the queries.\n- 1 — The trace for all executed queries is enabled. \N \N 0 Float 0 0 Production | |
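A small sketch of sampling traces, assuming the OpenTelemetry span log is enabled in the server configuration:

```sql
-- Start a trace for roughly 10% of queries that arrive without a parent trace context.
SET opentelemetry_start_trace_probability = 0.1;

SELECT count() FROM numbers(1000000) FORMAT Null;

SYSTEM FLUSH LOGS;
SELECT operation_name, finish_time_us - start_time_us AS duration_us
FROM system.opentelemetry_span_log
ORDER BY start_time_us DESC
LIMIT 5;
```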
opentelemetry_trace_processors 0 0 Collect OpenTelemetry spans for processors. \N \N 0 Bool 0 0 Production | |
prefer_column_name_to_alias 0 0 Enables or disables using the original column names instead of aliases in query expressions and clauses. It especially matters when alias is the same as the column name, see [Expression Aliases](/sql-reference/syntax#notes-on-usage). Enable this setting to make aliases syntax rules in ClickHouse more compatible with most other database engines.\n\nPossible values:\n\n- 0 — The column name is substituted with the alias.\n- 1 — The column name is not substituted with the alias.\n\n**Example**\n\nThe difference between enabled and disabled:\n\nQuery:\n\n```sql\nSET prefer_column_name_to_alias = 0;\nSELECT avg(number) AS number, max(number) FROM numbers(10);\n```\n\nResult:\n\n```text\nReceived exception from server (version 21.5.1):\nCode: 184. DB::Exception: Received from localhost:9000. DB::Exception: Aggregate function avg(number) is found inside another aggregate function in query: While processing avg(number) AS number.\n```\n\nQuery:\n\n```sql\nSET prefer_column_name_to_alias = 1;\nSELECT avg(number) AS number, max(number) FROM numbers(10);\n```\n\nResult:\n\n```text\n┌─number─┬─max(number)─┐\n│ 4.5 │ 9 │\n└────────┴─────────────┘\n``` \N \N 0 Bool 0 0 Production | |
skip_redundant_aliases_in_udf 0 0 Redundant aliases are not used (substituted) in user-defined functions in order to simplify their usage.\n\nPossible values:\n\n- 1 — The aliases are skipped (substituted) in UDFs.\n- 0 — The aliases are not skipped (substituted) in UDFs.\n\n**Example**\n\nThe difference between enabled and disabled:\n\nQuery:\n\n```sql\nSET skip_redundant_aliases_in_udf = 0;\nCREATE FUNCTION IF NOT EXISTS test_03274 AS ( x ) -> ((x + 1 as y, y + 2));\n\nEXPLAIN SYNTAX SELECT test_03274(4 + 2);\n```\n\nResult:\n\n```text\nSELECT ((4 + 2) + 1 AS y, y + 2)\n```\n\nQuery:\n\n```sql\nSET skip_redundant_aliases_in_udf = 1;\nCREATE FUNCTION IF NOT EXISTS test_03274 AS ( x ) -> ((x + 1 as y, y + 2));\n\nEXPLAIN SYNTAX SELECT test_03274(4 + 2);\n```\n\nResult:\n\n```text\nSELECT ((4 + 2) + 1, ((4 + 2) + 1) + 2)\n``` \N \N 0 Bool 0 0 Production | |
prefer_global_in_and_join 0 0 Enables the replacement of `IN`/`JOIN` operators with `GLOBAL IN`/`GLOBAL JOIN`.\n\nPossible values:\n\n- 0 — Disabled. `IN`/`JOIN` operators are not replaced with `GLOBAL IN`/`GLOBAL JOIN`.\n- 1 — Enabled. `IN`/`JOIN` operators are replaced with `GLOBAL IN`/`GLOBAL JOIN`.\n\n**Usage**\n\nAlthough `SET distributed_product_mode=global` can change the queries behavior for the distributed tables, it\'s not suitable for local tables or tables from external resources. Here is when the `prefer_global_in_and_join` setting comes into play.\n\nFor example, we have query serving nodes that contain local tables, which are not suitable for distribution. We need to scatter their data on the fly during distributed processing with the `GLOBAL` keyword — `GLOBAL IN`/`GLOBAL JOIN`.\n\nAnother use case of `prefer_global_in_and_join` is accessing tables created by external engines. This setting helps to reduce the number of calls to external sources while joining such tables: only one call per query.\n\n**See also:**\n\n- [Distributed subqueries](/sql-reference/operators/in#distributed-subqueries) for more information on how to use `GLOBAL IN`/`GLOBAL JOIN` \N \N 0 Bool 0 0 Production | |
enable_vertical_final 1 0 If enabled, remove duplicated rows during FINAL by marking rows as deleted and filtering them later instead of merging rows \N \N 0 Bool 1 0 Production | |
max_rows_to_read 0 0 The maximum number of rows that can be read from a table when running a query.\nThe restriction is checked for each processed chunk of data, applied only to the\ndeepest table expression and when reading from a remote server, checked only on\nthe remote server. \N \N 0 UInt64 0 0 Production | |
max_bytes_to_read 0 0 The maximum number of bytes (of uncompressed data) that can be read from a table when running a query.\nThe restriction is checked for each processed chunk of data, applied only to the\ndeepest table expression and when reading from a remote server, checked only on\nthe remote server. \N \N 0 UInt64 0 0 Production | |
read_overflow_mode throw 0 What to do when the limit is exceeded. \N \N 0 OverflowMode throw 0 Production | |
max_rows_to_read_leaf 0 0 The maximum number of rows that can be read from a local table on a leaf node when\nrunning a distributed query. While distributed queries can issue multiple sub-queries\nto each shard (leaf) - this limit will be checked only on the read stage on the\nleaf nodes and ignored on the merging of results stage on the root node.\n\nFor example, a cluster consists of 2 shards and each shard contains a table with\n100 rows. The distributed query which is supposed to read all the data from both\ntables with setting `max_rows_to_read=150` will fail, as in total there will be\n200 rows. A query with `max_rows_to_read_leaf=150` will succeed, since leaf nodes\nwill read at max 100 rows.\n\nThe restriction is checked for each processed chunk of data.\n\n:::note\nThis setting is unstable with `prefer_localhost_replica=1`.\n::: \N \N 0 UInt64 0 0 Production | |
max_bytes_to_read_leaf 0 0 The maximum number of bytes (of uncompressed data) that can be read from a local\ntable on a leaf node when running a distributed query. While distributed queries\ncan issue multiple sub-queries to each shard (leaf) - this limit will\nbe checked only on the read stage on the leaf nodes and will be ignored on the\nmerging of results stage on the root node.\n\nFor example, a cluster consists of 2 shards and each shard contains a table with\n100 bytes of data. A distributed query which is supposed to read all the data\nfrom both tables with setting `max_bytes_to_read=150` will fail as in total it\nwill be 200 bytes. A query with `max_bytes_to_read_leaf=150` will succeed since\nleaf nodes will read 100 bytes at max.\n\nThe restriction is checked for each processed chunk of data.\n\n:::note\nThis setting is unstable with `prefer_localhost_replica=1`.\n::: \N \N 0 UInt64 0 0 Production | |
read_overflow_mode_leaf throw 0 Sets what happens when the volume of data read exceeds one of the leaf limits.\n\nPossible options:\n- `throw`: throw an exception (default).\n- `break`: stop executing the query and return the partial result. \N \N 0 OverflowMode throw 0 Production | |
max_rows_to_group_by 0 0 The maximum number of unique keys received from aggregation. This setting lets\nyou limit memory consumption when aggregating.\n\nIf aggregation during GROUP BY is generating more than the specified number of\nrows (unique GROUP BY keys), the behavior will be determined by the\n\'group_by_overflow_mode\' which by default is `throw`, but can be also switched\nto an approximate GROUP BY mode. \N \N 0 UInt64 0 0 Production | |
group_by_overflow_mode throw 0 Sets what happens when the number of unique keys for aggregation exceeds the limit:\n- `throw`: throw an exception\n- `break`: stop executing the query and return the partial result\n- `any`: continue aggregation for the keys that got into the set, but do not add new keys to the set.\n\nUsing the \'any\' value lets you run an approximation of GROUP BY. The quality of\nthis approximation depends on the statistical nature of the data. \N \N 0 OverflowModeGroupBy throw 0 Production | |
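A sketch of the approximate GROUP BY mode described above, combining `max_rows_to_group_by` with `group_by_overflow_mode = 'any'` (the limit is chosen arbitrarily):

```sql
-- Stop adding new aggregation keys after 100000 unique keys; keys already in
-- the set continue to be aggregated, giving an approximate GROUP BY result.
SET max_rows_to_group_by = 100000, group_by_overflow_mode = 'any';

SELECT number % 1000000 AS k, count() AS c
FROM numbers(10000000)
GROUP BY k
FORMAT Null;
```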
max_bytes_before_external_group_by 0 0 Cloud default value: half the memory amount per replica.\n\nEnables or disables execution of `GROUP BY` clauses in external memory.\n(See [GROUP BY in external memory](/sql-reference/statements/select/group-by#group-by-in-external-memory))\n\nPossible values:\n\n- Maximum volume of RAM (in bytes) that can be used by the single [GROUP BY](/sql-reference/statements/select/group-by) operation.\n- `0` — `GROUP BY` in external memory disabled.\n\n:::note\nIf memory usage during GROUP BY operations is exceeding this threshold in bytes,\nactivate the \'external aggregation\' mode (spill data to disk).\n\nThe recommended value is half of the available system memory.\n::: \N \N 0 UInt64 0 0 Production | |
max_bytes_ratio_before_external_group_by 0.5 0 The ratio of available memory that is allowed for `GROUP BY`. Once reached,\nexternal memory is used for aggregation.\n\nFor example, if set to `0.6`, `GROUP BY` will allow using 60% of the available memory\n(to server/user/merges) at the beginning of the execution, after that, it will\nstart using external aggregation. \N \N 0 Double 0.5 0 Production | |
max_rows_to_sort 0 0 The maximum number of rows before sorting. This allows you to limit memory consumption when sorting.\nIf more than the specified amount of records have to be processed for the ORDER BY operation,\nthe behavior will be determined by the `sort_overflow_mode` which by default is set to `throw`. \N \N 0 UInt64 0 0 Production | |
max_bytes_to_sort 0 0 The maximum number of bytes before sorting. If more than the specified amount of\nuncompressed bytes have to be processed for ORDER BY operation, the behavior will\nbe determined by the `sort_overflow_mode` which by default is set to `throw`. \N \N 0 UInt64 0 0 Production | |
sort_overflow_mode throw 0 Sets what happens if the number of rows received before sorting exceeds one of the limits.\n\nPossible values:\n- `throw`: throw an exception.\n- `break`: stop executing the query and return the partial result. \N \N 0 OverflowMode throw 0 Production | |
prefer_external_sort_block_bytes 16744704 0 Prefer maximum block bytes for external sort, reducing the memory usage during merging. \N \N 0 UInt64 16744704 0 Production | |
max_bytes_before_external_sort 0 0 Cloud default value: half the memory amount per replica.\n\nEnables or disables execution of `ORDER BY` clauses in external memory. See [ORDER BY Implementation Details](../../sql-reference/statements/select/order-by.md#implementation-details)\nIf memory usage during ORDER BY operation exceeds this threshold in bytes, the \'external sorting\' mode (spill data to disk) is activated.\n\nPossible values:\n\n- Maximum volume of RAM (in bytes) that can be used by the single [ORDER BY](../../sql-reference/statements/select/order-by.md) operation.\n The recommended value is half of available system memory\n- `0` — `ORDER BY` in external memory disabled. \N \N 0 UInt64 0 0 Production | |
max_bytes_ratio_before_external_sort 0.5 0 The ratio of available memory that is allowed for `ORDER BY`. Once reached, external sort is used.\n\nFor example, if set to `0.6`, `ORDER BY` will allow using `60%` of available memory (to server/user/merges) at the beginning of the execution, after that, it will start using external sort.\n\nNote that `max_bytes_before_external_sort` is still respected; spilling to disk will be done only if the sorting block is bigger than `max_bytes_before_external_sort`. \N \N 0 Double 0.5 0 Production | |
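A sketch of spilling a large sort to disk under the two settings above; the threshold is chosen arbitrarily, and setting the ratio to 0 is assumed to disable the ratio-based check:

```sql
-- Spill ORDER BY state to disk once it grows past ~1 GB,
-- independent of the available-memory ratio (assumed disabled at 0).
SET max_bytes_before_external_sort = 1000000000,
    max_bytes_ratio_before_external_sort = 0;

SELECT number
FROM numbers(200000000)
ORDER BY number DESC
FORMAT Null;
```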
max_bytes_before_remerge_sort 1000000000 0 In case of ORDER BY with LIMIT, when memory usage is higher than specified threshold, perform additional steps of merging blocks before final merge to keep just top LIMIT rows. \N \N 0 UInt64 1000000000 0 Production | |
remerge_sort_lowered_memory_bytes_ratio 2 0 If memory usage after remerge is not reduced by this ratio, remerge will be disabled. \N \N 0 Float 2 0 Production | |
max_result_rows 0 0 Cloud default value: `0`.\n\nLimits the number of rows in the result. Also checked for subqueries, and on remote servers when running parts of a distributed query.\nNo limit is applied when the value is `0`.\n\nThe query will stop after processing a block of data if the threshold is met, but\nit will not cut the last block of the result, therefore the result size can be\nlarger than the threshold. \N \N 0 UInt64 0 0 Production | |
max_result_bytes 0 0 Limits the result size in bytes (uncompressed). The query will stop after processing a block of data if the threshold is met,\nbut it will not cut the last block of the result, therefore the result size can be larger than the threshold.\n\n**Caveats**\n\nThe result size in memory is taken into account for this threshold.\nEven if the result size is small, it can reference larger data structures in memory,\nrepresenting dictionaries of LowCardinality columns, and Arenas of AggregateFunction columns,\nso the threshold can be exceeded despite the small result size.\n\n:::warning\nThe setting is fairly low level and should be used with caution\n::: \N \N 0 UInt64 0 0 Production | |
result_overflow_mode throw 0 Cloud default value: `throw`\n\nSets what to do if the volume of the result exceeds one of the limits.\n\nPossible values:\n- `throw`: throw an exception (default).\n- `break`: stop executing the query and return the partial result, as if the\n  source data ran out.\n\nUsing \'break\' is similar to using LIMIT. `Break` interrupts execution only at the\nblock level. This means that the number of rows returned is greater than\n[`max_result_rows`](/operations/settings/settings#max_result_rows), is a multiple of [`max_block_size`](/operations/settings/settings#max_block_size),\nand depends on [`max_threads`](/operations/settings/settings#max_threads).\n\n**Example**\n\n```sql title="Query"\nSET max_threads = 3, max_block_size = 3333;\nSET max_result_rows = 3334, result_overflow_mode = \'break\';\n\nSELECT *\nFROM numbers_mt(100000)\nFORMAT Null;\n```\n\n```text title="Result"\n6666 rows in set. ...\n``` \N \N 0 OverflowMode throw 0 Production | |
max_execution_time 0 0 The maximum query execution time in seconds.\n\nThe `max_execution_time` parameter can be a bit tricky to understand.\nIt operates based on interpolation relative to the current query execution speed\n(this behaviour is controlled by [`timeout_before_checking_execution_speed`](/operations/settings/settings#timeout_before_checking_execution_speed)).\n\nClickHouse will interrupt a query if the projected execution time exceeds the\nspecified `max_execution_time`. By default, the `timeout_before_checking_execution_speed`\nis set to 10 seconds. This means that after 10 seconds of query execution, ClickHouse\nwill begin estimating the total execution time. If, for example, `max_execution_time`\nis set to 3600 seconds (1 hour), ClickHouse will terminate the query if the estimated\ntime exceeds this 3600-second limit. If you set `timeout_before_checking_execution_speed`\nto 0, ClickHouse will use the clock time as the basis for `max_execution_time`.\n\nIf query runtime exceeds the specified number of seconds, the behavior will be\ndetermined by the \'timeout_overflow_mode\', which by default is set to `throw`.\n\n:::note\nThe timeout is checked and the query can stop only in designated places during data processing.\nIt currently cannot stop during merging of aggregation states or during query analysis,\nand the actual run time will be higher than the value of this setting.\n::: \N \N 0 Seconds 0 0 Production | |
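A short sketch of the interaction between `max_execution_time` and `timeout_before_checking_execution_speed` described above:

```sql
-- Terminate the query if the *projected* total runtime exceeds one hour;
-- the projection starts after 10 seconds, the default of
-- timeout_before_checking_execution_speed.
SET max_execution_time = 3600;

-- Alternatively, use plain wall-clock time as the basis for the limit.
SET timeout_before_checking_execution_speed = 0;
```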
timeout_overflow_mode throw 0 Sets what to do if the query is run longer than the `max_execution_time` or the\nestimated running time is longer than `max_estimated_execution_time`.\n\nPossible values:\n- `throw`: throw an exception (default).\n- `break`: stop executing the query and return the partial result, as if the\nsource data ran out. \N \N 0 OverflowMode throw 0 Production | |
max_execution_time_leaf 0 0 Similar semantically to [`max_execution_time`](#max_execution_time) but only\napplied on leaf nodes for distributed or remote queries.\n\nFor example, if we want to limit the execution time on a leaf node to `10s` but\nhave no limit on the initial node, instead of having `max_execution_time` in the\nnested subquery settings:\n\n```sql\nSELECT count()\nFROM cluster(cluster, view(SELECT * FROM t SETTINGS max_execution_time = 10));\n```\n\nWe can use `max_execution_time_leaf` as the query settings:\n\n```sql\nSELECT count()\nFROM cluster(cluster, view(SELECT * FROM t)) SETTINGS max_execution_time_leaf = 10;\n``` \N \N 0 Seconds 0 0 Production | |
timeout_overflow_mode_leaf throw 0 Sets what happens when the query on a leaf node runs longer than `max_execution_time_leaf`.\n\nPossible values:\n- `throw`: throw an exception (default).\n- `break`: stop executing the query and return the partial result, as if the\nsource data ran out. \N \N 0 OverflowMode throw 0 Production | |
min_execution_speed 0 0 Minimal execution speed in rows per second. Checked on every data block when\n[`timeout_before_checking_execution_speed`](/operations/settings/settings#timeout_before_checking_execution_speed)\nexpires. If the execution speed is lower, an exception is thrown. \N \N 0 UInt64 0 0 Production | |
max_execution_speed 0 0 The maximum execution speed in rows per second. Checked on every data block when\n[`timeout_before_checking_execution_speed`](/operations/settings/settings#timeout_before_checking_execution_speed)\nexpires. If the execution speed is higher, it will be reduced. \N \N 0 UInt64 0 0 Production | |
min_execution_speed_bytes 0 0 The minimum execution speed in bytes per second. Checked on every data block when\n[`timeout_before_checking_execution_speed`](/operations/settings/settings#timeout_before_checking_execution_speed)\nexpires. If the execution speed is lower, an exception is thrown. \N \N 0 UInt64 0 0 Production | |
max_execution_speed_bytes 0 0 The maximum execution speed in bytes per second. Checked on every data block when\n[`timeout_before_checking_execution_speed`](/operations/settings/settings#timeout_before_checking_execution_speed)\nexpires. If the execution speed is higher, it will be reduced. \N \N 0 UInt64 0 0 Production | |
timeout_before_checking_execution_speed 10 0 Checks that execution speed is not too slow (no less than `min_execution_speed`),\nafter the specified time in seconds has expired. \N \N 0 Seconds 10 0 Production | |
max_estimated_execution_time 0 0 Maximum query estimate execution time in seconds. Checked on every data block\nwhen [`timeout_before_checking_execution_speed`](/operations/settings/settings#timeout_before_checking_execution_speed)\nexpires. \N \N 0 Seconds 0 0 Production | |
max_columns_to_read 0 0 The maximum number of columns that can be read from a table in a single query.\nIf a query requires reading more than the specified number of columns, an exception\nis thrown.\n\n:::tip\nThis setting is useful for preventing overly complex queries.\n:::\n\n`0` value means unlimited. \N \N 0 UInt64 0 0 Production | |
max_temporary_columns 0 0 The maximum number of temporary columns that must be kept in RAM simultaneously\nwhen running a query, including constant columns. If a query generates more than\nthe specified number of temporary columns in memory as a result of intermediate\ncalculation, then an exception is thrown.\n\n:::tip\nThis setting is useful for preventing overly complex queries.\n:::\n\n`0` value means unlimited. \N \N 0 UInt64 0 0 Production | |
max_temporary_non_const_columns 0 0 Like `max_temporary_columns`, the maximum number of temporary columns that must\nbe kept in RAM simultaneously when running a query, but without counting constant\ncolumns.\n\n:::note\nConstant columns are formed fairly often when running a query, but they require\napproximately zero computing resources.\n::: \N \N 0 UInt64 0 0 Production | |
max_sessions_for_user 0 0 Maximum number of simultaneous sessions per authenticated user to the ClickHouse server.\n\nExample:\n\n```xml\n<profiles>\n <single_session_profile>\n <max_sessions_for_user>1</max_sessions_for_user>\n </single_session_profile>\n <two_sessions_profile>\n <max_sessions_for_user>2</max_sessions_for_user>\n </two_sessions_profile>\n <unlimited_sessions_profile>\n <max_sessions_for_user>0</max_sessions_for_user>\n </unlimited_sessions_profile>\n</profiles>\n<users>\n <!-- User Alice can connect to a ClickHouse server no more than once at a time. -->\n <Alice>\n <profile>single_session_profile</profile>\n </Alice>\n <!-- User Bob can use 2 simultaneous sessions. -->\n <Bob>\n <profile>two_sessions_profile</profile>\n </Bob>\n <!-- User Charles can use arbitrarily many simultaneous sessions. -->\n <Charles>\n <profile>unlimited_sessions_profile</profile>\n </Charles>\n</users>\n```\n\nPossible values:\n- Positive integer\n- `0` - infinite count of simultaneous sessions (default) \N \N 0 UInt64 0 0 Production | |
max_subquery_depth 100 0 If a query has more than the specified number of nested subqueries, throws an\nexception.\n\n:::tip\nThis provides a sanity check to protect the users of your cluster from writing\noverly complex queries.\n::: \N \N 0 UInt64 100 0 Production | |
max_analyze_depth 5000 0 Maximum number of analyses performed by interpreter. \N \N 0 UInt64 5000 0 Production | |
max_ast_depth 1000 0 The maximum nesting depth of a query syntactic tree. If exceeded, an exception is thrown.\n\n:::note\nAt this time, it isn\'t checked during parsing, but only after parsing the query.\nThis means that a syntactic tree that is too deep can be created during parsing,\nbut the query will fail.\n::: \N \N 0 UInt64 1000 0 Production | |
max_ast_elements 50000 0 The maximum number of elements in a query syntactic tree. If exceeded, an exception is thrown.\n\n:::note\nAt this time, it isn\'t checked during parsing, but only after parsing the query.\nThis means that a syntactic tree that is too deep can be created during parsing,\nbut the query will fail.\n::: \N \N 0 UInt64 50000 0 Production | |
max_expanded_ast_elements 500000 0 Maximum size of query syntax tree in number of nodes after expansion of aliases and the asterisk. \N \N 0 UInt64 500000 0 Production | |
readonly 0 0 0 - no read-only restrictions. 1 - only read requests, as well as changing explicitly allowed settings. 2 - only read requests, as well as changing settings, except for the \'readonly\' setting. \N \N 0 UInt64 0 0 Production | |
max_rows_in_set 0 0 The maximum number of rows for a data set in the IN clause created from a subquery. \N \N 0 UInt64 0 0 Production | |
max_bytes_in_set 0 0 The maximum number of bytes (of uncompressed data) used by a set in the IN clause\ncreated from a subquery. \N \N 0 UInt64 0 0 Production | |
set_overflow_mode throw 0 Sets what happens when the amount of data exceeds one of the limits.\n\nPossible values:\n- `throw`: throw an exception (default).\n- `break`: stop executing the query and return the partial result, as if the\nsource data ran out. \N \N 0 OverflowMode throw 0 Production | |
max_rows_in_join 0 0 Limits the number of rows in the hash table that is used when joining tables.\n\nThis setting applies to [SELECT ... JOIN](/sql-reference/statements/select/join)\noperations and the [Join](/engines/table-engines/special/join) table engine.\n\nIf a query contains multiple joins, ClickHouse checks this setting for every intermediate result.\n\nClickHouse can proceed with different actions when the limit is reached. Use the\n[`join_overflow_mode`](/operations/settings/settings#join_overflow_mode) setting to choose the action.\n\nPossible values:\n\n- Positive integer.\n- `0` — Unlimited number of rows. \N \N 0 UInt64 0 0 Production | |
max_bytes_in_join 0 0 The maximum size in number of bytes of the hash table used when joining tables.\n\nThis setting applies to [SELECT ... JOIN](/sql-reference/statements/select/join)\noperations and the [Join table engine](/engines/table-engines/special/join).\n\nIf the query contains joins, ClickHouse checks this setting for every intermediate result.\n\nClickHouse can proceed with different actions when the limit is reached. Use\nthe [join_overflow_mode](/operations/settings/settings#join_overflow_mode) settings to choose the action.\n\nPossible values:\n\n- Positive integer.\n- 0 — Memory control is disabled. \N \N 0 UInt64 0 0 Production | |
join_overflow_mode throw 0 Defines what action ClickHouse performs when any of the following join limits is reached:\n\n- [max_bytes_in_join](/operations/settings/settings#max_bytes_in_join)\n- [max_rows_in_join](/operations/settings/settings#max_rows_in_join)\n\nPossible values:\n\n- `THROW` — ClickHouse throws an exception and breaks operation.\n- `BREAK` — ClickHouse breaks operation and does not throw an exception.\n\nDefault value: `THROW`.\n\n**See Also**\n\n- [JOIN clause](/sql-reference/statements/select/join)\n- [Join table engine](/engines/table-engines/special/join) \N \N 0 OverflowMode throw 0 Production | |
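A sketch of the three join-limit settings above working together (limits chosen arbitrarily for illustration):

```sql
-- Cap the right-hand hash table at one million rows and return a partial
-- join result instead of failing when the cap is hit.
SET max_rows_in_join = 1000000, join_overflow_mode = 'break';

SELECT count()
FROM numbers(10) AS l
INNER JOIN numbers(5000000) AS r USING (number);
```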
join_any_take_last_row 0 0 Changes the behaviour of join operations with `ANY` strictness.\n\n:::note\nThis setting applies only for `JOIN` operations with [Join](../../engines/table-engines/special/join.md) engine tables.\n:::\n\nPossible values:\n\n- 0 — If the right table has more than one matching row, only the first one found is joined.\n- 1 — If the right table has more than one matching row, only the last one found is joined.\n\nSee also:\n\n- [JOIN clause](/sql-reference/statements/select/join)\n- [Join table engine](../../engines/table-engines/special/join.md)\n- [join_default_strictness](#join_default_strictness) \N \N 0 Bool 0 0 Production | |
join_algorithm direct,parallel_hash,hash 0 Specifies which [JOIN](../../sql-reference/statements/select/join.md) algorithm is used.\n\nSeveral algorithms can be specified, and an available one would be chosen for a particular query based on kind/strictness and table engine.\n\nPossible values:\n\n- grace_hash\n\n [Grace hash join](https://en.wikipedia.org/wiki/Hash_join#Grace_hash_join) is used. Grace hash provides an algorithm option that provides performant complex joins while limiting memory use.\n\n The first phase of a grace join reads the right table and splits it into N buckets depending on the hash value of key columns (initially, N is `grace_hash_join_initial_buckets`). This is done in a way to ensure that each bucket can be processed independently. Rows from the first bucket are added to an in-memory hash table while the others are saved to disk. If the hash table grows beyond the memory limit (e.g., as set by [`max_bytes_in_join`](/operations/settings/settings#max_bytes_in_join)), the number of buckets is increased and the assigned bucket is recomputed for each row. Any rows which don\'t belong to the current bucket are flushed and reassigned.\n\n Supports `INNER/LEFT/RIGHT/FULL ALL/ANY JOIN`.\n\n- hash\n\n [Hash join algorithm](https://en.wikipedia.org/wiki/Hash_join) is used. The most generic implementation that supports all combinations of kind and strictness and multiple join keys that are combined with `OR` in the `JOIN ON` section.\n\n When using the `hash` algorithm, the right part of `JOIN` is uploaded into RAM.\n\n- parallel_hash\n\n A variation of `hash` join that splits the data into buckets and builds several hashtables instead of one concurrently to speed up this process.\n\n When using the `parallel_hash` algorithm, the right part of `JOIN` is uploaded into RAM.\n\n- partial_merge\n\n A variation of the [sort-merge algorithm](https://en.wikipedia.org/wiki/Sort-merge_join), where only the right table is fully sorted.\n\n The `RIGHT JOIN` and `FULL JOIN` are supported only with `ALL` strictness (`SEMI`, `ANTI`, `ANY`, and `ASOF` are not supported).\n\n When using the `partial_merge` algorithm, ClickHouse sorts the data and dumps it to the disk. The `partial_merge` algorithm in ClickHouse differs slightly from the classic realization. First, ClickHouse sorts the right table by joining keys in blocks and creates a min-max index for sorted blocks. Then it sorts parts of the left table by the `join key` and joins them over the right table. The min-max index is also used to skip unneeded right table blocks.\n\n- direct\n\n This algorithm can be applied when the storage for the right table supports key-value requests.\n\n The `direct` algorithm performs a lookup in the right table using rows from the left table as keys. It\'s supported only by special storage such as [Dictionary](/engines/table-engines/special/dictionary) or [EmbeddedRocksDB](../../engines/table-engines/integrations/embedded-rocksdb.md) and only the `LEFT` and `INNER` JOINs.\n\n- auto\n\n When set to `auto`, `hash` join is tried first, and the algorithm is switched on the fly to another algorithm if the memory limit is violated.\n\n- full_sorting_merge\n\n [Sort-merge algorithm](https://en.wikipedia.org/wiki/Sort-merge_join) with full sorting of the joined tables before joining.\n\n- prefer_partial_merge\n\n ClickHouse always tries to use `partial_merge` join if possible, otherwise, it uses `hash`. *Deprecated*, same as `partial_merge,hash`.\n\n- default (deprecated)\n\n Legacy value, please don\'t use it anymore.\n Same as `direct,hash`, i.e. try to use direct join and hash join (in this order).\n \N \N 0 JoinAlgorithm direct,parallel_hash,hash 0 Production | |
cross_join_min_rows_to_compress 10000000 0 Minimal number of rows to compress a block in CROSS JOIN. A zero value disables this threshold. The block is compressed when either of the two thresholds (by rows or by bytes) is reached. \N \N 0 UInt64 10000000 0 Production | |
cross_join_min_bytes_to_compress 1073741824 0 Minimal size of a block to compress in CROSS JOIN. A zero value disables this threshold. The block is compressed when either of the two thresholds (by rows or by bytes) is reached. \N \N 0 UInt64 1073741824 0 Production | |
default_max_bytes_in_join 1000000000 0 Maximum size of right-side table if limit is required but `max_bytes_in_join` is not set. \N \N 0 UInt64 1000000000 0 Production | |
partial_merge_join_left_table_buffer_bytes 0 0 If not 0, group blocks of the left-side table into bigger ones in partial merge join. It uses up to 2x of the specified memory per joining thread. \N \N 0 UInt64 0 0 Production | |
partial_merge_join_rows_in_right_blocks 65536 0 Limits sizes of right-hand join data blocks in partial merge join algorithm for [JOIN](../../sql-reference/statements/select/join.md) queries.\n\nClickHouse server:\n\n1. Splits right-hand join data into blocks with up to the specified number of rows.\n2. Indexes each block with its minimum and maximum values.\n3. Unloads prepared blocks to disk if it is possible.\n\nPossible values:\n\n- Any positive integer. Recommended range of values: \\[1000, 100000\\]. \N \N 0 UInt64 65536 0 Production | |
join_on_disk_max_files_to_merge 64 0 Limits the number of files allowed for parallel sorting in MergeJoin operations when they are executed on disk.\n\nThe bigger the value of the setting, the more RAM is used and the less disk I/O is needed.\n\nPossible values:\n\n- Any positive integer, starting from 2. \N \N 0 UInt64 64 0 Production | |
max_rows_in_set_to_optimize_join 0 0 Maximal size of the set to filter joined tables by each other\'s row sets before joining.\n\nPossible values:\n\n- 0 — Disable.\n- Any positive integer. \N \N 0 UInt64 0 0 Production | |
compatibility_ignore_collation_in_create_table 1 0 For compatibility, ignore collation in CREATE TABLE. \N \N 0 Bool 1 0 Production | |
temporary_files_codec LZ4 0 Sets compression codec for temporary files used in sorting and joining operations on disk.\n\nPossible values:\n\n- LZ4 — [LZ4](https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)) compression is applied.\n- NONE — No compression is applied. \N \N 0 String LZ4 0 Production | |
max_rows_to_transfer 0 0 Maximum size (in rows) that can be passed to a remote server or saved in a\ntemporary table when the GLOBAL IN/JOIN section is executed. \N \N 0 UInt64 0 0 Production | |
max_bytes_to_transfer 0 0 The maximum number of bytes (uncompressed data) that can be passed to a remote\nserver or saved in a temporary table when the GLOBAL IN/JOIN section is executed. \N \N 0 UInt64 0 0 Production | |
transfer_overflow_mode throw 0 Sets what happens when the amount of data exceeds one of the limits.\n\nPossible values:\n- `throw`: throw an exception (default).\n- `break`: stop executing the query and return the partial result, as if the\nsource data ran out. \N \N 0 OverflowMode throw 0 Production | |
max_rows_in_distinct 0 0 The maximum number of different rows when using DISTINCT. \N \N 0 UInt64 0 0 Production | |
max_bytes_in_distinct 0 0 The maximum number of bytes of the state (in uncompressed bytes) in memory, which\nis used by a hash table when using DISTINCT. \N \N 0 UInt64 0 0 Production | |
distinct_overflow_mode throw 0 Sets what happens when the amount of data exceeds one of the limits.\n\nPossible values:\n- `throw`: throw an exception (default).\n- `break`: stop executing the query and return the partial result, as if the\nsource data ran out. \N \N 0 OverflowMode throw 0 Production | |
max_memory_usage 0 0 Cloud default value: depends on the amount of RAM on the replica.\n\nThe maximum amount of RAM to use for running a query on a single server.\nA value of `0` means unlimited.\n\nThis setting does not consider the volume of available memory or the total volume\nof memory on the machine. The restriction applies to a single query within a\nsingle server.\n\nYou can use `SHOW PROCESSLIST` to see the current memory consumption for each query.\nPeak memory consumption is tracked for each query and written to the log.\n\nMemory usage is not fully tracked for states of the following aggregate functions\nfrom `String` and `Array` arguments:\n- `min`\n- `max`\n- `any`\n- `anyLast`\n- `argMin`\n- `argMax`\n\nMemory consumption is also restricted by the parameters [`max_memory_usage_for_user`](/operations/settings/settings#max_memory_usage_for_user)\nand [`max_server_memory_usage`](/operations/server-configuration-parameters/settings#max_server_memory_usage). \N \N 0 UInt64 0 0 Production | |
memory_overcommit_ratio_denominator 1073741824 0 It represents the soft memory limit when the hard limit is reached on the global level.\nThis value is used to compute the overcommit ratio for the query.\nZero means skip the query.\nRead more about [memory overcommit](memory-overcommit.md). \N \N 0 UInt64 1073741824 0 Production | |
max_memory_usage_for_user 0 0 The maximum amount of RAM to use for running a user\'s queries on a single server. Zero means unlimited.\n\nBy default, the amount is not restricted (`max_memory_usage_for_user = 0`).\n\nAlso see the description of [`max_memory_usage`](/operations/settings/settings#max_memory_usage).\n\nFor example if you want to set `max_memory_usage_for_user` to 1000 bytes for a user named `clickhouse_read`, you can use the statement\n\n```sql\nALTER USER clickhouse_read SETTINGS max_memory_usage_for_user = 1000;\n```\n\nYou can verify it worked by logging out of your client, logging back in, then use the `getSetting` function:\n\n```sql\nSELECT getSetting(\'max_memory_usage_for_user\');\n``` \N \N 0 UInt64 0 0 Production | |
memory_overcommit_ratio_denominator_for_user 1073741824 0 It represents the soft memory limit when the hard limit is reached on the user level.\nThis value is used to compute the overcommit ratio for the query.\nZero means skip the query.\nRead more about [memory overcommit](memory-overcommit.md). \N \N 0 UInt64 1073741824 0 Production | |
max_untracked_memory 4194304 0 Small allocations and deallocations are grouped in thread local variable and tracked or profiled only when an amount (in absolute value) becomes larger than the specified value. If the value is higher than \'memory_profiler_step\' it will be effectively lowered to \'memory_profiler_step\'. \N \N 0 UInt64 4194304 0 Production | |
memory_profiler_step 4194304 0 Sets the step of memory profiler. Whenever query memory usage becomes larger than every next step in number of bytes the memory profiler will collect the allocating stacktrace and will write it into [trace_log](/operations/system-tables/trace_log).\n\nPossible values:\n\n- A positive integer number of bytes.\n\n- 0 for turning off the memory profiler. \N \N 0 UInt64 4194304 0 Production | |
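A minimal sketch of memory profiling with the step above; the query is only a stand-in for something memory-hungry:

```sql
-- Record an allocation stack trace every time query memory grows by another 64 MiB.
SET memory_profiler_step = 67108864;

SELECT uniqExact(number) FROM numbers(50000000) FORMAT Null;

SYSTEM FLUSH LOGS;
SELECT count()
FROM system.trace_log
WHERE trace_type = 'Memory' AND event_date = today();
```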
memory_profiler_sample_probability 0 0 Collect random allocations and deallocations and write them into system.trace_log with \'MemorySample\' trace_type. The probability is for every alloc/free regardless of the size of the allocation (can be changed with `memory_profiler_sample_min_allocation_size` and `memory_profiler_sample_max_allocation_size`). Note that sampling happens only when the amount of untracked memory exceeds \'max_untracked_memory\'. You may want to set \'max_untracked_memory\' to 0 for extra fine-grained sampling. \N \N 0 Float 0 0 Production | |
memory_profiler_sample_min_allocation_size 0 0 Collect random allocations of size greater than or equal to the specified value with probability equal to `memory_profiler_sample_probability`. 0 means disabled. You may want to set \'max_untracked_memory\' to 0 to make this threshold work as expected. \N \N 0 UInt64 0 0 Production | |
memory_profiler_sample_max_allocation_size 0 0 Collect random allocations of size less than or equal to the specified value with probability equal to `memory_profiler_sample_probability`. 0 means disabled. You may want to set \'max_untracked_memory\' to 0 to make this threshold work as expected. \N \N 0 UInt64 0 0 Production | |
trace_profile_events 0 0 Enables or disables collecting stacktraces on each update of profile events along with the name of profile event and the value of increment and sending them into [trace_log](/operations/system-tables/trace_log).\n\nPossible values:\n\n- 1 — Tracing of profile events enabled.\n- 0 — Tracing of profile events disabled. \N \N 0 Bool 0 0 Production | |
memory_usage_overcommit_max_wait_microseconds 5000000 0 Maximum time thread will wait for memory to be freed in the case of memory overcommit on a user level.\nIf the timeout is reached and memory is not freed, an exception is thrown.\nRead more about [memory overcommit](memory-overcommit.md). \N \N 0 UInt64 5000000 0 Production | |
max_network_bandwidth 0 0 Limits the speed of the data exchange over the network in bytes per second. This setting applies to every query.\n\nPossible values:\n\n- Positive integer.\n- 0 — Bandwidth control is disabled. \N \N 0 UInt64 0 0 Production | |
max_network_bytes 0 0 Limits the data volume (in bytes) that is received or transmitted over the network when executing a query. This setting applies to every individual query.\n\nPossible values:\n\n- Positive integer.\n- 0 — Data volume control is disabled. \N \N 0 UInt64 0 0 Production | |
max_network_bandwidth_for_user 0 0 Limits the speed of the data exchange over the network in bytes per second. This setting applies to all concurrently running queries performed by a single user.\n\nPossible values:\n\n- Positive integer.\n- 0 — Control of the data speed is disabled. \N \N 0 UInt64 0 0 Production | |
max_network_bandwidth_for_all_users 0 0 Limits the speed at which data is exchanged over the network, in bytes per second. This setting applies to all concurrently running queries on the server.\n\nPossible values:\n\n- Positive integer.\n- 0 — Control of the data speed is disabled. \N \N 0 UInt64 0 0 Production | |
max_temporary_data_on_disk_size_for_user 0 0 The maximum amount of data consumed by temporary files on disk in bytes for all\nconcurrently running user queries.\n\nPossible values:\n\n- Positive integer.\n- `0` — unlimited (default) \N \N 0 UInt64 0 0 Production | |
max_temporary_data_on_disk_size_for_query 0 0 The maximum amount of data consumed by temporary files on disk in bytes for all\nconcurrently running queries.\n\nPossible values:\n\n- Positive integer.\n- `0` — unlimited (default) \N \N 0 UInt64 0 0 Production | |
backup_restore_keeper_max_retries 1000 0 Max retries for [Zoo]Keeper operations in the middle of a BACKUP or RESTORE operation.\nShould be big enough so the whole operation won\'t fail because of a temporary [Zoo]Keeper failure. \N \N 0 UInt64 1000 0 Production | |
backup_restore_keeper_retry_initial_backoff_ms 100 0 Initial backoff timeout for [Zoo]Keeper operations during backup or restore \N \N 0 UInt64 100 0 Production | |
backup_restore_keeper_retry_max_backoff_ms 5000 0 Max backoff timeout for [Zoo]Keeper operations during backup or restore \N \N 0 UInt64 5000 0 Production | |
backup_restore_failure_after_host_disconnected_for_seconds 3600 0 If a host during a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation doesn\'t recreate its ephemeral \'alive\' node in ZooKeeper for this amount of time, then the whole backup or restore is considered failed.\nThis value should be bigger than any reasonable time for a host to reconnect to ZooKeeper after a failure.\nZero means unlimited. \N \N 0 UInt64 3600 0 Production | |
backup_restore_keeper_max_retries_while_initializing 20 0 Max retries for [Zoo]Keeper operations during the initialization of a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation. \N \N 0 UInt64 20 0 Production | |
backup_restore_keeper_max_retries_while_handling_error 20 0 Max retries for [Zoo]Keeper operations while handling an error of a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation. \N \N 0 UInt64 20 0 Production | |
backup_restore_finish_timeout_after_error_sec 180 0 How long the initiator should wait for other hosts to react to the \'error\' node and stop their work on the current BACKUP ON CLUSTER or RESTORE ON CLUSTER operation. \N \N 0 UInt64 180 0 Production | |
backup_restore_keeper_value_max_size 1048576 0 Maximum size of data of a [Zoo]Keeper\'s node during backup \N \N 0 UInt64 1048576 0 Production | |
backup_restore_batch_size_for_keeper_multi 1000 0 Maximum size of batch for multi request to [Zoo]Keeper during backup or restore \N \N 0 UInt64 1000 0 Production | |
backup_restore_batch_size_for_keeper_multiread 10000 0 Maximum size of batch for multiread request to [Zoo]Keeper during backup or restore \N \N 0 UInt64 10000 0 Production | |
backup_restore_keeper_fault_injection_probability 0 0 Approximate probability of failure for a keeper request during backup or restore. Valid value is in interval [0.0f, 1.0f] \N \N 0 Float 0 0 Production | |
backup_restore_keeper_fault_injection_seed 0 0 0 - random seed, otherwise the setting value \N \N 0 UInt64 0 0 Production | |
backup_restore_s3_retry_attempts 1000 0 Setting for Aws::Client::RetryStrategy; Aws::Client does retries itself, 0 means no retries. It applies only to backup/restore. \N \N 0 UInt64 1000 0 Production | |
max_backup_bandwidth 0 0 The maximum read speed in bytes per second for a particular backup on the server. Zero means unlimited. \N \N 0 UInt64 0 0 Production | |
restore_replicated_merge_tree_to_shared_merge_tree 0 0 Replace table engine from Replicated*MergeTree -> Shared*MergeTree during RESTORE. \N \N 0 Bool 0 0 Production | |
log_profile_events 1 0 Log query performance statistics into the query_log, query_thread_log and query_views_log. \N \N 0 Bool 1 0 Production | |
log_query_settings 1 0 Log query settings into the query_log and OpenTelemetry span log. \N \N 0 Bool 1 0 Production | |
log_query_threads 0 0 Setting up query threads logging.\n\nQuery threads log into the [system.query_thread_log](../../operations/system-tables/query_thread_log.md) table. This setting has effect only when [log_queries](#log_queries) is true. Queries\' threads run by ClickHouse with this setup are logged according to the rules in the [query_thread_log](/operations/server-configuration-parameters/settings#query_thread_log) server configuration parameter.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\n**Example**\n\n```text\nlog_query_threads=1\n``` \N \N 0 Bool 0 0 Production | |
log_query_views 1 0 Setting up query views logging.\n\nWhen a query run by ClickHouse with this setting enabled has associated views (materialized or live views), they are logged in the [query_views_log](/operations/server-configuration-parameters/settings#query_views_log) server configuration parameter.\n\nExample:\n\n```text\nlog_query_views=1\n``` \N \N 0 Bool 1 0 Production | |
log_comment 0 Specifies the value for the `log_comment` field of the [system.query_log](../system-tables/query_log.md) table and comment text for the server log.\n\nIt can be used to improve the readability of server logs. Additionally, it helps to select queries related to the test from the `system.query_log` after running [clickhouse-test](../../development/tests.md).\n\nPossible values:\n\n- Any string no longer than [max_query_size](#max_query_size). If the max_query_size is exceeded, the server throws an exception.\n\n**Example**\n\nQuery:\n\n```sql\nSET log_comment = \'log_comment test\', log_queries = 1;\nSELECT 1;\nSYSTEM FLUSH LOGS;\nSELECT type, query FROM system.query_log WHERE log_comment = \'log_comment test\' AND event_date >= yesterday() ORDER BY event_time DESC LIMIT 2;\n```\n\nResult:\n\n```text\n┌─type────────┬─query─────┐\n│ QueryStart │ SELECT 1; │\n│ QueryFinish │ SELECT 1; │\n└─────────────┴───────────┘\n``` \N \N 0 String 0 Production | |
query_metric_log_interval -1 0 The interval in milliseconds at which the [query_metric_log](../../operations/system-tables/query_metric_log.md) for individual queries is collected.\n\nIf set to any negative value, it will take the value `collect_interval_milliseconds` from the [query_metric_log setting](/operations/server-configuration-parameters/settings#query_metric_log) or default to 1000 if not present.\n\nTo disable the collection of a single query, set `query_metric_log_interval` to 0.\n\nDefault value: -1\n \N \N 0 Int64 -1 0 Production | |
send_logs_level fatal 0 Send server text logs with specified minimum level to client. Valid values: \'trace\', \'debug\', \'information\', \'warning\', \'error\', \'fatal\', \'none\' \N \N 0 LogsLevel fatal 0 Production | |
send_logs_source_regexp 0 Send server text logs with specified regexp to match log source name. Empty means all sources. \N \N 0 String 0 Production | |
enable_optimize_predicate_expression 1 0 Turns on predicate pushdown in `SELECT` queries.\n\nPredicate pushdown may significantly reduce network traffic for distributed queries.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\nUsage\n\nConsider the following queries:\n\n1. `SELECT count() FROM test_table WHERE date = \'2018-10-10\'`\n2. `SELECT count() FROM (SELECT * FROM test_table) WHERE date = \'2018-10-10\'`\n\nIf `enable_optimize_predicate_expression = 1`, then the execution time of these queries is equal because ClickHouse applies `WHERE` to the subquery when processing it.\n\nIf `enable_optimize_predicate_expression = 0`, then the execution time of the second query is much longer because the `WHERE` clause applies to all the data after the subquery finishes. \N \N 0 Bool 1 0 Production | |
enable_optimize_predicate_expression_to_final_subquery 1 0 Allow pushing the predicate to the final subquery. \N \N 0 Bool 1 0 Production | |
allow_push_predicate_when_subquery_contains_with 1 0 Allows pushing the predicate when the subquery contains a WITH clause \N \N 0 Bool 1 0 Production | |
allow_push_predicate_ast_for_distributed_subqueries 1 0 Allows pushing the predicate at the AST level for distributed subqueries when the analyzer is enabled \N \N 0 Bool 1 0 Production | |
low_cardinality_max_dictionary_size 8192 0 Sets a maximum size in rows of a shared global dictionary for the [LowCardinality](../../sql-reference/data-types/lowcardinality.md) data type that can be written to a storage file system. This setting prevents issues with RAM in case of unlimited dictionary growth. All the data that can\'t be encoded due to the maximum dictionary size limitation is written by ClickHouse in the ordinary way.\n\nPossible values:\n\n- Any positive integer. \N \N 0 UInt64 8192 0 Production | |
low_cardinality_use_single_dictionary_for_part 0 0 Turns on or off using a single dictionary for the data part.\n\nBy default, the ClickHouse server monitors the size of dictionaries, and if a dictionary overflows, the server starts to write the next one. To prohibit creating several dictionaries, set `low_cardinality_use_single_dictionary_for_part = 1`.\n\nPossible values:\n\n- 1 — Creating several dictionaries for the data part is prohibited.\n- 0 — Creating several dictionaries for the data part is not prohibited. \N \N 0 Bool 0 0 Production | |
decimal_check_overflow 1 0 Check overflow of decimal arithmetic/comparison operations \N \N 0 Bool 1 0 Production | |
allow_custom_error_code_in_throwif 0 0 Enable custom error code in function throwIf(). If true, thrown exceptions may have unexpected error codes. \N \N 0 Bool 0 0 Production | |
prefer_localhost_replica 1 0 Enables/disables preferring the localhost replica when processing distributed queries.\n\nPossible values:\n\n- 1 — ClickHouse always sends a query to the localhost replica if it exists.\n- 0 — ClickHouse uses the balancing strategy specified by the [load_balancing](#load_balancing) setting.\n\n:::note\nDisable this setting if you use [max_parallel_replicas](#max_parallel_replicas) without [parallel_replicas_custom_key](#parallel_replicas_custom_key).\nIf [parallel_replicas_custom_key](#parallel_replicas_custom_key) is set, disable this setting only if it\'s used on a cluster with multiple shards containing multiple replicas.\nIf it\'s used on a cluster with a single shard and multiple replicas, disabling this setting will have negative effects.\n::: \N \N 0 Bool 1 0 Production | |
max_fetch_partition_retries_count 5 0 Amount of retries while fetching partition from another host. \N \N 0 UInt64 5 0 Production | |
http_max_multipart_form_data_size 1073741824 0 Limit on size of multipart/form-data content. This setting cannot be parsed from URL parameters and should be set in a user profile. Note that content is parsed and external tables are created in memory before the start of query execution. And this is the only limit that has an effect on that stage (limits on max memory usage and max execution time have no effect while reading HTTP form data). \N \N 0 UInt64 1073741824 0 Production | |
calculate_text_stack_trace 1 0 Calculate text stack trace in case of exceptions during query execution. This is the default. It requires symbol lookups that may slow down fuzzing tests when a huge number of incorrect queries are executed. In normal cases, you should not disable this option. \N \N 0 Bool 1 0 Production | |
enable_job_stack_trace 0 0 Output stack trace of a job creator when job results in exception. Disabled by default to avoid performance overhead. \N \N 0 Bool 0 0 Production | |
allow_ddl 1 0 If it is set to true, then a user is allowed to execute DDL queries. \N \N 0 Bool 1 0 Production | |
parallel_view_processing 0 0 Enables pushing to attached views concurrently instead of sequentially. \N \N 0 Bool 0 0 Production | |
enable_unaligned_array_join 0 0 Allow ARRAY JOIN with multiple arrays that have different sizes. When this setting is enabled, arrays will be resized to the longest one. \N \N 0 Bool 0 0 Production | |
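A sketch of the unaligned ARRAY JOIN behavior described above: the shorter array is padded with default values up to the length of the longest one:

```sql
SET enable_unaligned_array_join = 1;

-- b has fewer elements than a, so it is padded with empty strings.
SELECT a, b
FROM (SELECT [1, 2, 3] AS a, ['x'] AS b)
ARRAY JOIN a, b;
```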
optimize_read_in_order 1 0 Enables [ORDER BY](/sql-reference/statements/select/order-by#optimization-of-data-reading) optimization in [SELECT](../../sql-reference/statements/select/index.md) queries for reading data from [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables.\n\nPossible values:\n\n- 0 — `ORDER BY` optimization is disabled.\n- 1 — `ORDER BY` optimization is enabled.\n\n**See Also**\n\n- [ORDER BY Clause](/sql-reference/statements/select/order-by#optimization-of-data-reading) \N \N 0 Bool 1 0 Production | |
read_in_order_use_virtual_row 0 0 Use a virtual row while reading in the order of the primary key or a monotonic function of it. It is useful when searching over multiple parts, as only the relevant ones are touched. \N \N 0 Bool 0 0 Production | |
optimize_read_in_window_order 1 0 Enable ORDER BY optimization in window clause for reading data in corresponding order in MergeTree tables. \N \N 0 Bool 1 0 Production | |
optimize_aggregation_in_order 0 0 Enables [GROUP BY](/sql-reference/statements/select/group-by) optimization in [SELECT](../../sql-reference/statements/select/index.md) queries for aggregating data in corresponding order in [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables.\n\nPossible values:\n\n- 0 — `GROUP BY` optimization is disabled.\n- 1 — `GROUP BY` optimization is enabled.\n\n**See Also**\n\n- [GROUP BY optimization](/sql-reference/statements/select/group-by#group-by-optimization-depending-on-table-sorting-key) \N \N 0 Bool 0 0 Production | |
read_in_order_use_buffering 1 0 Use buffering before merging while reading in order of primary key. It increases the parallelism of query execution \N \N 0 Bool 1 0 Production | |
aggregation_in_order_max_block_bytes 50000000 0 Maximal size of a block in bytes accumulated during aggregation in order of the primary key. A lower block size allows the final merge stage of aggregation to be parallelized more. \N \N 0 UInt64 50000000 0 Production | |
read_in_order_two_level_merge_threshold 100 0 Minimal number of parts to read to run preliminary merge step during multithread reading in order of primary key. \N \N 0 UInt64 100 0 Production | |
low_cardinality_allow_in_native_format 1 0 Allows or restricts using the [LowCardinality](../../sql-reference/data-types/lowcardinality.md) data type with the [Native](../../interfaces/formats.md/#native) format.\n\nIf usage of `LowCardinality` is restricted, ClickHouse server converts `LowCardinality`-columns to ordinary ones for `SELECT` queries, and converts ordinary columns to `LowCardinality`-columns for `INSERT` queries.\n\nThis setting is required mainly for third-party clients which do not support the `LowCardinality` data type.\n\nPossible values:\n\n- 1 — Usage of `LowCardinality` is not restricted.\n- 0 — Usage of `LowCardinality` is restricted. \N \N 0 Bool 1 0 Production | |
cancel_http_readonly_queries_on_client_close 0 0 Cancels HTTP read-only queries (e.g. SELECT) when a client closes the connection without waiting for the response.\n\nCloud default value: `1`. \N \N 0 Bool 0 0 Production | |
external_table_functions_use_nulls 1 0 Defines how [mysql](../../sql-reference/table-functions/mysql.md), [postgresql](../../sql-reference/table-functions/postgresql.md) and [odbc](../../sql-reference/table-functions/odbc.md) table functions use Nullable columns.\n\nPossible values:\n\n- 0 — The table function explicitly uses Nullable columns.\n- 1 — The table function implicitly uses Nullable columns.\n\n**Usage**\n\nIf the setting is set to `0`, the table function does not make Nullable columns and inserts default values instead of NULL. This is also applicable for NULL values inside arrays. \N \N 0 Bool 1 0 Production | |
external_table_strict_query 0 0 If it is set to true, transforming the expression into a local filter is forbidden for queries to external tables. \N \N 0 Bool 0 0 Production | |
allow_hyperscan 1 0 Allow functions that use Hyperscan library. Disable to avoid potentially long compilation times and excessive resource usage. \N \N 0 Bool 1 0 Production | |
max_hyperscan_regexp_length 0 0 Defines the maximum length for each regular expression in the [hyperscan multi-match functions](/sql-reference/functions/string-search-functions#multimatchany).\n\nPossible values:\n\n- Positive integer.\n- 0 - The length is not limited.\n\n**Example**\n\nQuery:\n\n```sql\nSELECT multiMatchAny(\'abcd\', [\'ab\',\'bcd\',\'c\',\'d\']) SETTINGS max_hyperscan_regexp_length = 3;\n```\n\nResult:\n\n```text\n┌─multiMatchAny(\'abcd\', [\'ab\', \'bcd\', \'c\', \'d\'])─┐\n│ 1 │\n└────────────────────────────────────────────────┘\n```\n\nQuery:\n\n```sql\nSELECT multiMatchAny(\'abcd\', [\'ab\',\'bcd\',\'c\',\'d\']) SETTINGS max_hyperscan_regexp_length = 2;\n```\n\nResult:\n\n```text\nException: Regexp length too large.\n```\n\n**See Also**\n\n- [max_hyperscan_regexp_total_length](#max_hyperscan_regexp_total_length) \N \N 0 UInt64 0 0 Production | |
max_hyperscan_regexp_total_length 0 0 Sets the maximum length total of all regular expressions in each [hyperscan multi-match function](/sql-reference/functions/string-search-functions#multimatchany).\n\nPossible values:\n\n- Positive integer.\n- 0 - The length is not limited.\n\n**Example**\n\nQuery:\n\n```sql\nSELECT multiMatchAny(\'abcd\', [\'a\',\'b\',\'c\',\'d\']) SETTINGS max_hyperscan_regexp_total_length = 5;\n```\n\nResult:\n\n```text\n┌─multiMatchAny(\'abcd\', [\'a\', \'b\', \'c\', \'d\'])─┐\n│ 1 │\n└─────────────────────────────────────────────┘\n```\n\nQuery:\n\n```sql\nSELECT multiMatchAny(\'abcd\', [\'ab\',\'bc\',\'c\',\'d\']) SETTINGS max_hyperscan_regexp_total_length = 5;\n```\n\nResult:\n\n```text\nException: Total regexp lengths too large.\n```\n\n**See Also**\n\n- [max_hyperscan_regexp_length](#max_hyperscan_regexp_length) \N \N 0 UInt64 0 0 Production | |
reject_expensive_hyperscan_regexps 1 0 Reject patterns which will likely be expensive to evaluate with hyperscan (due to NFA state explosion) \N \N 0 Bool 1 0 Production | |
allow_simdjson 1 0 Allow using the simdjson library in \'JSON*\' functions if AVX2 instructions are available. If disabled, rapidjson will be used. \N \N 0 Bool 1 0 Production | |
allow_introspection_functions 0 0 Enables or disables [introspection functions](../../sql-reference/functions/introspection.md) for query profiling.\n\nPossible values:\n\n- 1 — Introspection functions enabled.\n- 0 — Introspection functions disabled.\n\n**See Also**\n\n- [Sampling Query Profiler](../../operations/optimizing-performance/sampling-query-profiler.md)\n- System table [trace_log](/operations/system-tables/trace_log) \N \N 0 Bool 0 0 Production | |
splitby_max_substrings_includes_remaining_string 0 0 Controls whether function [splitBy*()](../../sql-reference/functions/splitting-merging-functions.md) with argument `max_substrings` > 0 will include the remaining string in the last element of the result array.\n\nPossible values:\n\n- `0` - The remaining string will not be included in the last element of the result array.\n- `1` - The remaining string will be included in the last element of the result array. This is the behavior of Spark\'s [`split()`](https://spark.apache.org/docs/3.1.2/api/python/reference/api/pyspark.sql.functions.split.html) function and Python\'s [\'string.split()\'](https://docs.python.org/3/library/stdtypes.html#str.split) method. \N \N 0 Bool 0 0 Production | |
allow_execute_multiif_columnar 1 0 Allow executing the multiIf function in a columnar manner. \N \N 0 Bool 1 0 Production | |
formatdatetime_f_prints_single_zero 0 0 Formatter \'%f\' in function \'formatDateTime\' prints a single zero instead of six zeros if the formatted value has no fractional seconds. \N \N 0 Bool 0 0 Production | |
formatdatetime_f_prints_scale_number_of_digits 0 0 Formatter \'%f\' in function \'formatDateTime\' prints only the scale amount of digits for a DateTime64 instead of fixed 6 digits. \N \N 0 Bool 0 0 Production | |
formatdatetime_parsedatetime_m_is_month_name 1 0 Formatter \'%M\' in functions \'formatDateTime\' and \'parseDateTime\' prints/parses the month name instead of minutes. \N \N 0 Bool 1 0 Production | |
parsedatetime_parse_without_leading_zeros 1 0 Formatters \'%c\', \'%l\' and \'%k\' in function \'parseDateTime\' parse months and hours without leading zeros. \N \N 0 Bool 1 0 Production | |
formatdatetime_format_without_leading_zeros 0 0 Formatters \'%c\', \'%l\' and \'%k\' in function \'formatDateTime\' print months and hours without leading zeros. \N \N 0 Bool 0 0 Production | |
least_greatest_legacy_null_behavior 0 0 If enabled, functions \'least\' and \'greatest\' return NULL if one of their arguments is NULL. \N \N 0 Bool 0 0 Production | |
h3togeo_lon_lat_result_order 0 0 Function \'h3ToGeo\' returns (lon, lat) if true, otherwise (lat, lon). \N \N 0 Bool 0 0 Production | |
max_partitions_per_insert_block 100 0 Limits the maximum number of partitions in a single inserted block\nand an exception is thrown if the block contains too many partitions.\n\n- Positive integer.\n- `0` — Unlimited number of partitions.\n\n**Details**\n\nWhen inserting data, ClickHouse calculates the number of partitions in the\ninserted block. If the number of partitions is more than\n`max_partitions_per_insert_block`, ClickHouse either logs a warning or throws an\nexception based on `throw_on_max_partitions_per_insert_block`. Exceptions have\nthe following text:\n\n> "Too many partitions for a single INSERT block (`partitions_count` partitions, limit is " + toString(max_partitions) + ").\n The limit is controlled by the \'max_partitions_per_insert_block\' setting.\n A large number of partitions is a common misconception. It will lead to severe\n negative performance impact, including slow server startup, slow INSERT queries\n and slow SELECT queries. Recommended total number of partitions for a table is\n under 1000..10000. Please note, that partitioning is not intended to speed up\n SELECT queries (ORDER BY key is sufficient to make range queries fast).\n Partitions are intended for data manipulation (DROP PARTITION, etc)."\n\n:::note\nThis setting is a safety threshold because using a large number of partitions is a common misconception.\n::: \N \N 0 UInt64 100 0 Production | |
throw_on_max_partitions_per_insert_block 1 0 Allows you to control the behaviour when `max_partitions_per_insert_block` is reached.\n\nPossible values:\n- `true` - When an insert block reaches `max_partitions_per_insert_block`, an exception is raised.\n- `false` - Logs a warning when `max_partitions_per_insert_block` is reached.\n\n:::tip\nThis can be useful if you\'re trying to understand the impact on users when changing [`max_partitions_per_insert_block`](/operations/settings/settings#max_partitions_per_insert_block).\n::: \N \N 0 Bool 1 0 Production | |
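A short sketch of how the two settings above interact; the table and values are illustrative only:

```sql
CREATE TABLE daily (d Date, x UInt32)
ENGINE = MergeTree
PARTITION BY d
ORDER BY x;

-- 365 daily partitions in one block exceeds the default limit of 100:
-- with throw_on_max_partitions_per_insert_block = 1 (default) this INSERT throws,
-- with 0 it only logs a warning
INSERT INTO daily
SELECT toDate('2024-01-01') + number, number
FROM numbers(365);
```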
max_partitions_to_read -1 0 Limits the maximum number of partitions that can be accessed in a single query.\n\nThe setting value specified when the table is created can be overridden via query-level setting.\n\nPossible values:\n\n- Positive integer\n- `-1` - unlimited (default)\n\n:::note\nYou can also specify the MergeTree setting [`max_partitions_to_read`](/operations/settings/settings#max_partitions_to_read) in tables\' setting.\n::: \N \N 0 Int64 -1 0 Production | |
check_query_single_value_result 1 0 Defines the level of detail for the [CHECK TABLE](/sql-reference/statements/check-table) query result for `MergeTree` family engines.\n\nPossible values:\n\n- 0 — the query shows a check status for every individual data part of a table.\n- 1 — the query shows the general table check status. \N \N 0 Bool 1 0 Production | |
allow_drop_detached 0 0 Allow ALTER TABLE ... DROP DETACHED PART[ITION] ... queries \N \N 0 Bool 0 0 Production | |
max_parts_to_move 1000 0 Limit the number of parts that can be moved in one query. Zero means unlimited. \N \N 0 UInt64 1000 0 Production | |
max_table_size_to_drop 50000000000 0 Restriction on deleting tables at query time. The value 0 means that you can delete all tables without any restrictions.\n\nCloud default value: 1 TB.\n\n:::note\nThis query setting overwrites its server setting equivalent, see [max_table_size_to_drop](/operations/server-configuration-parameters/settings#max_table_size_to_drop)\n::: \N \N 0 UInt64 50000000000 0 Production | |
max_partition_size_to_drop 50000000000 0 Restriction on dropping partitions at query time. The value 0 means that you can drop partitions without any restrictions.\n\nCloud default value: 1 TB.\n\n:::note\nThis query setting overwrites its server setting equivalent, see [max_partition_size_to_drop](/operations/server-configuration-parameters/settings#max_partition_size_to_drop)\n::: \N \N 0 UInt64 50000000000 0 Production | |
postgresql_connection_pool_size 16 0 Connection pool size for PostgreSQL table engine and database engine. \N \N 0 UInt64 16 0 Production | |
postgresql_connection_attempt_timeout 2 0 Connection timeout in seconds of a single attempt to connect to a PostgreSQL endpoint.\nThe value is passed as a `connect_timeout` parameter of the connection URL. \N \N 0 UInt64 2 0 Production | |
postgresql_connection_pool_wait_timeout 5000 0 Connection pool push/pop timeout on an empty pool for PostgreSQL table engine and database engine. By default, it will block on an empty pool. \N \N 0 UInt64 5000 0 Production | |
postgresql_connection_pool_retries 2 0 Connection pool push/pop retries number for PostgreSQL table engine and database engine. \N \N 0 UInt64 2 0 Production | |
postgresql_connection_pool_auto_close_connection 0 0 Close connection before returning connection to the pool. \N \N 0 Bool 0 0 Production | |
postgresql_fault_injection_probability 0 0 Approximate probability of failing internal (for replication) PostgreSQL queries. Valid values are in the interval [0.0f, 1.0f] \N \N 0 Float 0 0 Production | |
glob_expansion_max_elements 1000 0 Maximum number of allowed addresses (For external storages, table functions, etc). \N \N 0 UInt64 1000 0 Production | |
odbc_bridge_connection_pool_size 16 0 Connection pool size for each connection settings string in ODBC bridge. \N \N 0 UInt64 16 0 Production | |
odbc_bridge_use_connection_pooling 1 0 Use connection pooling in ODBC bridge. If set to false, a new connection is created every time. \N \N 0 Bool 1 0 Production | |
distributed_replica_error_half_life 60 0 - Type: seconds\n- Default value: 60 seconds\n\nControls how fast errors in distributed tables are zeroed. If a replica is unavailable for some time, accumulates 5 errors, and distributed_replica_error_half_life is set to 1 second, then the replica is considered normal 3 seconds after the last error.\n\nSee also:\n\n- [load_balancing](#load_balancing-round_robin)\n- [Table engine Distributed](../../engines/table-engines/special/distributed.md)\n- [distributed_replica_error_cap](#distributed_replica_error_cap)\n- [distributed_replica_max_ignored_errors](#distributed_replica_max_ignored_errors) \N \N 0 Seconds 60 0 Production | |
distributed_replica_error_cap 1000 0 - Type: unsigned int\n- Default value: 1000\n\nThe error count of each replica is capped at this value, preventing a single replica from accumulating too many errors.\n\nSee also:\n\n- [load_balancing](#load_balancing-round_robin)\n- [Table engine Distributed](../../engines/table-engines/special/distributed.md)\n- [distributed_replica_error_half_life](#distributed_replica_error_half_life)\n- [distributed_replica_max_ignored_errors](#distributed_replica_max_ignored_errors) \N \N 0 UInt64 1000 0 Production | |
distributed_replica_max_ignored_errors 0 0 - Type: unsigned int\n- Default value: 0\n\nThe number of errors that will be ignored while choosing replicas (according to `load_balancing` algorithm).\n\nSee also:\n\n- [load_balancing](#load_balancing-round_robin)\n- [Table engine Distributed](../../engines/table-engines/special/distributed.md)\n- [distributed_replica_error_cap](#distributed_replica_error_cap)\n- [distributed_replica_error_half_life](#distributed_replica_error_half_life) \N \N 0 UInt64 0 0 Production | |
min_free_disk_space_for_temporary_data 0 0 The minimum disk space to keep while writing temporary data used in external sorting and aggregation. \N \N 0 UInt64 0 0 Production | |
default_temporary_table_engine Memory 0 Same as [default_table_engine](#default_table_engine) but for temporary tables.\n\nIn this example, any new temporary table that does not specify an `Engine` will use the `Log` table engine:\n\nQuery:\n\n```sql\nSET default_temporary_table_engine = \'Log\';\n\nCREATE TEMPORARY TABLE my_table (\n x UInt32,\n y UInt32\n);\n\nSHOW CREATE TEMPORARY TABLE my_table;\n```\n\nResult:\n\n```response\n┌─statement────────────────────────────────────────────────────────────────┐\n│ CREATE TEMPORARY TABLE default.my_table\n(\n `x` UInt32,\n `y` UInt32\n)\nENGINE = Log\n└──────────────────────────────────────────────────────────────────────────┘\n``` \N \N 0 DefaultTableEngine Memory 0 Production | |
default_table_engine MergeTree 0 Default table engine to use when `ENGINE` is not set in a `CREATE` statement.\n\nPossible values:\n\n- a string representing any valid table engine name\n\nCloud default value: `SharedMergeTree`.\n\n**Example**\n\nQuery:\n\n```sql\nSET default_table_engine = \'Log\';\n\nSELECT name, value, changed FROM system.settings WHERE name = \'default_table_engine\';\n```\n\nResult:\n\n```response\n┌─name─────────────────┬─value─┬─changed─┐\n│ default_table_engine │ Log │ 1 │\n└──────────────────────┴───────┴─────────┘\n```\n\nIn this example, any new table that does not specify an `Engine` will use the `Log` table engine:\n\nQuery:\n\n```sql\nCREATE TABLE my_table (\n x UInt32,\n y UInt32\n);\n\nSHOW CREATE TABLE my_table;\n```\n\nResult:\n\n```response\n┌─statement────────────────────────────────────────────────────────────────┐\n│ CREATE TABLE default.my_table\n(\n `x` UInt32,\n `y` UInt32\n)\nENGINE = Log\n└──────────────────────────────────────────────────────────────────────────┘\n``` \N \N 0 DefaultTableEngine MergeTree 0 Production | |
show_table_uuid_in_table_create_query_if_not_nil 0 0 Sets the `SHOW TABLE` query display.\n\nPossible values:\n\n- 0 — The query will be displayed without table UUID.\n- 1 — The query will be displayed with table UUID. \N \N 0 Bool 0 0 Production | |
database_atomic_wait_for_drop_and_detach_synchronously 0 0 Adds a modifier `SYNC` to all `DROP` and `DETACH` queries.\n\nPossible values:\n\n- 0 — Queries will be executed with delay.\n- 1 — Queries will be executed without delay. \N \N 0 Bool 0 0 Production | |
enable_scalar_subquery_optimization 1 0 If it is set to true, prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once. \N \N 0 Bool 1 0 Production | |
optimize_trivial_count_query 1 0 Enables or disables the optimization to trivial query `SELECT count() FROM table` using metadata from MergeTree. If you need to use row-level security, disable this setting.\n\nPossible values:\n\n - 0 — Optimization disabled.\n - 1 — Optimization enabled.\n\nSee also:\n\n- [optimize_functions_to_subcolumns](#optimize_functions_to_subcolumns) \N \N 0 Bool 1 0 Production | |
optimize_trivial_approximate_count_query 0 0 Use an approximate value for trivial count optimization of storages that support such estimation, for example, EmbeddedRocksDB.\n\nPossible values:\n\n - 0 — Optimization disabled.\n - 1 — Optimization enabled. \N \N 0 Bool 0 0 Production | |
optimize_count_from_files 1 0 Enables or disables the optimization of counting number of rows from files in different input formats. It applies to table functions/engines `file`/`s3`/`url`/`hdfs`/`azureBlobStorage`.\n\nPossible values:\n\n- 0 — Optimization disabled.\n- 1 — Optimization enabled. \N \N 0 Bool 1 0 Production | |
use_cache_for_count_from_files 1 0 Enables caching of rows number during count from files in table functions `file`/`s3`/`url`/`hdfs`/`azureBlobStorage`.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
optimize_respect_aliases 1 0 If it is set to true, it will respect aliases in WHERE/GROUP BY/ORDER BY, which helps with partition pruning/secondary indexes/optimize_aggregation_in_order/optimize_read_in_order/optimize_trivial_count \N \N 0 Bool 1 0 Production | |
mutations_sync 0 0 Allows to execute `ALTER TABLE ... UPDATE|DELETE|MATERIALIZE INDEX|MATERIALIZE PROJECTION|MATERIALIZE COLUMN|MATERIALIZE STATISTICS` queries ([mutations](../../sql-reference/statements/alter/index.md/#mutations)) synchronously.\n\nPossible values:\n\n- 0 - Mutations execute asynchronously.\n- 1 - The query waits for all mutations to complete on the current server.\n- 2 - The query waits for all mutations to complete on all replicas (if they exist). \N \N 0 UInt64 0 0 Production | |
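For example, a mutation can be made to block until it has been applied everywhere (the table name is hypothetical):

```sql
-- waits until the DELETE mutation has finished on all replicas before returning
ALTER TABLE hits DELETE WHERE event_date < '2023-01-01'
SETTINGS mutations_sync = 2;
```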
enable_lightweight_delete 1 0 Enable lightweight DELETE mutations for mergetree tables. \N \N 0 Bool 1 0 Production | |
allow_experimental_lightweight_delete 1 0 Enable lightweight DELETE mutations for mergetree tables. \N \N 0 Bool 1 enable_lightweight_delete 0 Production | |
lightweight_deletes_sync 2 0 The same as [`mutations_sync`](#mutations_sync), but controls only execution of lightweight deletes.\n\nPossible values:\n\n- 0 - Mutations execute asynchronously.\n- 1 - The query waits for the lightweight deletes to complete on the current server.\n- 2 - The query waits for the lightweight deletes to complete on all replicas (if they exist).\n\n**See Also**\n\n- [Synchronicity of ALTER Queries](../../sql-reference/statements/alter/index.md/#synchronicity-of-alter-queries)\n- [Mutations](../../sql-reference/statements/alter/index.md/#mutations) \N \N 0 UInt64 2 0 Production | |
apply_deleted_mask 1 0 Enables filtering out rows deleted with lightweight DELETE. If disabled, a query will be able to read those rows. This is useful for debugging and \\"undelete\\" scenarios \N \N 0 Bool 1 0 Production | |
optimize_normalize_count_variants 1 0 Rewrite aggregate functions that are semantically equal to count() as count(). \N \N 0 Bool 1 0 Production | |
optimize_injective_functions_inside_uniq 1 0 Delete injective functions of one argument inside uniq*() functions. \N \N 0 Bool 1 0 Production | |
rewrite_count_distinct_if_with_count_distinct_implementation 0 0 Allows you to rewrite `countDistinctIf` with the [count_distinct_implementation](#count_distinct_implementation) setting.\n\nPossible values:\n\n- true — Allow.\n- false — Disallow. \N \N 0 Bool 0 0 Production | |
convert_query_to_cnf 0 0 When set to `true`, a `SELECT` query will be converted to conjunctive normal form (CNF). There are scenarios where rewriting a query in CNF may execute faster (view this [Github issue](https://github.com/ClickHouse/ClickHouse/issues/11749) for an explanation).\n\nFor example, notice how the following `SELECT` query is not modified (the default behavior):\n\n```sql\nEXPLAIN SYNTAX\nSELECT *\nFROM\n(\n SELECT number AS x\n FROM numbers(20)\n) AS a\nWHERE ((x >= 1) AND (x <= 5)) OR ((x >= 10) AND (x <= 15))\nSETTINGS convert_query_to_cnf = false;\n```\n\nThe result is:\n\n```response\n┌─explain────────────────────────────────────────────────────────┐\n│ SELECT x │\n│ FROM │\n│ ( │\n│ SELECT number AS x │\n│ FROM numbers(20) │\n│ WHERE ((x >= 1) AND (x <= 5)) OR ((x >= 10) AND (x <= 15)) │\n│ ) AS a │\n│ WHERE ((x >= 1) AND (x <= 5)) OR ((x >= 10) AND (x <= 15)) │\n│ SETTINGS convert_query_to_cnf = 0 │\n└────────────────────────────────────────────────────────────────┘\n```\n\nLet\'s set `convert_query_to_cnf` to `true` and see what changes:\n\n```sql\nEXPLAIN SYNTAX\nSELECT *\nFROM\n(\n SELECT number AS x\n FROM numbers(20)\n) AS a\nWHERE ((x >= 1) AND (x <= 5)) OR ((x >= 10) AND (x <= 15))\nSETTINGS convert_query_to_cnf = true;\n```\n\nNotice the `WHERE` clause is rewritten in CNF, but the result set is identical - the Boolean logic is unchanged:\n\n```response\n┌─explain───────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ SELECT x │\n│ FROM │\n│ ( │\n│ SELECT number AS x │\n│ FROM numbers(20) │\n│ WHERE ((x <= 15) OR (x <= 5)) AND ((x <= 15) OR (x >= 1)) AND ((x >= 10) OR (x <= 5)) AND ((x >= 10) OR (x >= 1)) │\n│ ) AS a │\n│ WHERE ((x >= 10) OR (x >= 1)) AND ((x >= 10) OR (x <= 5)) AND ((x <= 15) OR (x >= 1)) AND ((x <= 15) OR (x <= 5)) │\n│ SETTINGS convert_query_to_cnf = 1 │\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n```\n\nPossible values: true, false \N \N 0 Bool 0 0 Production | |
optimize_or_like_chain 0 0 Optimize multiple OR LIKE into multiMatchAny. This optimization should not be enabled by default, because it defies index analysis in some cases. \N \N 0 Bool 0 0 Production | |
optimize_arithmetic_operations_in_aggregate_functions 1 0 Move arithmetic operations out of aggregation functions \N \N 0 Bool 1 0 Production | |
optimize_redundant_functions_in_order_by 1 0 Remove functions from ORDER BY if their argument is also in ORDER BY \N \N 0 Bool 1 0 Production | |
optimize_if_chain_to_multiif 0 0 Replace if(cond1, then1, if(cond2, ...)) chains with multiIf. Currently it\'s not beneficial for numeric types. \N \N 0 Bool 0 0 Production | |
optimize_multiif_to_if 1 0 Replace \'multiIf\' that has only one condition with \'if\'. \N \N 0 Bool 1 0 Production | |
optimize_if_transform_strings_to_enum 0 0 Replaces string-type arguments in If and Transform with enums. Disabled by default because it could make an inconsistent change in a distributed query that would lead to its failure. \N \N 0 Bool 0 0 Production | |
optimize_functions_to_subcolumns 1 0 Enables or disables optimization by transforming some functions to reading subcolumns. This reduces the amount of data to read.\n\nThese functions can be transformed:\n\n- [length](/sql-reference/functions/array-functions#length) to read the [size0](../../sql-reference/data-types/array.md/#array-size) subcolumn.\n- [empty](/sql-reference/functions/array-functions#empty) to read the [size0](../../sql-reference/data-types/array.md/#array-size) subcolumn.\n- [notEmpty](/sql-reference/functions/array-functions#notempty) to read the [size0](../../sql-reference/data-types/array.md/#array-size) subcolumn.\n- [isNull](/sql-reference/functions/functions-for-nulls#isnull) to read the [null](../../sql-reference/data-types/nullable.md/#finding-null) subcolumn.\n- [isNotNull](/sql-reference/functions/functions-for-nulls#isnotnull) to read the [null](../../sql-reference/data-types/nullable.md/#finding-null) subcolumn.\n- [count](/sql-reference/aggregate-functions/reference/count) to read the [null](../../sql-reference/data-types/nullable.md/#finding-null) subcolumn.\n- [mapKeys](/sql-reference/functions/tuple-map-functions#mapkeys) to read the [keys](/sql-reference/data-types/map#reading-subcolumns-of-map) subcolumn.\n- [mapValues](/sql-reference/functions/tuple-map-functions#mapvalues) to read the [values](/sql-reference/data-types/map#reading-subcolumns-of-map) subcolumn.\n\nPossible values:\n\n- 0 — Optimization disabled.\n- 1 — Optimization enabled. \N \N 0 Bool 1 0 Production | |
optimize_using_constraints 0 0 Use [constraints](../../sql-reference/statements/create/table.md/#constraints) for query optimization. The default is `false`.\n\nPossible values:\n\n- true, false \N \N 0 Bool 0 0 Production | |
optimize_substitute_columns 0 0 Use [constraints](../../sql-reference/statements/create/table.md/#constraints) for column substitution. The default is `false`.\n\nPossible values:\n\n- true, false \N \N 0 Bool 0 0 Production | |
optimize_append_index 0 0 Use [constraints](../../sql-reference/statements/create/table.md/#constraints) in order to append index condition. The default is `false`.\n\nPossible values:\n\n- true, false \N \N 0 Bool 0 0 Production | |
optimize_time_filter_with_preimage 1 0 Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. `toYear(col) = 2023 -> col >= \'2023-01-01\' AND col <= \'2023-12-31\'`) \N \N 0 Bool 1 0 Production | |
normalize_function_names 1 0 Normalize function names to their canonical names \N \N 0 Bool 1 0 Production | |
enable_early_constant_folding 1 0 Enable query optimization where we analyze function and subquery results and rewrite the query if constants are present \N \N 0 Bool 1 0 Production | |
deduplicate_blocks_in_dependent_materialized_views 0 0 Enables or disables the deduplication check for materialized views that receive data from Replicated\\* tables.\n\nPossible values:\n\n 0 — Disabled.\n 1 — Enabled.\n\nUsage\n\nBy default, deduplication is not performed for materialized views but is done upstream, in the source table.\nIf an INSERTed block is skipped due to deduplication in the source table, there will be no insertion into attached materialized views. This behaviour exists to enable the insertion of highly aggregated data into materialized views, for cases where inserted blocks are the same after materialized view aggregation but derived from different INSERTs into the source table.\nAt the same time, this behaviour "breaks" `INSERT` idempotency. If an `INSERT` into the main table was successful and `INSERT` into a materialized view failed (e.g. because of communication failure with ClickHouse Keeper) a client will get an error and can retry the operation. However, the materialized view won\'t receive the second insert because it will be discarded by deduplication in the main (source) table. The setting `deduplicate_blocks_in_dependent_materialized_views` allows for changing this behaviour. On retry, a materialized view will receive the repeat insert and will perform a deduplication check by itself,\nignoring check result for the source table, and will insert rows lost because of the first failure. \N \N 0 Bool 0 0 Production | |
throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert 1 0 Throw exception on INSERT query when the setting `deduplicate_blocks_in_dependent_materialized_views` is enabled along with `async_insert`. It guarantees correctness, because these features can\'t work together. \N \N 0 Bool 1 0 Production | |
materialized_views_ignore_errors 0 0 Allows ignoring errors for MATERIALIZED VIEWs and delivering the original block to the table regardless of MVs \N \N 0 Bool 0 0 Production | |
ignore_materialized_views_with_dropped_target_table 0 0 Ignore MVs with dropped target table during pushing to views \N \N 0 Bool 0 0 Production | |
allow_materialized_view_with_bad_select 0 0 Allow CREATE MATERIALIZED VIEW with SELECT query that references nonexistent tables or columns. It must still be syntactically valid. Doesn\'t apply to refreshable MVs. Doesn\'t apply if the MV schema needs to be inferred from the SELECT query (i.e. if the CREATE has no column list and no TO table). Can be used for creating MV before its source table. \N \N 0 Bool 0 0 Production | |
use_compact_format_in_distributed_parts_names 1 0 Uses compact format for storing blocks for background (`distributed_foreground_insert`) INSERT into tables with `Distributed` engine.\n\nPossible values:\n\n- 0 — Uses `user[:password]@host:port#default_database` directory format.\n- 1 — Uses `[shard{shard_index}[_replica{replica_index}]]` directory format.\n\n:::note\n- with `use_compact_format_in_distributed_parts_names=0` changes from cluster definition will not be applied for background INSERT.\n- with `use_compact_format_in_distributed_parts_names=1` changing the order of the nodes in the cluster definition, will change the `shard_index`/`replica_index` so be aware.\n::: \N \N 0 Bool 1 0 Production | |
validate_polygons 1 0 Enables or disables throwing an exception in the [pointInPolygon](/sql-reference/functions/geo/coordinates#pointinpolygon) function, if the polygon is self-intersecting or self-tangent.\n\nPossible values:\n\n- 0 — Throwing an exception is disabled. `pointInPolygon` accepts invalid polygons and returns possibly incorrect results for them.\n- 1 — Throwing an exception is enabled. \N \N 0 Bool 1 0 Production | |
max_parser_depth 1000 0 Limits maximum recursion depth in the recursive descent parser. Allows controlling the stack size.\n\nPossible values:\n\n- Positive integer.\n- 0 — Recursion depth is unlimited. \N \N 0 UInt64 1000 0 Production | |
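A rough sketch of the effect of `max_parser_depth`; the exact nesting level at which the parser gives up depends on how many recursion levels each construct consumes, so the values below are only indicative:

```sql
SET max_parser_depth = 10;

-- deeply nested expressions are now expected to fail at parse time
-- with a "maximum parse depth exceeded" style error
SELECT ((((((((((1))))))))));

SET max_parser_depth = 1000;  -- restore the default
```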
max_parser_backtracks 1000000 0 Maximum parser backtracking (how many times it tries different alternatives in the recursive descent parsing process). \N \N 0 UInt64 1000000 0 Production | |
max_recursive_cte_evaluation_depth 1000 0 Maximum limit on recursive CTE evaluation depth \N \N 0 UInt64 1000 0 Production | |
allow_settings_after_format_in_insert 0 0 Control whether `SETTINGS` after `FORMAT` in `INSERT` queries is allowed or not. It is not recommended to use this, since this may interpret part of `SETTINGS` as values.\n\nExample:\n\n```sql\nINSERT INTO FUNCTION null(\'foo String\') SETTINGS max_threads=1 VALUES (\'bar\');\n```\n\nBut the following query will work only with `allow_settings_after_format_in_insert`:\n\n```sql\nSET allow_settings_after_format_in_insert=1;\nINSERT INTO FUNCTION null(\'foo String\') VALUES (\'bar\') SETTINGS max_threads=1;\n```\n\nPossible values:\n\n- 0 — Disallow.\n- 1 — Allow.\n\n:::note\nUse this setting only for backward compatibility if your use cases depend on old syntax.\n::: \N \N 0 Bool 0 0 Production | |
periodic_live_view_refresh 60 0 Interval after which periodically refreshed live view is forced to refresh. \N \N 0 Seconds 60 0 Production | |
transform_null_in 0 0 Enables equality of [NULL](/sql-reference/syntax#null) values for [IN](../../sql-reference/operators/in.md) operator.\n\nBy default, `NULL` values can\'t be compared because `NULL` means undefined value. Thus, comparison `expr = NULL` must always return `false`. With this setting `NULL = NULL` returns `true` for `IN` operator.\n\nPossible values:\n\n- 0 — Comparison of `NULL` values in `IN` operator returns `false`.\n- 1 — Comparison of `NULL` values in `IN` operator returns `true`.\n\n**Example**\n\nConsider the `null_in` table:\n\n```text\n┌──idx─┬─────i─┐\n│ 1 │ 1 │\n│ 2 │ NULL │\n│ 3 │ 3 │\n└──────┴───────┘\n```\n\nQuery:\n\n```sql\nSELECT idx, i FROM null_in WHERE i IN (1, NULL) SETTINGS transform_null_in = 0;\n```\n\nResult:\n\n```text\n┌──idx─┬────i─┐\n│ 1 │ 1 │\n└──────┴──────┘\n```\n\nQuery:\n\n```sql\nSELECT idx, i FROM null_in WHERE i IN (1, NULL) SETTINGS transform_null_in = 1;\n```\n\nResult:\n\n```text\n┌──idx─┬─────i─┐\n│ 1 │ 1 │\n│ 2 │ NULL │\n└──────┴───────┘\n```\n\n**See Also**\n\n- [NULL Processing in IN Operators](/sql-reference/operators/in#null-processing) \N \N 0 Bool 0 0 Production | |
allow_nondeterministic_mutations 0 0 User-level setting that allows mutations on replicated tables to make use of non-deterministic functions such as `dictGet`.\n\nGiven that, for example, dictionaries can be out of sync across nodes, mutations that pull values from them are disallowed on replicated tables by default. Enabling this setting allows this behavior, making it the user\'s responsibility to ensure that the data used is in sync across all nodes.\n\n**Example**\n\n```xml\n<profiles>\n <default>\n <allow_nondeterministic_mutations>1</allow_nondeterministic_mutations>\n\n <!-- ... -->\n </default>\n\n <!-- ... -->\n\n</profiles>\n``` \N \N 0 Bool 0 0 Production | |
validate_mutation_query 1 0 Validate mutation queries before accepting them. Mutations are executed in the background, and running an invalid query will cause mutations to get stuck, requiring manual intervention.\n\nOnly change this setting if you encounter a backward-incompatible bug. \N \N 0 Bool 1 0 Production | |
lock_acquire_timeout 120 0 Defines how many seconds a locking request waits before failing.\n\nLocking timeout is used to protect from deadlocks while executing read/write operations with tables. When the timeout expires and the locking request fails, the ClickHouse server throws an exception "Locking attempt timed out! Possible deadlock avoided. Client should retry." with error code `DEADLOCK_AVOIDED`.\n\nPossible values:\n\n- Positive integer (in seconds).\n- 0 — No locking timeout. \N \N 0 Seconds 120 0 Production | |
materialize_ttl_after_modify 1 0 Apply TTL for old data after an ALTER MODIFY TTL query \N \N 0 Bool 1 0 Production | |
function_implementation 0 Choose function implementation for specific target or variant (experimental). If empty, enable all of them. \N \N 0 String 0 Production | |
data_type_default_nullable 0 0 Allows data types without explicit modifiers [NULL or NOT NULL](/sql-reference/statements/create/table#null-or-not-null-modifiers) in a column definition to be [Nullable](/sql-reference/data-types/nullable).\n\nPossible values:\n\n- 1 — The data types in column definitions are set to `Nullable` by default.\n- 0 — The data types in column definitions are set to not `Nullable` by default. \N \N 0 Bool 0 0 Production | |
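A minimal sketch of the behavior described above (the table name is made up):

```sql
SET data_type_default_nullable = 1;

CREATE TABLE t_defaults (x Int32, s String) ENGINE = Memory;

-- expected to report Nullable(Int32) and Nullable(String) for the two columns
SHOW CREATE TABLE t_defaults;
```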
cast_keep_nullable 0 0 Enables or disables keeping of the `Nullable` data type in [CAST](/sql-reference/functions/type-conversion-functions#cast) operations.\n\nWhen the setting is enabled and the argument of `CAST` function is `Nullable`, the result is also transformed to `Nullable` type. When the setting is disabled, the result always has the destination type exactly.\n\nPossible values:\n\n- 0 — The `CAST` result has exactly the destination type specified.\n- 1 — If the argument type is `Nullable`, the `CAST` result is transformed to `Nullable(DestinationDataType)`.\n\n**Examples**\n\nThe following query results in the destination data type exactly:\n\n```sql\nSET cast_keep_nullable = 0;\nSELECT CAST(toNullable(toInt32(0)) AS Int32) as x, toTypeName(x);\n```\n\nResult:\n\n```text\n┌─x─┬─toTypeName(CAST(toNullable(toInt32(0)), \'Int32\'))─┐\n│ 0 │ Int32 │\n└───┴───────────────────────────────────────────────────┘\n```\n\nThe following query results in the `Nullable` modification on the destination data type:\n\n```sql\nSET cast_keep_nullable = 1;\nSELECT CAST(toNullable(toInt32(0)) AS Int32) as x, toTypeName(x);\n```\n\nResult:\n\n```text\n┌─x─┬─toTypeName(CAST(toNullable(toInt32(0)), \'Int32\'))─┐\n│ 0 │ Nullable(Int32) │\n└───┴───────────────────────────────────────────────────┘\n```\n\n**See Also**\n\n- [CAST](/sql-reference/functions/type-conversion-functions#cast) function \N \N 0 Bool 0 0 Production | |
cast_ipv4_ipv6_default_on_conversion_error 0 0 The CAST operator into IPv4, the CAST operator into IPv6 type, and the toIPv4 and toIPv6 functions will return a default value instead of throwing an exception on conversion error. \N \N 0 Bool 0 0 Production | |
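A small sketch of the described behavior; the exact default value returned is assumed to be the all-zero address:

```sql
SET cast_ipv4_ipv6_default_on_conversion_error = 1;

-- expected to return 0.0.0.0 instead of throwing a conversion error
SELECT toIPv4('not-an-ip-address');
```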
alter_partition_verbose_result 0 0 Enables or disables the display of information about the parts to which the manipulation operations with partitions and parts have been successfully applied.\nApplicable to [ATTACH PARTITION|PART](/sql-reference/statements/alter/partition#attach-partitionpart) and to [FREEZE PARTITION](/sql-reference/statements/alter/partition#freeze-partition).\n\nPossible values:\n\n- 0 — disable verbosity.\n- 1 — enable verbosity.\n\n**Example**\n\n```sql\nCREATE TABLE test(a Int64, d Date, s String) ENGINE = MergeTree PARTITION BY toYYYYMM(d) ORDER BY a;\nINSERT INTO test VALUES(1, \'2021-01-01\', \'\');\nINSERT INTO test VALUES(1, \'2021-01-01\', \'\');\nALTER TABLE test DETACH PARTITION ID \'202101\';\n\nALTER TABLE test ATTACH PARTITION ID \'202101\' SETTINGS alter_partition_verbose_result = 1;\n\n┌─command_type─────┬─partition_id─┬─part_name────┬─old_part_name─┐\n│ ATTACH PARTITION │ 202101 │ 202101_7_7_0 │ 202101_5_5_0 │\n│ ATTACH PARTITION │ 202101 │ 202101_8_8_0 │ 202101_6_6_0 │\n└──────────────────┴──────────────┴──────────────┴───────────────┘\n\nALTER TABLE test FREEZE SETTINGS alter_partition_verbose_result = 1;\n\n┌─command_type─┬─partition_id─┬─part_name────┬─backup_name─┬─backup_path───────────────────┬─part_backup_path────────────────────────────────────────────┐\n│ FREEZE ALL │ 202101 │ 202101_7_7_0 │ 8 │ /var/lib/clickhouse/shadow/8/ │ /var/lib/clickhouse/shadow/8/data/default/test/202101_7_7_0 │\n│ FREEZE ALL │ 202101 │ 202101_8_8_0 │ 8 │ /var/lib/clickhouse/shadow/8/ │ /var/lib/clickhouse/shadow/8/data/default/test/202101_8_8_0 │\n└──────────────┴──────────────┴──────────────┴─────────────┴───────────────────────────────┴─────────────────────────────────────────────────────────────┘\n``` \N \N 0 Bool 0 0 Production | |
system_events_show_zero_values 0 0 Allows to select zero-valued events from [`system.events`](../../operations/system-tables/events.md).\n\nSome monitoring systems require passing all the metrics values to them for each checkpoint, even if the metric value is zero.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\n**Examples**\n\nQuery\n\n```sql\nSELECT * FROM system.events WHERE event=\'QueryMemoryLimitExceeded\';\n```\n\nResult\n\n```text\nOk.\n```\n\nQuery\n```sql\nSET system_events_show_zero_values = 1;\nSELECT * FROM system.events WHERE event=\'QueryMemoryLimitExceeded\';\n```\n\nResult\n\n```text\n┌─event────────────────────┬─value─┬─description───────────────────────────────────────────┐\n│ QueryMemoryLimitExceeded │ 0 │ Number of times when memory limit exceeded for query. │\n└──────────────────────────┴───────┴───────────────────────────────────────────────────────┘\n``` \N \N 0 Bool 0 0 Production | |
mysql_datatypes_support_level 0 Defines how MySQL types are converted to corresponding ClickHouse types. A comma separated list in any combination of `decimal`, `datetime64`, `date2Date32` or `date2String`.\n- `decimal`: convert `NUMERIC` and `DECIMAL` types to `Decimal` when precision allows it.\n- `datetime64`: convert `DATETIME` and `TIMESTAMP` types to `DateTime64` instead of `DateTime` when precision is not `0`.\n- `date2Date32`: convert `DATE` to `Date32` instead of `Date`. Takes precedence over `date2String`.\n- `date2String`: convert `DATE` to `String` instead of `Date`. Overridden by `datetime64`. \N \N 0 MySQLDataTypesSupport 0 Production | |
optimize_trivial_insert_select 0 0 Optimize trivial \'INSERT INTO table SELECT ... FROM TABLES\' query \N \N 0 Bool 0 0 Production | |
allow_non_metadata_alters 1 0 Allow executing ALTERs which affect not only table metadata, but also data on disk \N \N 0 Bool 1 0 Production | |
enable_global_with_statement 1 0 Propagate WITH statements to UNION queries and all subqueries \N \N 0 Bool 1 0 Production | |
aggregate_functions_null_for_empty 0 0 Enables or disables rewriting all aggregate functions in a query, adding [-OrNull](/sql-reference/aggregate-functions/combinators#-ornull) suffix to them. Enable it for SQL standard compatibility.\nIt is implemented via query rewrite (similar to [count_distinct_implementation](#count_distinct_implementation) setting) to get consistent results for distributed queries.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\n**Example**\n\nConsider the following query with aggregate functions:\n```sql\nSELECT SUM(-1), MAX(0) FROM system.one WHERE 0;\n```\n\nWith `aggregate_functions_null_for_empty = 0` it would produce:\n```text\n┌─SUM(-1)─┬─MAX(0)─┐\n│ 0 │ 0 │\n└─────────┴────────┘\n```\n\nWith `aggregate_functions_null_for_empty = 1` the result would be:\n```text\n┌─SUMOrNull(-1)─┬─MAXOrNull(0)─┐\n│ NULL │ NULL │\n└───────────────┴──────────────┘\n``` \N \N 0 Bool 0 0 Production | |
optimize_syntax_fuse_functions 0 0 Enables fusing aggregate functions with an identical argument. It rewrites queries that contain at least two aggregate functions from [sum](/sql-reference/aggregate-functions/reference/sum), [count](/sql-reference/aggregate-functions/reference/count) or [avg](/sql-reference/aggregate-functions/reference/avg) with an identical argument into [sumCount](/sql-reference/aggregate-functions/reference/sumcount).\n\nPossible values:\n\n- 0 — Functions with identical argument are not fused.\n- 1 — Functions with identical argument are fused.\n\n**Example**\n\nQuery:\n\n```sql\nCREATE TABLE fuse_tbl(a Int8, b Int8) Engine = Log;\nSET optimize_syntax_fuse_functions = 1;\nEXPLAIN SYNTAX SELECT sum(a), sum(b), count(b), avg(b) from fuse_tbl FORMAT TSV;\n```\n\nResult:\n\n```text\nSELECT\n sum(a),\n sumCount(b).1,\n sumCount(b).2,\n (sumCount(b).1) / (sumCount(b).2)\nFROM fuse_tbl\n``` \N \N 0 Bool 0 0 Production | |
flatten_nested 1 0 Sets the data format of a [nested](../../sql-reference/data-types/nested-data-structures/index.md) columns.\n\nPossible values:\n\n- 1 — Nested column is flattened to separate arrays.\n- 0 — Nested column stays a single array of tuples.\n\n**Usage**\n\nIf the setting is set to `0`, it is possible to use an arbitrary level of nesting.\n\n**Examples**\n\nQuery:\n\n```sql\nSET flatten_nested = 1;\nCREATE TABLE t_nest (`n` Nested(a UInt32, b UInt32)) ENGINE = MergeTree ORDER BY tuple();\n\nSHOW CREATE TABLE t_nest;\n```\n\nResult:\n\n```text\n┌─statement───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ CREATE TABLE default.t_nest\n(\n `n.a` Array(UInt32),\n `n.b` Array(UInt32)\n)\nENGINE = MergeTree\nORDER BY tuple()\nSETTINGS index_granularity = 8192 │\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n```\n\nQuery:\n\n```sql\nSET flatten_nested = 0;\n\nCREATE TABLE t_nest (`n` Nested(a UInt32, b UInt32)) ENGINE = MergeTree ORDER BY tuple();\n\nSHOW CREATE TABLE t_nest;\n```\n\nResult:\n\n```text\n┌─statement──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ CREATE TABLE default.t_nest\n(\n `n` Nested(a UInt32, b UInt32)\n)\nENGINE = MergeTree\nORDER BY tuple()\nSETTINGS index_granularity = 8192 │\n└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n``` \N \N 0 Bool 1 0 Production | |
asterisk_include_materialized_columns 0 0 Include [MATERIALIZED](/sql-reference/statements/create/view#materialized-view) columns for wildcard query (`SELECT *`).\n\nPossible values:\n\n- 0 - disabled\n- 1 - enabled \N \N 0 Bool 0 0 Production | |
asterisk_include_alias_columns 0 0 Include [ALIAS](../../sql-reference/statements/create/table.md/#alias) columns for wildcard query (`SELECT *`).\n\nPossible values:\n\n- 0 - disabled\n- 1 - enabled \N \N 0 Bool 0 0 Production | |
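A combined sketch for the two wildcard settings above (table and column names are illustrative):

```sql
CREATE TABLE t_wildcard
(
    x       UInt32,
    doubled UInt32 MATERIALIZED x * 2,
    alias_x UInt32 ALIAS x
)
ENGINE = Memory;

INSERT INTO t_wildcard VALUES (1);

SELECT * FROM t_wildcard;  -- only x

SELECT * FROM t_wildcard
SETTINGS asterisk_include_materialized_columns = 1, asterisk_include_alias_columns = 1;  -- x, doubled, alias_x
```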
optimize_skip_merged_partitions 0 0 Enables or disables optimization for [OPTIMIZE TABLE ... FINAL](../../sql-reference/statements/optimize.md) query if there is only one part with level > 0 and it doesn\'t have expired TTL.\n\n- `OPTIMIZE TABLE ... FINAL SETTINGS optimize_skip_merged_partitions=1`\n\nBy default, `OPTIMIZE TABLE ... FINAL` query rewrites the one part even if there is only a single part.\n\nPossible values:\n\n- 1 - Enable optimization.\n- 0 - Disable optimization. \N \N 0 Bool 0 0 Production | |
optimize_on_insert 1 0 Enables or disables data transformation before the insertion, as if merge was done on this block (according to table engine).\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\n**Example**\n\nThe difference between enabled and disabled:\n\nQuery:\n\n```sql\nSET optimize_on_insert = 1;\n\nCREATE TABLE test1 (`FirstTable` UInt32) ENGINE = ReplacingMergeTree ORDER BY FirstTable;\n\nINSERT INTO test1 SELECT number % 2 FROM numbers(5);\n\nSELECT * FROM test1;\n\nSET optimize_on_insert = 0;\n\nCREATE TABLE test2 (`SecondTable` UInt32) ENGINE = ReplacingMergeTree ORDER BY SecondTable;\n\nINSERT INTO test2 SELECT number % 2 FROM numbers(5);\n\nSELECT * FROM test2;\n```\n\nResult:\n\n```text\n┌─FirstTable─┐\n│ 0 │\n│ 1 │\n└────────────┘\n\n┌─SecondTable─┐\n│ 0 │\n│ 0 │\n│ 0 │\n│ 1 │\n│ 1 │\n└─────────────┘\n```\n\nNote that this setting influences [Materialized view](/sql-reference/statements/create/view#materialized-view) behaviour. \N \N 0 Bool 1 0 Production | |
optimize_use_projections 1 0 Enables or disables [projection](../../engines/table-engines/mergetree-family/mergetree.md/#projections) optimization when processing `SELECT` queries.\n\nPossible values:\n\n- 0 — Projection optimization disabled.\n- 1 — Projection optimization enabled. \N \N 0 Bool 1 0 Production | |
allow_experimental_projection_optimization 1 0 Enables or disables [projection](../../engines/table-engines/mergetree-family/mergetree.md/#projections) optimization when processing `SELECT` queries.\n\nPossible values:\n\n- 0 — Projection optimization disabled.\n- 1 — Projection optimization enabled. \N \N 0 Bool 1 optimize_use_projections 0 Production | |
optimize_use_implicit_projections 1 0 Automatically choose implicit projections to perform SELECT query \N \N 0 Bool 1 0 Production | |
force_optimize_projection 0 0 Enables or disables the obligatory use of [projections](../../engines/table-engines/mergetree-family/mergetree.md/#projections) in `SELECT` queries, when projection optimization is enabled (see [optimize_use_projections](#optimize_use_projections) setting).\n\nPossible values:\n\n- 0 — Projection optimization is not obligatory.\n- 1 — Projection optimization is obligatory. \N \N 0 Bool 0 0 Production | |
force_optimize_projection_name 0 If it is set to a non-empty string, check that this projection is used in the query at least once.\n\nPossible values:\n\n- string: name of the projection used in a query \N \N 0 String 0 Production | |
preferred_optimize_projection_name 0 If it is set to a non-empty string, ClickHouse will try to apply the specified projection in the query.\n\nPossible values:\n\n- string: name of preferred projection \N \N 0 String 0 Production | |
async_socket_for_remote 1 0 Enables asynchronous read from socket while executing remote query.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
async_query_sending_for_remote 1 0 Enables asynchronous connection creation and query sending while executing remote query.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
insert_null_as_default 1 0 Enables or disables the insertion of [default values](/sql-reference/statements/create/table#default_values) instead of [NULL](/sql-reference/syntax#null) into columns with not [nullable](/sql-reference/data-types/nullable) data type.\nIf column type is not nullable and this setting is disabled, then inserting `NULL` causes an exception. If column type is nullable, then `NULL` values are inserted as is, regardless of this setting.\n\nThis setting is applicable to [INSERT ... SELECT](../../sql-reference/statements/insert-into.md/#inserting-the-results-of-select) queries. Note that `SELECT` subqueries may be concatenated with `UNION ALL` clause.\n\nPossible values:\n\n- 0 — Inserting `NULL` into a not nullable column causes an exception.\n- 1 — Default column value is inserted instead of `NULL`. \N \N 0 Bool 1 0 Production | |
describe_extend_object_types 0 0 Deduce concrete type of columns of type Object in DESCRIBE query \N \N 0 Bool 0 0 Production | |
describe_include_subcolumns 0 0 Enables describing subcolumns for a [DESCRIBE](../../sql-reference/statements/describe-table.md) query. For example, members of a [Tuple](../../sql-reference/data-types/tuple.md) or subcolumns of a [Map](/sql-reference/data-types/map#reading-subcolumns-of-map), [Nullable](../../sql-reference/data-types/nullable.md/#finding-null) or an [Array](../../sql-reference/data-types/array.md/#array-size) data type.\n\nPossible values:\n\n- 0 — Subcolumns are not included in `DESCRIBE` queries.\n- 1 — Subcolumns are included in `DESCRIBE` queries.\n\n**Example**\n\nSee an example for the [DESCRIBE](../../sql-reference/statements/describe-table.md) statement. \N \N 0 Bool 0 0 Production | |
describe_include_virtual_columns 0 0 If true, virtual columns of the table will be included in the result of the DESCRIBE query \N \N 0 Bool 0 0 Production | |
describe_compact_output 0 0 If true, include only column names and types in the result of the DESCRIBE query \N \N 0 Bool 0 0 Production | |
apply_mutations_on_fly 0 0 If true, mutations (UPDATEs and DELETEs) which are not materialized in a data part will be applied during SELECTs. \N \N 0 Bool 0 0 Production | |
mutations_execute_nondeterministic_on_initiator 0 0 If true, constant nondeterministic functions (e.g. the function `now()`) are executed on the initiator and replaced with literals in `UPDATE` and `DELETE` queries. It helps to keep data in sync on replicas while executing mutations with constant nondeterministic functions. Default value: `false`. \N \N 0 Bool 0 0 Production | |
mutations_execute_subqueries_on_initiator 0 0 If true, scalar subqueries are executed on the initiator and replaced with literals in `UPDATE` and `DELETE` queries. Default value: `false`. \N \N 0 Bool 0 0 Production | |
mutations_max_literal_size_to_replace 16384 0 The maximum size of a serialized literal in bytes to replace in `UPDATE` and `DELETE` queries. Takes effect only if at least one of the two settings above is enabled. Default value: 16384 (16 KiB). \N \N 0 UInt64 16384 0 Production | |
create_replicated_merge_tree_fault_injection_probability 0 0 The probability of a fault injection during table creation after creating metadata in ZooKeeper \N \N 0 Float 0 0 Production | |
use_iceberg_metadata_files_cache 1 0 If turned on, iceberg table function and iceberg storage may utilize the iceberg metadata files cache. \N \N 0 Bool 1 0 Production | |
use_query_cache 0 0 If turned on, `SELECT` queries may utilize the [query cache](../query-cache.md). Parameters [enable_reads_from_query_cache](#enable_reads_from_query_cache)\nand [enable_writes_to_query_cache](#enable_writes_to_query_cache) control in more detail how the cache is used.\n\nPossible values:\n\n- 0 - Disabled\n- 1 - Enabled \N \N 0 Bool 0 0 Production | |
enable_writes_to_query_cache 1 0 If turned on, results of `SELECT` queries are stored in the [query cache](../query-cache.md).\n\nPossible values:\n\n- 0 - Disabled\n- 1 - Enabled \N \N 0 Bool 1 0 Production | |
enable_reads_from_query_cache 1 0 If turned on, results of `SELECT` queries are retrieved from the [query cache](../query-cache.md).\n\nPossible values:\n\n- 0 - Disabled\n- 1 - Enabled \N \N 0 Bool 1 0 Production | |
query_cache_nondeterministic_function_handling throw 0 Controls how the [query cache](../query-cache.md) handles `SELECT` queries with non-deterministic functions like `rand()` or `now()`.\n\nPossible values:\n\n- `\'throw\'` - Throw an exception and don\'t cache the query result.\n- `\'save\'` - Cache the query result.\n- `\'ignore\'` - Don\'t cache the query result and don\'t throw an exception. \N \N 0 QueryResultCacheNondeterministicFunctionHandling throw 0 Production | |
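For instance, a sketch of the three modes with a non-deterministic function:

```sql
-- default 'throw': rejected because now() is non-deterministic
SELECT now() SETTINGS use_query_cache = 1;

-- 'save': the result is cached despite the non-deterministic function
SELECT now() SETTINGS use_query_cache = 1, query_cache_nondeterministic_function_handling = 'save';

-- 'ignore': the query runs but its result is not cached
SELECT now() SETTINGS use_query_cache = 1, query_cache_nondeterministic_function_handling = 'ignore';
```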
query_cache_system_table_handling throw 0 Controls how the [query cache](../query-cache.md) handles `SELECT` queries against system tables, i.e. tables in databases `system.*` and `information_schema.*`.\n\nPossible values:\n\n- `\'throw\'` - Throw an exception and don\'t cache the query result.\n- `\'save\'` - Cache the query result.\n- `\'ignore\'` - Don\'t cache the query result and don\'t throw an exception. \N \N 0 QueryResultCacheSystemTableHandling throw 0 Production | |
query_cache_max_size_in_bytes 0 0 The maximum amount of memory (in bytes) the current user may allocate in the [query cache](../query-cache.md). 0 means unlimited.\n\nPossible values:\n\n- Positive integer >= 0. \N \N 0 UInt64 0 0 Production | |
query_cache_max_entries 0 0 The maximum number of query results the current user may store in the [query cache](../query-cache.md). 0 means unlimited.\n\nPossible values:\n\n- Positive integer >= 0. \N \N 0 UInt64 0 0 Production | |
query_cache_min_query_runs 0 0 Minimum number of times a `SELECT` query must run before its result is stored in the [query cache](../query-cache.md).\n\nPossible values:\n\n- Positive integer >= 0. \N \N 0 UInt64 0 0 Production | |
query_cache_min_query_duration 0 0 Minimum duration in milliseconds a query needs to run for its result to be stored in the [query cache](../query-cache.md).\n\nPossible values:\n\n- Positive integer >= 0. \N \N 0 Milliseconds 0 0 Production | |
query_cache_compress_entries 1 0 Compress entries in the [query cache](../query-cache.md). Lessens the memory consumption of the query cache at the cost of slower inserts into / reads from it.\n\nPossible values:\n\n- 0 - Disabled\n- 1 - Enabled \N \N 0 Bool 1 0 Production | |
query_cache_squash_partial_results 1 0 Squash partial result blocks to blocks of size [max_block_size](#max_block_size). Reduces performance of inserts into the [query cache](../query-cache.md) but improves the compressibility of cache entries (see [query_cache_compress_entries](#query_cache_compress_entries)).\n\nPossible values:\n\n- 0 - Disabled\n- 1 - Enabled \N \N 0 Bool 1 0 Production | |
query_cache_ttl 60 0 After this time in seconds entries in the [query cache](../query-cache.md) become stale.\n\nPossible values:\n\n- Positive integer >= 0. \N \N 0 Seconds 60 0 Production | |
query_cache_share_between_users 0 0 If turned on, the result of `SELECT` queries cached in the [query cache](../query-cache.md) can be read by other users.\nIt is not recommended to enable this setting due to security reasons.\n\nPossible values:\n\n- 0 - Disabled\n- 1 - Enabled \N \N 0 Bool 0 0 Production | |
query_cache_tag 0 A string which acts as a label for [query cache](../query-cache.md) entries.\nThe same queries with different tags are considered different by the query cache.\n\nPossible values:\n\n- Any string \N \N 0 String 0 Production | |
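A brief sketch; the tag values are arbitrary:

```sql
-- the same query stored under two different cache entries, one per tag
SELECT count() FROM numbers(1000000)
SETTINGS use_query_cache = 1, query_cache_tag = 'report_a';

SELECT count() FROM numbers(1000000)
SETTINGS use_query_cache = 1, query_cache_tag = 'report_b';
```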
enable_sharing_sets_for_mutations 1 0 Allow sharing set objects built for IN subqueries between different tasks of the same mutation. This reduces memory usage and CPU consumption \N \N 0 Bool 1 0 Production | |
use_query_condition_cache 1 0 Enable the [query condition cache](/operations/query-condition-cache). The cache stores ranges of granules in data parts which do not satisfy the condition in the `WHERE` clause,\nand reuses this information as an ephemeral index for subsequent queries.\n\nPossible values:\n\n- 0 - Disabled\n- 1 - Enabled \N \N 0 Bool 1 0 Production | |
query_condition_cache_store_conditions_as_plaintext 0 0 Stores the filter condition for the [query condition cache](/operations/query-condition-cache) in plaintext.\nIf enabled, system.query_condition_cache shows the verbatim filter condition which makes it easier to debug issues with the cache.\nDisabled by default because plaintext filter conditions may expose sensitive information.\n\nPossible values:\n\n- 0 - Disabled\n- 1 - Enabled \N \N 0 Bool 0 0 Production | |
optimize_rewrite_sum_if_to_count_if 1 0 Rewrite sumIf() and sum(if()) functions to countIf() when logically equivalent \N \N 0 Bool 1 0 Production | |
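A hedged sketch of the kind of rewrite this enables; with the setting on, the plan is expected to show `countIf(...)` in place of the `sum(if(...))` expression:

```sql
EXPLAIN SYNTAX
SELECT sum(if(number % 2 = 1, 1, 0))
FROM numbers(10)
SETTINGS optimize_rewrite_sum_if_to_count_if = 1;
```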
optimize_rewrite_aggregate_function_with_if 1 0 Rewrite aggregate functions with if expression as argument when logically equivalent.\nFor example, `avg(if(cond, col, null))` can be rewritten to `avgOrNullIf(cond, col)`. It may improve performance.\n\n:::note\nSupported only with the analyzer (`enable_analyzer = 1`).\n::: \N \N 0 Bool 1 0 Production | |
optimize_rewrite_array_exists_to_has 0 0 Rewrite arrayExists() functions to has() when logically equivalent. For example, arrayExists(x -> x = 1, arr) can be rewritten to has(arr, 1) \N \N 0 Bool 0 0 Production | |
insert_shard_id 0 0 If not `0`, specifies the shard of [Distributed](/engines/table-engines/special/distributed) table into which the data will be inserted synchronously.\n\nIf `insert_shard_id` value is incorrect, the server will throw an exception.\n\nTo get the number of shards on `requested_cluster`, you can check server config or use this query:\n\n```sql\nSELECT uniq(shard_num) FROM system.clusters WHERE cluster = \'requested_cluster\';\n```\n\nPossible values:\n\n- 0 — Disabled.\n- Any number from `1` to `shards_num` of corresponding [Distributed](/engines/table-engines/special/distributed) table.\n\n**Example**\n\nQuery:\n\n```sql\nCREATE TABLE x AS system.numbers ENGINE = MergeTree ORDER BY number;\nCREATE TABLE x_dist AS x ENGINE = Distributed(\'test_cluster_two_shards_localhost\', currentDatabase(), x);\nINSERT INTO x_dist SELECT * FROM numbers(5) SETTINGS insert_shard_id = 1;\nSELECT * FROM x_dist ORDER BY number ASC;\n```\n\nResult:\n\n```text\n┌─number─┐\n│ 0 │\n│ 0 │\n│ 1 │\n│ 1 │\n│ 2 │\n│ 2 │\n│ 3 │\n│ 3 │\n│ 4 │\n│ 4 │\n└────────┘\n``` \N \N 0 UInt64 0 0 Production | |
collect_hash_table_stats_during_aggregation 1 0 Enable collecting hash table statistics to optimize memory allocation \N \N 0 Bool 1 0 Production | |
max_size_to_preallocate_for_aggregation 1000000000000 0 For how many elements it is allowed to preallocate space in all hash tables in total before aggregation \N \N 0 UInt64 1000000000000 0 Production | |
collect_hash_table_stats_during_joins 1 0 Enable collecting hash table statistics to optimize memory allocation \N \N 0 Bool 1 0 Production | |
max_size_to_preallocate_for_joins 1000000000000 0 For how many elements it is allowed to preallocate space in all hash tables in total before join \N \N 0 UInt64 1000000000000 0 Production | |
kafka_disable_num_consumers_limit 0 0 Disable limit on kafka_num_consumers that depends on the number of available CPU cores. \N \N 0 Bool 0 0 Production | |
allow_experimental_kafka_offsets_storage_in_keeper 0 0 Allow experimental feature to store Kafka related offsets in ClickHouse Keeper. When enabled a ClickHouse Keeper path and replica name can be specified to the Kafka table engine. As a result instead of the regular Kafka engine, a new type of storage engine will be used that stores the committed offsets primarily in ClickHouse Keeper \N \N 0 Bool 0 0 Experimental | |
enable_software_prefetch_in_aggregation 1 0 Enable use of software prefetch in aggregation \N \N 0 Bool 1 0 Production | |
allow_aggregate_partitions_independently 0 0 Enable independent aggregation of partitions on separate threads when the partition key suits the group by key. Beneficial when the number of partitions is close to the number of cores and partitions have roughly the same size \N \N 0 Bool 0 0 Production | |
force_aggregate_partitions_independently 0 0 Force the use of optimization when it is applicable, but heuristics decided not to use it \N \N 0 Bool 0 0 Production | |
max_number_of_partitions_for_independent_aggregation 128 0 Maximal number of partitions in table to apply optimization \N \N 0 UInt64 128 0 Production | |
min_hit_rate_to_use_consecutive_keys_optimization 0.5 0 Minimal hit rate of a cache which is used for consecutive keys optimization in aggregation to keep it enabled \N \N 0 Float 0.5 0 Production | |
engine_file_empty_if_not_exists 0 0 Allows selecting data from a file engine table when the underlying file does not exist.\n\nPossible values:\n- 0 — `SELECT` throws an exception.\n- 1 — `SELECT` returns an empty result. \N \N 0 Bool 0 0 Production | |
engine_file_truncate_on_insert 0 0 Enables or disables truncate before insert in [File](../../engines/table-engines/special/file.md) engine tables.\n\nPossible values:\n- 0 — `INSERT` query appends new data to the end of the file.\n- 1 — `INSERT` query replaces existing content of the file with the new data. \N \N 0 Bool 0 0 Production | |
engine_file_allow_create_multiple_files 0 0 Enables or disables creating a new file on each insert in file engine tables if the format has the suffix (`JSON`, `ORC`, `Parquet`, etc.). If enabled, on each insert a new file will be created with a name following this pattern:\n\n`data.Parquet` -> `data.1.Parquet` -> `data.2.Parquet`, etc.\n\nPossible values:\n- 0 — `INSERT` query appends new data to the end of the file.\n- 1 — `INSERT` query creates a new file. \N \N 0 Bool 0 0 Production | |
engine_file_skip_empty_files 0 0 Enables or disables skipping empty files in [File](../../engines/table-engines/special/file.md) engine tables.\n\nPossible values:\n- 0 — `SELECT` throws an exception if empty file is not compatible with requested format.\n- 1 — `SELECT` returns empty result for empty file. \N \N 0 Bool 0 0 Production | |
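A minimal sketch of the File engine settings above; the table name is hypothetical:

```sql
CREATE TABLE file_demo (x UInt64) ENGINE = File(CSV);

-- No file has been written yet: return an empty result instead of throwing.
SELECT * FROM file_demo SETTINGS engine_file_empty_if_not_exists = 1;

-- Replace the file contents instead of appending on this INSERT.
INSERT INTO file_demo SETTINGS engine_file_truncate_on_insert = 1 VALUES (1), (2), (3);
```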
engine_url_skip_empty_files 0 0 Enables or disables skipping empty files in [URL](../../engines/table-engines/special/url.md) engine tables.\n\nPossible values:\n- 0 — `SELECT` throws an exception if empty file is not compatible with requested format.\n- 1 — `SELECT` returns empty result for empty file. \N \N 0 Bool 0 0 Production | |
enable_url_encoding 1 0 Allows enabling/disabling decoding/encoding of the path in the URI in [URL](../../engines/table-engines/special/url.md) engine tables.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
database_replicated_initial_query_timeout_sec 300 0 Sets how long initial DDL query should wait for Replicated database to process previous DDL queue entries in seconds.\n\nPossible values:\n\n- Positive integer.\n- 0 — Unlimited. \N \N 0 UInt64 300 0 Production | |
database_replicated_enforce_synchronous_settings 0 0 Enforces synchronous waiting for some queries (see also database_atomic_wait_for_drop_and_detach_synchronously, mutations_sync, alter_sync). Not recommended to enable these settings. \N \N 0 Bool 0 0 Production | |
max_distributed_depth 5 0 Limits the maximum depth of recursive queries for [Distributed](../../engines/table-engines/special/distributed.md) tables.\n\nIf the value is exceeded, the server throws an exception.\n\nPossible values:\n\n- Positive integer.\n- 0 — Unlimited depth. \N \N 0 UInt64 5 0 Production | |
database_replicated_always_detach_permanently 0 0 Execute DETACH TABLE as DETACH TABLE PERMANENTLY if database engine is Replicated \N \N 0 Bool 0 0 Production | |
database_replicated_allow_only_replicated_engine 0 0 Allow to create only Replicated tables in database with engine Replicated \N \N 0 Bool 0 0 Production | |
database_replicated_allow_replicated_engine_arguments 0 0 0 - Don\'t allow to explicitly specify ZooKeeper path and replica name for *MergeTree tables in Replicated databases. 1 - Allow. 2 - Allow, but ignore the specified path and use default one instead. 3 - Allow and don\'t log a warning. \N \N 0 UInt64 0 0 Production | |
database_replicated_allow_explicit_uuid 0 0 0 - Don\'t allow to explicitly specify UUIDs for tables in Replicated databases. 1 - Allow. 2 - Allow, but ignore the specified UUID and generate a random one instead. \N \N 0 UInt64 0 0 Production | |
database_replicated_allow_heavy_create 0 0 Allow long-running DDL queries (CREATE AS SELECT and POPULATE) in Replicated database engine. Note that it can block DDL queue for a long time. \N \N 0 Bool 0 0 Production | |
cloud_mode 0 0 Cloud mode \N \N 0 Bool 0 0 Production | |
cloud_mode_engine 1 0 The engine family allowed in Cloud.\n\n- 0 - allow everything\n- 1 - rewrite DDLs to use *ReplicatedMergeTree\n- 2 - rewrite DDLs to use SharedMergeTree\n- 3 - rewrite DDLs to use SharedMergeTree except when explicitly passed remote disk is specified\n\nUInt64 to minimize public part \N \N 0 UInt64 1 0 Production | |
cloud_mode_database_engine 1 0 The database engine allowed in Cloud. 1 - rewrite DDLs to use Replicated database, 2 - rewrite DDLs to use Shared database \N \N 0 UInt64 1 0 Production | |
distributed_ddl_output_mode throw 0 Sets format of distributed DDL query result.\n\nPossible values:\n\n- `throw` — Returns result set with query execution status for all hosts where query is finished. If query has failed on some hosts, then it will rethrow the first exception. If query is not finished yet on some hosts and [distributed_ddl_task_timeout](#distributed_ddl_task_timeout) exceeded, then it throws `TIMEOUT_EXCEEDED` exception.\n- `none` — Is similar to throw, but distributed DDL query returns no result set.\n- `null_status_on_timeout` — Returns `NULL` as execution status in some rows of result set instead of throwing `TIMEOUT_EXCEEDED` if query is not finished on the corresponding hosts.\n- `never_throw` — Do not throw `TIMEOUT_EXCEEDED` and do not rethrow exceptions if query has failed on some hosts.\n- `none_only_active` - similar to `none`, but doesn\'t wait for inactive replicas of the `Replicated` database. Note: with this mode it\'s impossible to figure out that the query was not executed on some replica and will be executed in background.\n- `null_status_on_timeout_only_active` — similar to `null_status_on_timeout`, but doesn\'t wait for inactive replicas of the `Replicated` database\n- `throw_only_active` — similar to `throw`, but doesn\'t wait for inactive replicas of the `Replicated` database\n\nCloud default value: `none`. \N \N 0 DistributedDDLOutputMode throw 0 Production | |
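A hedged sketch of how `distributed_ddl_output_mode` changes the result set of an `ON CLUSTER` DDL query; the cluster and table names are hypothetical:

```sql
SET distributed_ddl_output_mode = 'null_status_on_timeout';

-- Hosts that do not answer within distributed_ddl_task_timeout are reported with a
-- NULL execution status instead of failing the whole query with TIMEOUT_EXCEEDED.
CREATE TABLE t_on_cluster ON CLUSTER my_cluster (x UInt64)
ENGINE = MergeTree ORDER BY x;
```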
distributed_ddl_entry_format_version 5 0 Compatibility version of distributed DDL (ON CLUSTER) queries \N \N 0 UInt64 5 0 Production | |
external_storage_max_read_rows 0 0 Limit maximum number of rows when table with external engine should flush history data. Now supported only for MySQL table engine, database engine, and dictionary. If equal to 0, this setting is disabled \N \N 0 UInt64 0 0 Production | |
external_storage_max_read_bytes 0 0 Limit maximum number of bytes when table with external engine should flush history data. Now supported only for MySQL table engine, database engine, and dictionary. If equal to 0, this setting is disabled \N \N 0 UInt64 0 0 Production | |
external_storage_connect_timeout_sec 10 0 Connect timeout in seconds. Now supported only for MySQL \N \N 0 UInt64 10 0 Production | |
external_storage_rw_timeout_sec 300 0 Read/write timeout in seconds. Now supported only for MySQL \N \N 0 UInt64 300 0 Production | |
allow_experimental_correlated_subqueries 0 0 Allow to execute correlated subqueries. \N \N 0 Bool 0 0 Experimental | |
union_default_mode 0 Sets a mode for combining `SELECT` query results. The setting is only used for queries with [UNION](../../sql-reference/statements/select/union.md) that do not explicitly specify `UNION ALL` or `UNION DISTINCT`.\n\nPossible values:\n\n- `\'DISTINCT\'` — ClickHouse outputs rows as a result of combining queries, removing duplicate rows.\n- `\'ALL\'` — ClickHouse outputs all rows as a result of combining queries, including duplicate rows.\n- `\'\'` — ClickHouse generates an exception when used with `UNION`.\n\nSee examples in [UNION](../../sql-reference/statements/select/union.md). \N \N 0 SetOperationMode 0 Production | |
intersect_default_mode ALL 0 Sets the default mode for an INTERSECT query. Possible values: empty string, \'ALL\', \'DISTINCT\'. If empty, a query without a mode will throw an exception. \N \N 0 SetOperationMode ALL 0 Production | |
except_default_mode ALL 0 Sets the default mode for an EXCEPT query. Possible values: empty string, \'ALL\', \'DISTINCT\'. If empty, a query without a mode will throw an exception. \N \N 0 SetOperationMode ALL 0 Production | |
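A small sketch of how the default set-operation modes above apply when the mode keyword is omitted:

```sql
SET union_default_mode = 'DISTINCT';

SELECT 1 UNION SELECT 1;      -- treated as UNION DISTINCT: one row
SELECT 1 INTERSECT SELECT 1;  -- intersect_default_mode = 'ALL' applies
SELECT 1 EXCEPT SELECT 2;     -- except_default_mode = 'ALL' applies
```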
optimize_aggregators_of_group_by_keys 1 0 Eliminates min/max/any/anyLast aggregators of GROUP BY keys in SELECT section \N \N 0 Bool 1 0 Production | |
optimize_injective_functions_in_group_by 1 0 Replaces injective functions with their arguments in the GROUP BY section \N \N 0 Bool 1 0 Production | |
optimize_group_by_function_keys 1 0 Eliminates functions of other keys in GROUP BY section \N \N 0 Bool 1 0 Production | |
optimize_group_by_constant_keys 1 0 Optimize GROUP BY when all keys in block are constant \N \N 0 Bool 1 0 Production | |
legacy_column_name_of_tuple_literal 0 0 List all names of elements of large tuple literals in their column names instead of a hash. This setting exists only for compatibility reasons. It makes sense to set it to \'true\' while doing a rolling update of a cluster from a version lower than 21.7 to a higher one. \N \N 0 Bool 0 0 Production | |
enable_named_columns_in_function_tuple 0 0 Generate named tuples in function tuple() when all names are unique and can be treated as unquoted identifiers. \N \N 0 Bool 0 0 Production | |
query_plan_enable_optimizations 1 0 Toggles query optimization at the query plan level.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable all optimizations at the query plan level\n- 1 - Enable optimizations at the query plan level (but individual optimizations may still be disabled via their individual settings) \N \N 0 Bool 1 0 Production | |
query_plan_max_optimizations_to_apply 10000 0 Limits the total number of optimizations applied to query plan, see setting [query_plan_enable_optimizations](#query_plan_enable_optimizations).\nUseful to avoid long optimization times for complex queries.\nIn the EXPLAIN PLAN query, stop applying optimizations after this limit is reached and return the plan as is.\nFor regular query execution if the actual number of optimizations exceeds this setting, an exception is thrown.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n::: \N \N 0 UInt64 10000 0 Production | |
query_plan_lift_up_array_join 1 0 Toggles a query-plan-level optimization which moves ARRAY JOINs up in the execution plan.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_push_down_limit 1 0 Toggles a query-plan-level optimization which moves LIMITs down in the execution plan.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_split_filter 1 0 :::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nToggles a query-plan-level optimization which splits filters into expressions.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_merge_expressions 1 0 Toggles a query-plan-level optimization which merges consecutive filters.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_merge_filters 1 0 Allow to merge filters in the query plan \N \N 0 Bool 1 0 Production | |
query_plan_filter_push_down 1 0 Toggles a query-plan-level optimization which moves filters down in the execution plan.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_convert_outer_join_to_inner_join 1 0 Allow to convert OUTER JOIN to INNER JOIN if filter after JOIN always filters default values \N \N 0 Bool 1 0 Production | |
query_plan_merge_filter_into_join_condition 1 0 Allow to merge filter into JOIN condition and convert CROSS JOIN to INNER. \N \N 0 Bool 1 0 Production | |
query_plan_convert_join_to_in 0 0 Allow to convert JOIN to a subquery with IN if output columns are tied only to the left table \N \N 0 Bool 0 0 Production | |
query_plan_optimize_prewhere 1 0 Allow to push down filter to PREWHERE expression for supported storages \N \N 0 Bool 1 0 Production | |
query_plan_execute_functions_after_sorting 1 0 Toggles a query-plan-level optimization which moves expressions after sorting steps.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_reuse_storage_ordering_for_window_functions 1 0 Toggles a query-plan-level optimization which uses storage sorting when sorting for window functions.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_lift_up_union 1 0 Toggles a query-plan-level optimization which moves larger subtrees of the query plan into union to enable further optimizations.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_read_in_order 1 0 Toggles the read-in-order query-plan-level optimization.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_aggregation_in_order 1 0 Toggles the aggregation in-order query-plan-level optimization.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_remove_redundant_sorting 1 0 Toggles a query-plan-level optimization which removes redundant sorting steps, e.g. in subqueries.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_remove_redundant_distinct 1 0 Toggles a query-plan-level optimization which removes redundant DISTINCT steps.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_try_use_vector_search 1 0 Toggles a query-plan-level optimization which tries to use the vector similarity index.\nOnly takes effect if setting [query_plan_enable_optimizations](#query_plan_enable_optimizations) is 1.\n\n:::note\nThis is an expert-level setting which should only be used for debugging by developers. The setting may change in future in backward-incompatible ways or be removed.\n:::\n\nPossible values:\n\n- 0 - Disable\n- 1 - Enable \N \N 0 Bool 1 0 Production | |
query_plan_enable_multithreading_after_window_functions 1 0 Enable multithreading after evaluating window functions to allow parallel stream processing \N \N 0 Bool 1 0 Production | |
query_plan_optimize_lazy_materialization 1 0 Use query plan for lazy materialization optimization \N \N 0 Bool 1 0 Production | |
query_plan_max_limit_for_lazy_materialization 10 0 Control maximum limit value that allows to use query plan for lazy materialization optimization. If zero, there is no limit \N \N 0 UInt64 10 0 Production | |
query_plan_use_new_logical_join_step 1 0 Use new logical join step in query plan \N \N 0 Bool 1 0 Production | |
serialize_query_plan 0 0 Serialize query plan for distributed processing \N \N 0 Bool 0 0 Production | |
regexp_max_matches_per_row 1000 0 Sets the maximum number of matches for a single regular expression per row. Use it to protect against memory overload when using greedy regular expression in the [extractAllGroupsHorizontal](/sql-reference/functions/string-search-functions#extractallgroupshorizontal) function.\n\nPossible values:\n\n- Positive integer. \N \N 0 UInt64 1000 0 Production | |
limit 0 0 Sets the maximum number of rows to get from the query result. It adjusts the value set by the [LIMIT](/sql-reference/statements/select/limit) clause, so that the limit specified in the query cannot exceed the limit set by this setting.\n\nPossible values:\n\n- 0 — The number of rows is not limited.\n- Positive integer. \N \N 0 UInt64 0 0 Production | |
offset 0 0 Sets the number of rows to skip before starting to return rows from the query. It adjusts the offset set by the [OFFSET](/sql-reference/statements/select/offset) clause, so that the two values are added together.\n\nPossible values:\n\n- 0 — No rows are skipped.\n- Positive integer.\n\n**Example**\n\nInput table:\n\n```sql\nCREATE TABLE test (i UInt64) ENGINE = MergeTree() ORDER BY i;\nINSERT INTO test SELECT number FROM numbers(500);\n```\n\nQuery:\n\n```sql\nSET limit = 5;\nSET offset = 7;\nSELECT * FROM test LIMIT 10 OFFSET 100;\n```\nResult:\n\n```text\n┌───i─┐\n│ 107 │\n│ 108 │\n│ 109 │\n└─────┘\n``` \N \N 0 UInt64 0 0 Production | |
function_range_max_elements_in_block 500000000 0 Sets the safety threshold for data volume generated by function [range](/sql-reference/functions/array-functions#rangeend-rangestart--end--step). Defines the maximum number of values generated by function per block of data (sum of array sizes for every row in a block).\n\nPossible values:\n\n- Positive integer.\n\n**See Also**\n\n- [max_block_size](#max_block_size)\n- [min_insert_block_size_rows](#min_insert_block_size_rows) \N \N 0 UInt64 500000000 0 Production | |
function_sleep_max_microseconds_per_block 3000000 0 Maximum number of microseconds the function `sleep` is allowed to sleep for each block. If a user called it with a larger value, it throws an exception. It is a safety threshold. \N \N 0 UInt64 3000000 0 Production | |
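A hedged sketch of the per-block sleep threshold, assuming the limit also covers `sleepEachRow` (which sleeps once per row):

```sql
-- 40 rows * 0.1 s = 4,000,000 microseconds per block, which exceeds the
-- default of 3,000,000 and therefore throws.
SELECT sleepEachRow(0.1) FROM numbers(40);

-- Raising the threshold for this query lets it run.
SELECT sleepEachRow(0.1) FROM numbers(40)
SETTINGS function_sleep_max_microseconds_per_block = 5000000;
```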
function_visible_width_behavior 1 0 The version of `visibleWidth` behavior. 0 - only count the number of code points; 1 - correctly count zero-width and combining characters, count full-width characters as two, estimate the tab width, count delete characters. \N \N 0 UInt64 1 0 Production | |
short_circuit_function_evaluation enable 0 Allows calculating the [if](../../sql-reference/functions/conditional-functions.md/#if), [multiIf](../../sql-reference/functions/conditional-functions.md/#multiif), [and](/sql-reference/functions/logical-functions#and), and [or](/sql-reference/functions/logical-functions#or) functions according to a [short scheme](https://en.wikipedia.org/wiki/Short-circuit_evaluation). This helps optimize the execution of complex expressions in these functions and prevent possible exceptions (such as division by zero when it is not expected).\n\nPossible values:\n\n- `enable` — Enables short-circuit function evaluation for functions that are suitable for it (can throw an exception or computationally heavy).\n- `force_enable` — Enables short-circuit function evaluation for all functions.\n- `disable` — Disables short-circuit function evaluation. \N \N 0 ShortCircuitFunctionEvaluation enable 0 Production | |
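A minimal sketch of short-circuit evaluation preventing an exception that eager evaluation would raise:

```sql
-- With short_circuit_function_evaluation = 'enable' (the default),
-- intDiv(42, number) is only computed for rows where number != 0,
-- so no division-by-zero error is thrown for the first row.
SELECT if(number = 0, 0, intDiv(42, number)) FROM numbers(5)
SETTINGS short_circuit_function_evaluation = 'enable';
```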
storage_file_read_method pread 0 Method of reading data from storage file, one of: `read`, `pread`, `mmap`. The mmap method does not apply to clickhouse-server (it\'s intended for clickhouse-local). \N \N 0 LocalFSReadMethod pread 0 Production | |
local_filesystem_read_method pread_threadpool 0 Method of reading data from local filesystem, one of: read, pread, mmap, io_uring, pread_threadpool. The \'io_uring\' method is experimental and does not work for Log, TinyLog, StripeLog, File, Set and Join, and other tables with append-able files in presence of concurrent reads and writes. \N \N 0 String pread_threadpool 0 Production | |
remote_filesystem_read_method threadpool 0 Method of reading data from remote filesystem, one of: read, threadpool. \N \N 0 String threadpool 0 Production | |
local_filesystem_read_prefetch 0 0 Should use prefetching when reading data from local filesystem. \N \N 0 Bool 0 0 Production | |
remote_filesystem_read_prefetch 1 0 Should use prefetching when reading data from remote filesystem. \N \N 0 Bool 1 0 Production | |
read_priority 0 0 Priority to read data from local filesystem or remote filesystem. Only supported for \'pread_threadpool\' method for local filesystem and for `threadpool` method for remote filesystem. \N \N 0 Int64 0 0 Production | |
merge_tree_min_rows_for_concurrent_read_for_remote_filesystem 0 0 The minimum number of lines to read from one file before the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) engine can parallelize reading, when reading from remote filesystem. We do not recommend using this setting.\n\nPossible values:\n\n- Positive integer. \N \N 0 UInt64 0 0 Production | |
merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem 0 0 The minimum number of bytes to read from one file before [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) engine can parallelize reading, when reading from remote filesystem. We do not recommend using this setting.\n\nPossible values:\n\n- Positive integer. \N \N 0 UInt64 0 0 Production | |
remote_read_min_bytes_for_seek 4194304 0 Min bytes required for remote read (url, s3) to do seek, instead of read with ignore. \N \N 0 UInt64 4194304 0 Production | |
merge_tree_min_bytes_per_task_for_remote_reading 2097152 0 Min bytes to read per task. \N \N 0 UInt64 2097152 0 Production | |
filesystem_prefetch_min_bytes_for_single_read_task 2097152 0 Min bytes to read per task. \N \N 0 UInt64 2097152 merge_tree_min_bytes_per_task_for_remote_reading 0 Production | |
merge_tree_use_const_size_tasks_for_remote_reading 1 0 Whether to use constant size tasks for reading from a remote table. \N \N 0 Bool 1 0 Production | |
merge_tree_determine_task_size_by_prewhere_columns 1 0 Whether to use only prewhere columns size to determine reading task size. \N \N 0 Bool 1 0 Production | |
merge_tree_min_read_task_size 8 0 Hard lower limit on the task size (even when the number of granules is low and the number of available threads is high, we won\'t allocate smaller tasks) \N \N 0 UInt64 8 0 Production | |
merge_tree_compact_parts_min_granules_to_multibuffer_read 16 0 Only has an effect in ClickHouse Cloud. Number of granules in a stripe of a compact part of MergeTree tables to use the multibuffer reader, which supports parallel reading and prefetch. In case of reading from remote fs, using the multibuffer reader increases the number of read requests. \N \N 0 UInt64 16 0 Production | |
async_insert 0 0 If true, data from an INSERT query is stored in a queue and later flushed to the table in the background. If wait_for_async_insert is false, the INSERT query is processed almost instantly; otherwise, the client will wait until the data is flushed to the table \N \N 0 Bool 0 0 Production | |
wait_for_async_insert 1 0 If true, wait for processing of asynchronous insertion \N \N 0 Bool 1 0 Production | |
wait_for_async_insert_timeout 120 0 Timeout for waiting for processing asynchronous insertion \N \N 0 Seconds 120 0 Production | |
async_insert_max_data_size 10485760 0 Maximum size in bytes of unparsed data collected per query before being inserted \N \N 0 UInt64 10485760 0 Production | |
async_insert_max_query_number 450 0 Maximum number of insert queries before being inserted \N \N 0 UInt64 450 0 Production | |
async_insert_poll_timeout_ms 10 0 Timeout for polling data from asynchronous insert queue \N \N 0 Milliseconds 10 0 Production | |
async_insert_use_adaptive_busy_timeout 1 0 If it is set to true, use adaptive busy timeout for asynchronous inserts \N \N 0 Bool 1 0 Production | |
async_insert_busy_timeout_min_ms 50 0 If auto-adjusting is enabled through async_insert_use_adaptive_busy_timeout, minimum time to wait before dumping collected data per query since the first data appeared. It also serves as the initial value for the adaptive algorithm \N \N 0 Milliseconds 50 0 Production | |
async_insert_busy_timeout_max_ms 200 0 Maximum time to wait before dumping collected data per query since the first data appeared. \N \N 0 Milliseconds 200 0 Production | |
async_insert_busy_timeout_ms 200 0 Maximum time to wait before dumping collected data per query since the first data appeared. \N \N 0 Milliseconds 200 async_insert_busy_timeout_max_ms 0 Production | |
async_insert_busy_timeout_increase_rate 0.2 0 The exponential growth rate at which the adaptive asynchronous insert timeout increases \N \N 0 Double 0.2 0 Production | |
async_insert_busy_timeout_decrease_rate 0.2 0 The exponential growth rate at which the adaptive asynchronous insert timeout decreases \N \N 0 Double 0.2 0 Production | |
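A minimal sketch of asynchronous inserts using the settings above; the table name is hypothetical:

```sql
CREATE TABLE events (ts DateTime, payload String) ENGINE = MergeTree ORDER BY ts;

INSERT INTO events
SETTINGS async_insert = 1,
         wait_for_async_insert = 1,            -- block until the buffered data is flushed
         async_insert_busy_timeout_max_ms = 500
VALUES (now(), 'hello');
```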
remote_fs_read_max_backoff_ms 10000 0 Max wait time when trying to read data for remote disk \N \N 0 UInt64 10000 0 Production | |
remote_fs_read_backoff_max_tries 5 0 Max attempts to read with backoff \N \N 0 UInt64 5 0 Production | |
enable_filesystem_cache 1 0 Use cache for remote filesystem. This setting does not turn on/off cache for disks (must be done via disk config), but allows to bypass cache for some queries if intended \N \N 0 Bool 1 0 Production | |
filesystem_cache_name 0 Filesystem cache name to use for stateless table engines or data lakes \N \N 0 String 0 Production | |
enable_filesystem_cache_on_write_operations 0 0 Write into cache on write operations. For this setting to actually work, it must also be added to the disk config \N \N 0 Bool 0 0 Production | |
enable_filesystem_cache_log 0 0 Allows to record the filesystem caching log for each query \N \N 0 Bool 0 0 Production | |
read_from_filesystem_cache_if_exists_otherwise_bypass_cache 0 0 Allow to use the filesystem cache in passive mode - benefit from the existing cache entries, but don\'t put more entries into the cache. If you set this setting for heavy ad-hoc queries and leave it disabled for short real-time queries, this will allow you to avoid cache thrashing by too-heavy queries and to improve the overall system efficiency. \N \N 0 Bool 0 0 Production | |
filesystem_cache_skip_download_if_exceeds_per_query_cache_write_limit 1 0 Skip download from remote filesystem if it exceeds the per-query cache write limit \N \N 0 Bool 1 0 Production | |
skip_download_if_exceeds_query_cache 1 0 Skip download from remote filesystem if it exceeds the per-query cache write limit \N \N 0 Bool 1 filesystem_cache_skip_download_if_exceeds_per_query_cache_write_limit 0 Production | |
filesystem_cache_max_download_size 137438953472 0 Max remote filesystem cache size that can be downloaded by a single query \N \N 0 UInt64 137438953472 0 Production | |
throw_on_error_from_cache_on_write_operations 0 0 Ignore error from cache when caching on write operations (INSERT, merges) \N \N 0 Bool 0 0 Production | |
filesystem_cache_segments_batch_size 20 0 Limit on the size of a single batch of file segments that a read buffer can request from the cache. Too low a value will lead to excessive requests to the cache, too large a value may slow down eviction from the cache \N \N 0 UInt64 20 0 Production | |
filesystem_cache_reserve_space_wait_lock_timeout_milliseconds 1000 0 Wait time to lock cache for space reservation in filesystem cache \N \N 0 UInt64 1000 0 Production | |
filesystem_cache_prefer_bigger_buffer_size 1 0 Prefer bigger buffer size if filesystem cache is enabled to avoid writing small file segments which deteriorate cache performance. On the other hand, enabling this setting might increase memory usage. \N \N 0 Bool 1 0 Production | |
filesystem_cache_boundary_alignment 0 0 Filesystem cache boundary alignment. This setting is applied only for non-disk read (e.g. for cache of remote table engines / table functions, but not for storage configuration of MergeTree tables). Value 0 means no alignment. \N \N 0 UInt64 0 0 Production | |
temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds 600000 0 Wait time to lock cache for space reservation for temporary data in filesystem cache \N \N 0 UInt64 600000 0 Production | |
use_page_cache_for_disks_without_file_cache 0 0 Use userspace page cache for remote disks that don\'t have filesystem cache enabled. \N \N 0 Bool 0 0 Production | |
use_page_cache_with_distributed_cache 0 0 Use userspace page cache when distributed cache is used. \N \N 0 Bool 0 0 Production | |
read_from_page_cache_if_exists_otherwise_bypass_cache 0 0 Use userspace page cache in passive mode, similar to read_from_filesystem_cache_if_exists_otherwise_bypass_cache. \N \N 0 Bool 0 0 Production | |
page_cache_inject_eviction 0 0 Userspace page cache will sometimes invalidate some pages at random. Intended for testing. \N \N 0 Bool 0 0 Production | |
load_marks_asynchronously 0 0 Load MergeTree marks asynchronously \N \N 0 Bool 0 0 Production | |
enable_filesystem_read_prefetches_log 0 0 Log to system.filesystem_read_prefetches_log during query. Should be used only for testing or debugging, not recommended to be turned on by default \N \N 0 Bool 0 0 Production | |
allow_prefetched_read_pool_for_remote_filesystem 1 0 Prefer prefetched threadpool if all parts are on remote filesystem \N \N 0 Bool 1 0 Production | |
allow_prefetched_read_pool_for_local_filesystem 0 0 Prefer prefetched threadpool if all parts are on local filesystem \N \N 0 Bool 0 0 Production | |
prefetch_buffer_size 1048576 0 The maximum size of the prefetch buffer to read from the filesystem. \N \N 0 UInt64 1048576 0 Production | |
filesystem_prefetch_step_bytes 0 0 Prefetch step in bytes. Zero means `auto` - approximately the best prefetch step will be auto deduced, but might not be 100% the best. The actual value might be different because of setting filesystem_prefetch_min_bytes_for_single_read_task \N \N 0 UInt64 0 0 Production | |
filesystem_prefetch_step_marks 0 0 Prefetch step in marks. Zero means `auto` - approximately the best prefetch step will be auto deduced, but might not be 100% the best. The actual value might be different because of setting filesystem_prefetch_min_bytes_for_single_read_task \N \N 0 UInt64 0 0 Production | |
filesystem_prefetch_max_memory_usage 1073741824 0 Maximum memory usage for prefetches. \N \N 0 UInt64 1073741824 0 Production | |
filesystem_prefetches_limit 200 0 Maximum number of prefetches. Zero means unlimited. The setting `filesystem_prefetch_max_memory_usage` is recommended instead if you want to limit prefetching \N \N 0 UInt64 200 0 Production | |
use_structure_from_insertion_table_in_table_functions 2 0 Use structure from insertion table instead of schema inference from data. Possible values: 0 - disabled, 1 - enabled, 2 - auto \N \N 0 UInt64 2 0 Production | |
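A hedged sketch of `use_structure_from_insertion_table_in_table_functions`; the table and file names are hypothetical:

```sql
CREATE TABLE dest (id UInt64, name String) ENGINE = MergeTree ORDER BY id;

-- With the setting at 1 (or the default 2 = auto), the structure of `dest` is used
-- instead of inferring a schema from the CSV file.
INSERT INTO dest
SELECT * FROM file('rows.csv')
SETTINGS use_structure_from_insertion_table_in_table_functions = 1;
```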
http_max_tries 10 0 Max attempts to read via http. \N \N 0 UInt64 10 0 Production | |
http_retry_initial_backoff_ms 100 0 Min milliseconds for backoff, when retrying read via http \N \N 0 UInt64 100 0 Production | |
http_retry_max_backoff_ms 10000 0 Max milliseconds for backoff, when retrying read via http \N \N 0 UInt64 10000 0 Production | |
force_remove_data_recursively_on_drop 0 0 Recursively remove data on DROP query. Avoids \'Directory not empty\' error, but may silently remove detached data \N \N 0 Bool 0 0 Production | |
check_table_dependencies 1 0 Check that DDL query (such as DROP TABLE or RENAME) will not break dependencies \N \N 0 Bool 1 0 Production | |
check_referential_table_dependencies 0 0 Check that DDL query (such as DROP TABLE or RENAME) will not break referential dependencies \N \N 0 Bool 0 0 Production | |
allow_unrestricted_reads_from_keeper 0 0 Allow unrestricted (without condition on path) reads from system.zookeeper table, can be handy, but is not safe for zookeeper \N \N 0 Bool 0 0 Production | |
allow_deprecated_database_ordinary 0 0 Allow to create databases with deprecated Ordinary engine \N \N 0 Bool 0 0 Production | |
allow_deprecated_syntax_for_merge_tree 0 0 Allow to create *MergeTree tables with deprecated engine definition syntax \N \N 0 Bool 0 0 Production | |
allow_asynchronous_read_from_io_pool_for_merge_tree 0 0 Use background I/O pool to read from MergeTree tables. This setting may increase performance for I/O bound queries \N \N 0 Bool 0 0 Production | |
max_streams_for_merge_tree_reading 0 0 If is not zero, limit the number of reading streams for MergeTree table. \N \N 0 UInt64 0 0 Production | |
force_grouping_standard_compatibility 1 0 Make the GROUPING function return 1 when an argument is not used as an aggregation key \N \N 0 Bool 1 0 Production | |
schema_inference_use_cache_for_file 1 0 Use cache in schema inference while using file table function \N \N 0 Bool 1 0 Production | |
schema_inference_use_cache_for_s3 1 0 Use cache in schema inference while using s3 table function \N \N 0 Bool 1 0 Production | |
schema_inference_use_cache_for_azure 1 0 Use cache in schema inference while using azure table function \N \N 0 Bool 1 0 Production | |
schema_inference_use_cache_for_hdfs 1 0 Use cache in schema inference while using hdfs table function \N \N 0 Bool 1 0 Production | |
schema_inference_use_cache_for_url 1 0 Use cache in schema inference while using url table function \N \N 0 Bool 1 0 Production | |
schema_inference_cache_require_modification_time_for_url 1 0 Use schema from cache for URL with last modification time validation (for URLs with Last-Modified header) \N \N 0 Bool 1 0 Production | |
compatibility 0 The `compatibility` setting causes ClickHouse to use the default settings of a previous version of ClickHouse, where the previous version is provided as the setting.\n\nIf settings are set to non-default values, then those settings are honored (only settings that have not been modified are affected by the `compatibility` setting).\n\nThis setting takes a ClickHouse version number as a string, like `22.3`, `22.8`. An empty value means that this setting is disabled.\n\nDisabled by default.\n\n:::note\nIn ClickHouse Cloud the compatibility setting must be set by ClickHouse Cloud support. Please [open a case](https://clickhouse.cloud/support) to have it set.\n::: \N \N 0 String 0 Production | |
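A small sketch of the `compatibility` setting on a self-managed server (in ClickHouse Cloud it is set by support, as noted above):

```sql
-- Settings that were not explicitly changed fall back to their 22.8 defaults for this session.
SET compatibility = '22.8';

-- Explicitly changed settings still win over the compatibility profile.
SET max_block_size = 32768;
```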
additional_table_filters {} 0 An additional filter expression that is applied after reading\nfrom the specified table.\n\n**Example**\n\n```sql\nINSERT INTO table_1 VALUES (1, \'a\'), (2, \'bb\'), (3, \'ccc\'), (4, \'dddd\');\nSELECT * FROM table_1;\n```\n```response\n┌─x─┬─y────┐\n│ 1 │ a │\n│ 2 │ bb │\n│ 3 │ ccc │\n│ 4 │ dddd │\n└───┴──────┘\n```\n```sql\nSELECT *\nFROM table_1\nSETTINGS additional_table_filters = {\'table_1\': \'x != 2\'}\n```\n```response\n┌─x─┬─y────┐\n│ 1 │ a │\n│ 3 │ ccc │\n│ 4 │ dddd │\n└───┴──────┘\n``` \N \N 0 Map {} 0 Production | |
additional_result_filter 0 An additional filter expression to apply to the result of `SELECT` query.\nThis setting is not applied to any subquery.\n\n**Example**\n\n```sql\nINSERT INTO table_1 VALUES (1, \'a\'), (2, \'bb\'), (3, \'ccc\'), (4, \'dddd\');\nSElECT * FROM table_1;\n```\n```response\n┌─x─┬─y────┐\n│ 1 │ a │\n│ 2 │ bb │\n│ 3 │ ccc │\n│ 4 │ dddd │\n└───┴──────┘\n```\n```sql\nSELECT *\nFROM table_1\nSETTINGS additional_result_filter = \'x != 2\'\n```\n```response\n┌─x─┬─y────┐\n│ 1 │ a │\n│ 3 │ ccc │\n│ 4 │ dddd │\n└───┴──────┘\n``` \N \N 0 String 0 Production | |
workload default 0 Name of workload to be used to access resources \N \N 0 String default 0 Production | |
storage_system_stack_trace_pipe_read_timeout_ms 100 0 Maximum time to read from a pipe for receiving information from the threads when querying the `system.stack_trace` table. This setting is used for testing purposes and not meant to be changed by users. \N \N 0 Milliseconds 100 0 Production | |
rename_files_after_processing 0 - **Type:** String\n\n- **Default value:** Empty string\n\nThis setting allows to specify renaming pattern for files processed by `file` table function. When option is set, all files read by `file` table function will be renamed according to specified pattern with placeholders, only if files processing was successful.\n\n### Placeholders\n\n- `%a` — Full original filename (e.g., "sample.csv").\n- `%f` — Original filename without extension (e.g., "sample").\n- `%e` — Original file extension with dot (e.g., ".csv").\n- `%t` — Timestamp (in microseconds).\n- `%%` — Percentage sign ("%").\n\n### Example\n- Option: `--rename_files_after_processing="processed_%f_%t%e"`\n\n- Query: `SELECT * FROM file(\'sample.csv\')`\n\n\nIf reading `sample.csv` is successful, file will be renamed to `processed_sample_1683473210851438.csv` \N \N 0 String 0 Production | |
read_through_distributed_cache 0 0 Only has an effect in ClickHouse Cloud. Allow reading from distributed cache \N \N 0 Bool 0 0 Production | |
write_through_distributed_cache 0 0 Only has an effect in ClickHouse Cloud. Allow writing to distributed cache (writing to s3 will also be done by distributed cache) \N \N 0 Bool 0 0 Production | |
distributed_cache_throw_on_error 0 0 Only has an effect in ClickHouse Cloud. Rethrow exception happened during communication with distributed cache or exception received from distributed cache. Otherwise fallback to skipping distributed cache on error \N \N 0 Bool 0 0 Production | |
distributed_cache_log_mode on_error 0 Only has an effect in ClickHouse Cloud. Mode for writing to system.distributed_cache_log \N \N 0 DistributedCacheLogMode on_error 0 Production | |
distributed_cache_fetch_metrics_only_from_current_az 1 0 Only has an effect in ClickHouse Cloud. Fetch metrics only from current availability zone in system.distributed_cache_metrics, system.distributed_cache_events \N \N 0 Bool 1 0 Production | |
distributed_cache_connect_max_tries 20 0 Only has an effect in ClickHouse Cloud. Number of tries to connect to distributed cache if unsuccessful \N \N 0 UInt64 20 0 Production | |
distributed_cache_read_request_max_tries 20 0 Only has an effect in ClickHouse Cloud. Number of tries to do distributed cache request if unsuccessful \N \N 0 UInt64 20 0 Production | |
distributed_cache_receive_response_wait_milliseconds 60000 0 Only has an effect in ClickHouse Cloud. Wait time in milliseconds to receive data for request from distributed cache \N \N 0 UInt64 60000 0 Production | |
distributed_cache_receive_timeout_milliseconds 10000 0 Only has an effect in ClickHouse Cloud. Wait time in milliseconds to receive any kind of response from distributed cache \N \N 0 UInt64 10000 0 Production | |
distributed_cache_wait_connection_from_pool_milliseconds 100 0 Only has an effect in ClickHouse Cloud. Wait time in milliseconds to receive connection from connection pool if distributed_cache_pool_behaviour_on_limit is wait \N \N 0 UInt64 100 0 Production | |
distributed_cache_bypass_connection_pool 0 0 Only has an effect in ClickHouse Cloud. Allow to bypass distributed cache connection pool \N \N 0 Bool 0 0 Production | |
distributed_cache_pool_behaviour_on_limit wait 0 Only has an effect in ClickHouse Cloud. Identifies behaviour of distributed cache connection on pool limit reached \N \N 0 DistributedCachePoolBehaviourOnLimit wait 0 Production | |
distributed_cache_read_alignment 0 0 Only has an effect in ClickHouse Cloud. A setting for testing purposes, do not change it \N \N 0 UInt64 0 0 Production | |
distributed_cache_max_unacked_inflight_packets 10 0 Only has an effect in ClickHouse Cloud. A maximum number of unacknowledged in-flight packets in a single distributed cache read request \N \N 0 UInt64 10 0 Production | |
distributed_cache_data_packet_ack_window 5 0 Only has an effect in ClickHouse Cloud. A window for sending ACK for DataPacket sequence in a single distributed cache read request \N \N 0 UInt64 5 0 Production | |
distributed_cache_discard_connection_if_unread_data 1 0 Only has an effect in ClickHouse Cloud. Discard connection if some data is unread. \N \N 0 Bool 1 0 Production | |
distributed_cache_min_bytes_for_seek 0 0 Only has an effect in ClickHouse Cloud. Minimum number of bytes to do seek in distributed cache. \N \N 0 UInt64 0 0 Production | |
filesystem_cache_enable_background_download_for_metadata_files_in_packed_storage 1 0 Only has an effect in ClickHouse Cloud. Enables background download for metadata files in packed storage of the filesystem cache \N \N 0 Bool 1 0 Production | |
filesystem_cache_enable_background_download_during_fetch 1 0 Only has an effect in ClickHouse Cloud. Enables background download in the filesystem cache during fetch \N \N 0 Bool 1 0 Production | |
parallelize_output_from_storages 1 0 Parallelize output for reading step from storage. It allows parallelization of query processing right after reading from storage if possible \N \N 0 Bool 1 0 Production | |
insert_deduplication_token 0 The setting allows a user to provide their own deduplication semantics in MergeTree/ReplicatedMergeTree.\nFor example, by providing a unique value for the setting in each INSERT statement,\na user can avoid the same inserted data being deduplicated.\n\n\nPossible values:\n\n- Any string\n\n`insert_deduplication_token` is used for deduplication _only_ when not empty.\n\nFor replicated tables, by default only the 100 most recent inserts for each partition are deduplicated (see [replicated_deduplication_window](merge-tree-settings.md/#replicated_deduplication_window), [replicated_deduplication_window_seconds](merge-tree-settings.md/#replicated_deduplication_window_seconds)).\nFor non-replicated tables see [non_replicated_deduplication_window](merge-tree-settings.md/#non_replicated_deduplication_window).\n\n:::note\n`insert_deduplication_token` works on a partition level (the same as `insert_deduplication` checksum). Multiple partitions can have the same `insert_deduplication_token`.\n:::\n\nExample:\n\n```sql\nCREATE TABLE test_table\n( A Int64 )\nENGINE = MergeTree\nORDER BY A\nSETTINGS non_replicated_deduplication_window = 100;\n\nINSERT INTO test_table SETTINGS insert_deduplication_token = \'test\' VALUES (1);\n\n-- the next insert won\'t be deduplicated because insert_deduplication_token is different\nINSERT INTO test_table SETTINGS insert_deduplication_token = \'test1\' VALUES (1);\n\n-- the next insert will be deduplicated because insert_deduplication_token\n-- is the same as one of the previous\nINSERT INTO test_table SETTINGS insert_deduplication_token = \'test\' VALUES (2);\n\nSELECT * FROM test_table\n\n┌─A─┐\n│ 1 │\n└───┘\n┌─A─┐\n│ 1 │\n└───┘\n``` \N \N 0 String 0 Production | |
count_distinct_optimization 0 0 Rewrite count distinct to subquery of group by \N \N 0 Bool 0 0 Production | |
throw_if_no_data_to_insert 1 0 Allows or forbids empty INSERTs, enabled by default (throws an error on an empty insert). Only applies to INSERTs using [`clickhouse-client`](/interfaces/cli) or using the [gRPC interface](/interfaces/grpc). \N \N 0 Bool 1 0 Production | |
compatibility_ignore_auto_increment_in_create_table 0 0 Ignore AUTO_INCREMENT keyword in column declaration if true, otherwise return error. It simplifies migration from MySQL \N \N 0 Bool 0 0 Production | |
multiple_joins_try_to_keep_original_names 0 0 Do not add aliases to top level expression list on multiple joins rewrite \N \N 0 Bool 0 0 Production | |
optimize_sorting_by_input_stream_properties 1 0 Optimize sorting by sorting properties of input stream \N \N 0 Bool 1 0 Production | |
keeper_max_retries 10 0 Max retries for general keeper operations \N \N 0 UInt64 10 0 Production | |
keeper_retry_initial_backoff_ms 100 0 Initial backoff timeout for general keeper operations \N \N 0 UInt64 100 0 Production | |
keeper_retry_max_backoff_ms 5000 0 Max backoff timeout for general keeper operations \N \N 0 UInt64 5000 0 Production | |
insert_keeper_max_retries 20 0 The setting sets the maximum number of retries for ClickHouse Keeper (or ZooKeeper) requests during insert into replicated MergeTree. Only Keeper requests which failed due to network error, Keeper session timeout, or request timeout are considered for retries.\n\nPossible values:\n\n- Positive integer.\n- 0 — Retries are disabled\n\nCloud default value: `20`.\n\nKeeper request retries are done after some timeout. The timeout is controlled by the following settings: `insert_keeper_retry_initial_backoff_ms`, `insert_keeper_retry_max_backoff_ms`.\nThe first retry is done after `insert_keeper_retry_initial_backoff_ms` timeout. The consequent timeouts will be calculated as follows:\n```\ntimeout = min(insert_keeper_retry_max_backoff_ms, latest_timeout * 2)\n```\n\nFor example, if `insert_keeper_retry_initial_backoff_ms=100`, `insert_keeper_retry_max_backoff_ms=10000` and `insert_keeper_max_retries=8` then timeouts will be `100, 200, 400, 800, 1600, 3200, 6400, 10000`.\n\nApart from fault tolerance, the retries aim to provide a better user experience - they allow to avoid returning an error during INSERT execution if Keeper is restarted, for example, due to an upgrade. \N \N 0 UInt64 20 0 Production | |
insert_keeper_retry_initial_backoff_ms 100 0 Initial timeout(in milliseconds) to retry a failed Keeper request during INSERT query execution\n\nPossible values:\n\n- Positive integer.\n- 0 — No timeout \N \N 0 UInt64 100 0 Production | |
insert_keeper_retry_max_backoff_ms 10000 0 Maximum timeout (in milliseconds) to retry a failed Keeper request during INSERT query execution\n\nPossible values:\n\n- Positive integer.\n- 0 — Maximum timeout is not limited \N \N 0 UInt64 10000 0 Production | |
insert_keeper_fault_injection_probability 0 0 Approximate probability of failure for a keeper request during insert. Valid value is in interval [0.0f, 1.0f] \N \N 0 Float 0 0 Production | |
insert_keeper_fault_injection_seed 0 0 0 - random seed, otherwise the setting value \N \N 0 UInt64 0 0 Production | |
force_aggregation_in_order 0 0 The setting is used by the server itself to support distributed queries. Do not change it manually, because it will break normal operations. (Forces use of aggregation in order on remote nodes during distributed aggregation). \N \N 0 Bool 0 0 Production | |
http_max_request_param_data_size 10485760 0 Limit on size of request data used as a query parameter in predefined HTTP requests. \N \N 0 UInt64 10485760 0 Production | |
function_json_value_return_type_allow_nullable 0 0 Control whether to allow returning `NULL` when the value does not exist for the JSON_VALUE function.\n\n```sql\nSELECT JSON_VALUE(\'{"hello":"world"}\', \'$.b\') settings function_json_value_return_type_allow_nullable=true;\n\n┌─JSON_VALUE(\'{"hello":"world"}\', \'$.b\')─┐\n│ ᴺᵁᴸᴸ │\n└────────────────────────────────────────┘\n\n1 row in set. Elapsed: 0.001 sec.\n```\n\nPossible values:\n\n- true — Allow.\n- false — Disallow. \N \N 0 Bool 0 0 Production | |
function_json_value_return_type_allow_complex 0 0 Control whether to allow returning a complex type (such as: struct, array, map) for the JSON_VALUE function.\n\n```sql\nSELECT JSON_VALUE(\'{"hello":{"world":"!"}}\', \'$.hello\') settings function_json_value_return_type_allow_complex=true\n\n┌─JSON_VALUE(\'{"hello":{"world":"!"}}\', \'$.hello\')─┐\n│ {"world":"!"} │\n└──────────────────────────────────────────────────┘\n\n1 row in set. Elapsed: 0.001 sec.\n```\n\nPossible values:\n\n- true — Allow.\n- false — Disallow. \N \N 0 Bool 0 0 Production | |
use_with_fill_by_sorting_prefix 1 0 Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently \N \N 0 Bool 1 0 Production | |
optimize_uniq_to_count 1 0 Rewrite uniq and its variants (except uniqUpTo) to count if the subquery has a DISTINCT or GROUP BY clause. \N \N 0 Bool 1 0 Production | |
use_variant_as_common_type 0 0 Allows to use `Variant` type as a result type for [if](../../sql-reference/functions/conditional-functions.md/#if)/[multiIf](../../sql-reference/functions/conditional-functions.md/#multiif)/[array](../../sql-reference/functions/array-functions.md)/[map](../../sql-reference/functions/tuple-map-functions.md) functions when there is no common type for argument types.\n\nExample:\n\n```sql\nSET use_variant_as_common_type = 1;\nSELECT toTypeName(if(number % 2, number, range(number))) as variant_type FROM numbers(1);\nSELECT if(number % 2, number, range(number)) as variant FROM numbers(5);\n```\n\n```text\n┌─variant_type───────────────────┐\n│ Variant(Array(UInt64), UInt64) │\n└────────────────────────────────┘\n┌─variant───┐\n│ [] │\n│ 1 │\n│ [0,1] │\n│ 3 │\n│ [0,1,2,3] │\n└───────────┘\n```\n\n```sql\nSET use_variant_as_common_type = 1;\nSELECT toTypeName(multiIf((number % 4) = 0, 42, (number % 4) = 1, [1, 2, 3], (number % 4) = 2, \'Hello, World!\', NULL)) AS variant_type FROM numbers(1);\nSELECT multiIf((number % 4) = 0, 42, (number % 4) = 1, [1, 2, 3], (number % 4) = 2, \'Hello, World!\', NULL) AS variant FROM numbers(4);\n```\n\n```text\n─variant_type─────────────────────────┐\n│ Variant(Array(UInt8), String, UInt8) │\n└──────────────────────────────────────┘\n\n┌─variant───────┐\n│ 42 │\n│ [1,2,3] │\n│ Hello, World! │\n│ ᴺᵁᴸᴸ │\n└───────────────┘\n```\n\n```sql\nSET use_variant_as_common_type = 1;\nSELECT toTypeName(array(range(number), number, \'str_\' || toString(number))) as array_of_variants_type from numbers(1);\nSELECT array(range(number), number, \'str_\' || toString(number)) as array_of_variants FROM numbers(3);\n```\n\n```text\n┌─array_of_variants_type────────────────────────┐\n│ Array(Variant(Array(UInt64), String, UInt64)) │\n└───────────────────────────────────────────────┘\n\n┌─array_of_variants─┐\n│ [[],0,\'str_0\'] │\n│ [[0],1,\'str_1\'] │\n│ [[0,1],2,\'str_2\'] │\n└───────────────────┘\n```\n\n```sql\nSET use_variant_as_common_type = 1;\nSELECT toTypeName(map(\'a\', range(number), \'b\', number, \'c\', \'str_\' || toString(number))) as map_of_variants_type from numbers(1);\nSELECT map(\'a\', range(number), \'b\', number, \'c\', \'str_\' || toString(number)) as map_of_variants FROM numbers(3);\n```\n\n```text\n┌─map_of_variants_type────────────────────────────────┐\n│ Map(String, Variant(Array(UInt64), String, UInt64)) │\n└─────────────────────────────────────────────────────┘\n\n┌─map_of_variants───────────────┐\n│ {\'a\':[],\'b\':0,\'c\':\'str_0\'} │\n│ {\'a\':[0],\'b\':1,\'c\':\'str_1\'} │\n│ {\'a\':[0,1],\'b\':2,\'c\':\'str_2\'} │\n└───────────────────────────────┘\n``` \N \N 0 Bool 0 0 Production | |
enable_order_by_all 1 0 Enables or disables sorting with `ORDER BY ALL` syntax, see [ORDER BY](../../sql-reference/statements/select/order-by.md).\n\nPossible values:\n\n- 0 — Disable ORDER BY ALL.\n- 1 — Enable ORDER BY ALL.\n\n**Example**\n\nQuery:\n\n```sql\nCREATE TABLE TAB(C1 Int, C2 Int, ALL Int) ENGINE=Memory();\n\nINSERT INTO TAB VALUES (10, 20, 30), (20, 20, 10), (30, 10, 20);\n\nSELECT * FROM TAB ORDER BY ALL; -- returns an error that ALL is ambiguous\n\nSELECT * FROM TAB ORDER BY ALL SETTINGS enable_order_by_all = 0;\n```\n\nResult:\n\n```text\n┌─C1─┬─C2─┬─ALL─┐\n│ 20 │ 20 │ 10 │\n│ 30 │ 10 │ 20 │\n│ 10 │ 20 │ 30 │\n└────┴────┴─────┘\n``` \N \N 0 Bool 1 0 Production | |
ignore_drop_queries_probability 0 0 If enabled, the server will ignore all DROP TABLE queries with the specified probability (for Memory and JOIN engines it will replace DROP with TRUNCATE). Used for testing purposes \N \N 0 Float 0 0 Production | |
traverse_shadow_remote_data_paths 0 0 Traverse frozen data (shadow directory) in addition to actual table data when querying system.remote_data_paths. \N \N 0 Bool 0 0 Production | |
geo_distance_returns_float64_on_float64_arguments 1 0 If all four arguments to `geoDistance`, `greatCircleDistance`, `greatCircleAngle` functions are Float64, return Float64 and use double precision for internal calculations. In previous ClickHouse versions, the functions always returned Float32. \N \N 0 Bool 1 0 Production | |
allow_get_client_http_header 0 0 Allow to use the function `getClientHTTPHeader` which lets you obtain the value of a header of the current HTTP request. It is not enabled by default for security reasons, because some headers, such as `Cookie`, could contain sensitive info. Note that the `X-ClickHouse-*` and `Authentication` headers are always restricted and cannot be obtained with this function. \N \N 0 Bool 0 0 Production | |
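A minimal sketch of how this setting gates the function, assuming the query arrives over the HTTP interface; `User-Agent` is just an illustrative header name:

```sql
-- Hedged sketch: enable the function for the current session, then read a request header.
-- For non-HTTP clients the header is simply not available.
SET allow_get_client_http_header = 1;
SELECT getClientHTTPHeader('User-Agent');
```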
cast_string_to_dynamic_use_inference 0 0 Use type inference during String to Dynamic conversion. \N \N 0 Bool 0 0 Production | |
cast_string_to_variant_use_inference 1 0 Use type inference during String to Variant conversion. \N \N 0 Bool 1 0 Production | |
enable_blob_storage_log 1 0 Write information about blob storage operations to system.blob_storage_log table \N \N 0 Bool 1 0 Production | |
use_json_alias_for_old_object_type 0 0 When enabled, `JSON` data type alias will be used to create an old [Object(\'json\')](../../sql-reference/data-types/json.md) type instead of the new [JSON](../../sql-reference/data-types/newjson.md) type. \N \N 0 Bool 0 0 Production | |
allow_create_index_without_type 0 0 Allow CREATE INDEX query without TYPE. Query will be ignored. Made for SQL compatibility tests. \N \N 0 Bool 0 0 Production | |
create_index_ignore_unique 0 0 Ignore UNIQUE keyword in CREATE UNIQUE INDEX. Made for SQL compatibility tests. \N \N 0 Bool 0 0 Production | |
print_pretty_type_names 1 0 Allows to print deep-nested type names in a pretty way with indents in `DESCRIBE` query and in `toTypeName()` function.\n\nExample:\n\n```sql\nCREATE TABLE test (a Tuple(b String, c Tuple(d Nullable(UInt64), e Array(UInt32), f Array(Tuple(g String, h Map(String, Array(Tuple(i String, j UInt64))))), k Date), l Nullable(String))) ENGINE=Memory;\nDESCRIBE TABLE test FORMAT TSVRaw SETTINGS print_pretty_type_names=1;\n```\n\n```\na Tuple(\n b String,\n c Tuple(\n d Nullable(UInt64),\n e Array(UInt32),\n f Array(Tuple(\n g String,\n h Map(\n String,\n Array(Tuple(\n i String,\n j UInt64\n ))\n )\n )),\n k Date\n ),\n l Nullable(String)\n)\n``` \N \N 0 Bool 1 0 Production | |
create_table_empty_primary_key_by_default 0 0 Allow to create *MergeTree tables with empty primary key when ORDER BY and PRIMARY KEY not specified \N \N 0 Bool 0 0 Production | |
allow_named_collection_override_by_default 1 0 Allow fields of named collections to be overridden by default. \N \N 0 Bool 1 0 Production | |
default_normal_view_sql_security INVOKER 0 Allows to set default `SQL SECURITY` option while creating a normal view. [More about SQL security](../../sql-reference/statements/create/view.md/#sql_security).\n\nThe default value is `INVOKER`. \N \N 0 SQLSecurityType INVOKER 0 Production | |
default_materialized_view_sql_security DEFINER 0 Allows to set a default value for SQL SECURITY option when creating a materialized view. [More about SQL security](../../sql-reference/statements/create/view.md/#sql_security).\n\nThe default value is `DEFINER`. \N \N 0 SQLSecurityType DEFINER 0 Production | |
default_view_definer CURRENT_USER 0 Allows to set default `DEFINER` option while creating a view. [More about SQL security](../../sql-reference/statements/create/view.md/#sql_security).\n\nThe default value is `CURRENT_USER`. \N \N 0 String CURRENT_USER 0 Production | |
cache_warmer_threads 4 0 Only has an effect in ClickHouse Cloud. Number of background threads for speculatively downloading new data parts into file cache, when [cache_populated_by_fetch](merge-tree-settings.md/#cache_populated_by_fetch) is enabled. Zero to disable. \N \N 0 UInt64 4 0 Production | |
use_async_executor_for_materialized_views 0 0 Use async and potentially multithreaded execution of materialized view query, can speedup views processing during INSERT, but also consume more memory. \N \N 0 Bool 0 0 Production | |
ignore_cold_parts_seconds 0 0 Only has an effect in ClickHouse Cloud. Exclude new data parts from SELECT queries until they\'re either pre-warmed (see [cache_populated_by_fetch](merge-tree-settings.md/#cache_populated_by_fetch)) or this many seconds old. Only for Replicated-/SharedMergeTree. \N \N 0 Int64 0 0 Production | |
short_circuit_function_evaluation_for_nulls 1 0 Optimizes evaluation of functions that return NULL when any argument is NULL. When the percentage of NULL values in the function\'s arguments exceeds the short_circuit_function_evaluation_for_nulls_threshold, the system skips evaluating the function row-by-row. Instead, it immediately returns NULL for all rows, avoiding unnecessary computation. \N \N 0 Bool 1 0 Production | |
short_circuit_function_evaluation_for_nulls_threshold 1 0 Ratio threshold of NULL values to execute functions with Nullable arguments only on rows with non-NULL values in all arguments. Applies when setting short_circuit_function_evaluation_for_nulls is enabled.\nWhen the ratio of rows containing NULL values to the total number of rows exceeds this threshold, these rows containing NULL values will not be evaluated. \N \N 0 Double 1 0 Production | |
prefer_warmed_unmerged_parts_seconds 0 0 Only has an effect in ClickHouse Cloud. If a merged part is less than this many seconds old and is not pre-warmed (see [cache_populated_by_fetch](merge-tree-settings.md/#cache_populated_by_fetch)), but all its source parts are available and pre-warmed, SELECT queries will read from those parts instead. Only for Replicated-/SharedMergeTree. Note that this only checks whether CacheWarmer processed the part; if the part was fetched into cache by something else, it\'ll still be considered cold until CacheWarmer gets to it; if it was warmed, then evicted from cache, it\'ll still be considered warm. \N \N 0 Int64 0 0 Production | |
iceberg_timestamp_ms 0 0 Query Iceberg table using the snapshot that was current at a specific timestamp. \N \N 0 Int64 0 0 Production | |
iceberg_snapshot_id 0 0 Query Iceberg table using the specific snapshot id. \N \N 0 Int64 0 0 Production | |
allow_deprecated_error_prone_window_functions 0 0 Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference) \N \N 0 Bool 0 0 Production | |
use_iceberg_partition_pruning 0 0 Use Iceberg partition pruning for Iceberg tables \N \N 0 Bool 0 0 Production | |
allow_deprecated_snowflake_conversion_functions 0 0 Functions `snowflakeToDateTime`, `snowflakeToDateTime64`, `dateTimeToSnowflake`, and `dateTime64ToSnowflake` are deprecated and disabled by default.\nPlease use functions `snowflakeIDToDateTime`, `snowflakeIDToDateTime64`, `dateTimeToSnowflakeID`, and `dateTime64ToSnowflakeID` instead.\n\nTo re-enable the deprecated functions (e.g., during a transition period), please set this setting to `true`. \N \N 0 Bool 0 0 Production | |
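A small illustration of the migration path described above; the numeric Snowflake IDs are arbitrary example values:

```sql
-- Preferred, non-deprecated replacements:
SELECT snowflakeIDToDateTime(7204436857747984384);

-- The deprecated functions only work after opting in, e.g. during a transition period:
SET allow_deprecated_snowflake_conversion_functions = 1;
SELECT snowflakeToDateTime(1426860702823350272);
```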
optimize_distinct_in_order 1 0 Enable DISTINCT optimization if some columns in DISTINCT form a prefix of sorting. For example, prefix of sorting key in merge tree or ORDER BY statement \N \N 0 Bool 1 0 Production | |
keeper_map_strict_mode 0 0 Enforce additional checks during operations on KeeperMap. E.g. throw an exception on an insert for already existing key \N \N 0 Bool 0 0 Production | |
extract_key_value_pairs_max_pairs_per_row 1000 0 Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory. \N \N 0 UInt64 1000 0 Production | |
extract_kvp_max_pairs_per_row 1000 0 Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory. \N \N 0 UInt64 1000 extract_key_value_pairs_max_pairs_per_row 0 Production | |
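To see the safeguard in action, the limit can be lowered per query; the key-value string below is just an example:

```sql
-- With the limit lowered to 2, only the first two pairs are extracted.
SELECT extractKeyValuePairs('name:neymar, age:31, team:psg')
SETTINGS extract_key_value_pairs_max_pairs_per_row = 2;
```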
restore_replace_external_engines_to_null 0 0 For testing purposes. Replaces all external engines to Null to not initiate external connections. \N \N 0 Bool 0 0 Production | |
restore_replace_external_table_functions_to_null 0 0 For testing purposes. Replaces all external table functions to Null to not initiate external connections. \N \N 0 Bool 0 0 Production | |
restore_replace_external_dictionary_source_to_null 0 0 Replace external dictionary sources to Null on restore. Useful for testing purposes \N \N 0 Bool 0 0 Production | |
allow_experimental_parallel_reading_from_replicas 0 0 Use up to `max_parallel_replicas` replicas from each shard for SELECT query execution. Reading is parallelized and coordinated dynamically. 0 - disabled, 1 - enabled, silently disabled in case of failure, 2 - enabled, throws an exception in case of failure. \N \N 0 UInt64 0 0 Beta | |
enable_parallel_replicas 0 0 Use up to `max_parallel_replicas` replicas from each shard for SELECT query execution. Reading is parallelized and coordinated dynamically. 0 - disabled, 1 - enabled, silently disabled in case of failure, 2 - enabled, throws an exception in case of failure. \N \N 0 UInt64 0 allow_experimental_parallel_reading_from_replicas 0 Beta | |
max_parallel_replicas 1000 0 The maximum number of replicas for each shard when executing a query.\n\nPossible values:\n\n- Positive integer.\n\n**Additional Info**\n\nThis option will produce different results depending on the settings used.\n\n:::note\nThis setting will produce incorrect results when joins or subqueries are involved, and all tables don\'t meet certain requirements. See [Distributed Subqueries and max_parallel_replicas](/operations/settings/settings#max_parallel_replicas) for more details.\n:::\n\n### Parallel processing using `SAMPLE` key\n\nA query may be processed faster if it is executed on several servers in parallel. But the query performance may degrade in the following cases:\n\n- The position of the sampling key in the partitioning key does not allow efficient range scans.\n- Adding a sampling key to the table makes filtering by other columns less efficient.\n- The sampling key is an expression that is expensive to calculate.\n- The cluster latency distribution has a long tail, so that querying more servers increases the overall query latency.\n\n### Parallel processing using [parallel_replicas_custom_key](#parallel_replicas_custom_key)\n\nThis setting is useful for any replicated table. \N \N 0 NonZeroUInt64 1000 0 Production | |
parallel_replicas_mode read_tasks 0 Type of filter to use with custom key for parallel replicas. default - use modulo operation on the custom key, range - use range filter on custom key using all possible values for the value type of custom key. \N \N 0 ParallelReplicasMode read_tasks 0 Beta | |
parallel_replicas_count 0 0 This is an internal setting that should not be used directly and represents an implementation detail of the \'parallel replicas\' mode. This setting will be automatically set up by the initiator server for distributed queries to the number of parallel replicas participating in query processing. \N \N 0 UInt64 0 0 Beta | |
parallel_replica_offset 0 0 This is an internal setting that should not be used directly and represents an implementation detail of the \'parallel replicas\' mode. This setting will be automatically set up by the initiator server for distributed queries to the index of the replica participating in query processing among parallel replicas. \N \N 0 UInt64 0 0 Beta | |
parallel_replicas_custom_key 0 An arbitrary integer expression that can be used to split work between replicas for a specific table.\nThe value can be any integer expression.\n\nSimple expressions using primary keys are preferred.\n\nIf the setting is used on a cluster that consists of a single shard with multiple replicas, those replicas will be converted into virtual shards.\nOtherwise, it will behave the same as for the `SAMPLE` key: it will use multiple replicas of each shard. \N \N 0 String 0 Beta | |
parallel_replicas_custom_key_range_lower 0 0 Allows the filter type `range` to split the work evenly between replicas based on the custom range `[parallel_replicas_custom_key_range_lower, INT_MAX]`.\n\nWhen used in conjunction with [parallel_replicas_custom_key_range_upper](#parallel_replicas_custom_key_range_upper), it lets the filter evenly split the work over replicas for the range `[parallel_replicas_custom_key_range_lower, parallel_replicas_custom_key_range_upper]`.\n\nNote: This setting will not cause any additional data to be filtered during query processing, rather it changes the points at which the range filter breaks up the range `[0, INT_MAX]` for parallel processing. \N \N 0 UInt64 0 0 Beta | |
parallel_replicas_custom_key_range_upper 0 0 Allows the filter type `range` to split the work evenly between replicas based on the custom range `[0, parallel_replicas_custom_key_range_upper]`. A value of 0 disables the upper bound, setting it to the max value of the custom key expression.\n\nWhen used in conjunction with [parallel_replicas_custom_key_range_lower](#parallel_replicas_custom_key_range_lower), it lets the filter evenly split the work over replicas for the range `[parallel_replicas_custom_key_range_lower, parallel_replicas_custom_key_range_upper]`.\n\nNote: This setting will not cause any additional data to be filtered during query processing, rather it changes the points at which the range filter breaks up the range `[0, INT_MAX]` for parallel processing. \N \N 0 UInt64 0 0 Beta | |
cluster_for_parallel_replicas 0 Cluster for a shard in which the current server is located. \N \N 0 String 0 Beta | |
parallel_replicas_allow_in_with_subquery 1 0 If true, subquery for IN will be executed on every follower replica. \N \N 0 Bool 1 0 Beta | |
parallel_replicas_for_non_replicated_merge_tree 0 0 If true, ClickHouse will use parallel replicas algorithm also for non-replicated MergeTree tables \N \N 0 Bool 0 0 Beta | |
parallel_replicas_min_number_of_rows_per_replica 0 0 Limit the number of replicas used in a query to (estimated rows to read / min_number_of_rows_per_replica). The max is still limited by \'max_parallel_replicas\' \N \N 0 UInt64 0 0 Beta | |
parallel_replicas_prefer_local_join 1 0 If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN. \N \N 0 Bool 1 0 Beta | |
parallel_replicas_mark_segment_size 0 0 Parts are virtually divided into segments to be distributed between replicas for parallel reading. This setting controls the size of these segments. Not recommended to change until you\'re absolutely sure of what you\'re doing. Value should be in range [128; 16384]. \N \N 0 UInt64 0 0 Beta | |
parallel_replicas_local_plan 1 0 Build local plan for local replica \N \N 0 Bool 1 0 Beta | |
parallel_replicas_index_analysis_only_on_coordinator 1 0 Index analysis done only on replica-coordinator and skipped on other replicas. Effective only with enabled parallel_replicas_local_plan \N \N 0 Bool 1 0 Beta | |
parallel_replicas_only_with_analyzer 1 0 The analyzer should be enabled to use parallel replicas. With the analyzer disabled, query execution falls back to local execution, even if parallel reading from replicas is enabled. Using parallel replicas without the analyzer enabled is not supported. \N \N 0 Bool 1 0 Beta | |
parallel_replicas_insert_select_local_pipeline 1 0 Use local pipeline during distributed INSERT SELECT with parallel replicas \N \N 0 Bool 1 0 Beta | |
parallel_replicas_for_cluster_engines 1 0 Replace table function engines with their -Cluster alternatives \N \N 0 Bool 1 0 Production | |
allow_experimental_analyzer 1 0 Allow new query analyzer. \N \N 0 Bool 1 0 Production | |
enable_analyzer 1 0 Allow new query analyzer. \N \N 0 Bool 1 allow_experimental_analyzer 0 Production | |
analyzer_compatibility_join_using_top_level_identifier 0 0 Force to resolve identifier in JOIN USING from projection (for example, in `SELECT a + 1 AS b FROM t1 JOIN t2 USING (b)` the join will be performed by `t1.a + 1 = t2.b`, rather than `t1.b = t2.b`). \N \N 0 Bool 0 0 Production | |
session_timezone 0 Sets the implicit time zone of the current session or query.\nThe implicit time zone is the time zone applied to values of type DateTime/DateTime64 which have no explicitly specified time zone.\nThe setting takes precedence over the globally configured (server-level) implicit time zone.\nA value of \'\' (empty string) means that the implicit time zone of the current session or query is equal to the [server time zone](../server-configuration-parameters/settings.md/#timezone).\n\nYou can use functions `timeZone()` and `serverTimeZone()` to get the session time zone and server time zone.\n\nPossible values:\n\n- Any time zone name from `system.time_zones`, e.g. `Europe/Berlin`, `UTC` or `Zulu`\n\nExamples:\n\n```sql\nSELECT timeZone(), serverTimeZone() FORMAT CSV\n\n"Europe/Berlin","Europe/Berlin"\n```\n\n```sql\nSELECT timeZone(), serverTimeZone() SETTINGS session_timezone = \'Asia/Novosibirsk\' FORMAT CSV\n\n"Asia/Novosibirsk","Europe/Berlin"\n```\n\nAssign session time zone \'America/Denver\' to the inner DateTime without explicitly specified time zone:\n\n```sql\nSELECT toDateTime64(toDateTime64(\'1999-12-12 23:23:23.123\', 3), 3, \'Europe/Zurich\') SETTINGS session_timezone = \'America/Denver\' FORMAT TSV\n\n1999-12-13 07:23:23.123\n```\n\n:::warning\nNot all functions that parse DateTime/DateTime64 respect `session_timezone`. This can lead to subtle errors.\nSee the following example and explanation.\n:::\n\n```sql\nCREATE TABLE test_tz (`d` DateTime(\'UTC\')) ENGINE = Memory AS SELECT toDateTime(\'2000-01-01 00:00:00\', \'UTC\');\n\nSELECT *, timeZone() FROM test_tz WHERE d = toDateTime(\'2000-01-01 00:00:00\') SETTINGS session_timezone = \'Asia/Novosibirsk\'\n0 rows in set.\n\nSELECT *, timeZone() FROM test_tz WHERE d = \'2000-01-01 00:00:00\' SETTINGS session_timezone = \'Asia/Novosibirsk\'\n┌───────────────────d─┬─timeZone()───────┐\n│ 2000-01-01 00:00:00 │ Asia/Novosibirsk │\n└─────────────────────┴──────────────────┘\n```\n\nThis happens due to different parsing pipelines:\n\n- `toDateTime()` without explicitly given time zone used in the first `SELECT` query honors setting `session_timezone` and the global time zone.\n- In the second query, a DateTime is parsed from a String, and inherits the type and time zone of the existing column`d`. Thus, setting `session_timezone` and the global time zone are not honored.\n\n**See also**\n\n- [timezone](../server-configuration-parameters/settings.md/#timezone) \N \N 0 Timezone 0 Beta | |
create_if_not_exists 0 0 Enable `IF NOT EXISTS` for `CREATE` statement by default. If either this setting or `IF NOT EXISTS` is specified and a table with the provided name already exists, no exception will be thrown. \N \N 0 Bool 0 0 Production | |
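A minimal sketch, assuming a session where the hypothetical table name `t_demo` is free:

```sql
SET create_if_not_exists = 1;
CREATE TABLE t_demo (x UInt8) ENGINE = Memory;
-- Repeating the statement no longer throws TABLE_ALREADY_EXISTS,
-- just as if IF NOT EXISTS had been written explicitly.
CREATE TABLE t_demo (x UInt8) ENGINE = Memory;
```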
enforce_strict_identifier_format 0 0 If enabled, only allow identifiers containing alphanumeric characters and underscores. \N \N 0 Bool 0 0 Production | |
mongodb_throw_on_unsupported_query 1 0 If enabled, MongoDB tables will return an error when a MongoDB query cannot be built. Otherwise, ClickHouse reads the full table and processes it locally. This option does not apply when \'allow_experimental_analyzer=0\'. \N \N 0 Bool 1 0 Production | |
implicit_select 0 0 Allow writing simple SELECT queries without the leading SELECT keyword, which makes it simple for calculator-style usage, e.g. `1 + 2` becomes a valid query.\n\nIn `clickhouse-local` it is enabled by default and can be explicitly disabled. \N \N 0 Bool 0 0 Production | |
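For example, in an interactive session (enabled by default only in `clickhouse-local`, as noted above):

```sql
SET implicit_select = 1;
-- Parsed as `SELECT 1 + 2`:
1 + 2
```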
optimize_extract_common_expressions 1 0 Allow extracting common expressions from disjunctions in WHERE, PREWHERE, ON, HAVING and QUALIFY expressions. A logical expression like `(A AND B) OR (A AND C)` can be rewritten to `A AND (B OR C)`, which might help to utilize:\n- indices in simple filtering expressions\n- cross to inner join optimization \N \N 0 Bool 1 0 Production | |
optimize_and_compare_chain 1 0 Populate constant comparison in AND chains to enhance filtering ability. Support operators `<`, `<=`, `>`, `>=`, `=` and mix of them. For example, `(a < b) AND (b < c) AND (c < 5)` would be `(a < b) AND (b < c) AND (c < 5) AND (b < 5) AND (a < 5)`. \N \N 0 Bool 1 0 Production | |
push_external_roles_in_interserver_queries 1 0 Enable pushing user roles from originator to other nodes while performing a query. \N \N 0 Bool 1 0 Production | |
shared_merge_tree_sync_parts_on_partition_operations 1 0 Automatically synchronize set of data parts after MOVE|REPLACE|ATTACH partition operations in SMT tables. Cloud only \N \N 0 Bool 1 0 Production | |
allow_experimental_variant_type 1 0 Allows creation of [Variant](../../sql-reference/data-types/variant.md) data type. \N \N 0 Bool 1 0 Production | |
enable_variant_type 1 0 Allows creation of [Variant](../../sql-reference/data-types/variant.md) data type. \N \N 0 Bool 1 allow_experimental_variant_type 0 Production | |
allow_experimental_dynamic_type 1 0 Allows creation of [Dynamic](../../sql-reference/data-types/dynamic.md) data type. \N \N 0 Bool 1 0 Production | |
enable_dynamic_type 1 0 Allows creation of [Dynamic](../../sql-reference/data-types/dynamic.md) data type. \N \N 0 Bool 1 allow_experimental_dynamic_type 0 Production | |
allow_experimental_json_type 1 0 Allows creation of [JSON](../../sql-reference/data-types/newjson.md) data type. \N \N 0 Bool 1 0 Production | |
enable_json_type 1 0 Allows creation of [JSON](../../sql-reference/data-types/newjson.md) data type. \N \N 0 Bool 1 allow_experimental_json_type 0 Production | |
allow_general_join_planning 1 0 Allows a more general join planning algorithm that can handle more complex conditions, but only works with hash join. If hash join is not enabled, then the usual join planning algorithm is used regardless of the value of this setting. \N \N 0 Bool 1 0 Production | |
merge_table_max_tables_to_look_for_schema_inference 1000 0 When creating a `Merge` table without an explicit schema or when using the `merge` table function, infer schema as a union of not more than the specified number of matching tables.\nIf there is a larger number of tables, the schema will be inferred from the first specified number of tables. \N \N 0 UInt64 1000 0 Production | |
validate_enum_literals_in_operators 0 0 If enabled, validate enum literals in operators like `IN`, `NOT IN`, `==`, `!=` against the enum type and throw an exception if the literal is not a valid enum value. \N \N 0 Bool 0 0 Production | |
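A hedged sketch with a hypothetical Enum column `c`:

```sql
CREATE TABLE colors (c Enum('red' = 1, 'green' = 2)) ENGINE = Memory;
SET validate_enum_literals_in_operators = 1;
-- Throws, because 'blue' is not a valid value of the enum type:
SELECT * FROM colors WHERE c = 'blue';
-- With the setting disabled the literal is not validated against the enum type.
```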
max_autoincrement_series 1000 0 The limit on the number of series created by the `generateSeriesID` function.\n\nAs each series represents a node in Keeper, it is recommended to have no more than a couple of million of them. \N \N 0 UInt64 1000 0 Production | |
use_hive_partitioning 1 0 When enabled, ClickHouse will detect Hive-style partitioning in path (`/name=value/`) in file-like table engines [File](/sql-reference/table-functions/file#hive-style-partitioning)/[S3](/sql-reference/table-functions/s3#hive-style-partitioning)/[URL](/sql-reference/table-functions/url#hive-style-partitioning)/[HDFS](/sql-reference/table-functions/hdfs#hive-style-partitioning)/[AzureBlobStorage](/sql-reference/table-functions/azureBlobStorage#hive-style-partitioning) and will allow to use partition columns as virtual columns in the query. These virtual columns will have the same names as in the partitioned path, but starting with `_`. \N \N 0 Bool 1 0 Production | |
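A hedged example assuming a hypothetical directory layout `data/date=2024-01-01/file.parquet`; the virtual column takes the partition name with a leading underscore:

```sql
SET use_hive_partitioning = 1;
-- `_date` is exposed as a virtual column derived from the path.
SELECT _date, count()
FROM file('data/date=*/*.parquet')
GROUP BY _date;
```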
parallel_hash_join_threshold 100000 0 When the hash-based join algorithm is applied, this threshold helps to decide between using `hash` and `parallel_hash` (only if an estimation of the right table size is available).\nThe former is used when we know that the right table size is below the threshold. \N \N 0 UInt64 100000 0 Production | |
apply_settings_from_server 1 0 Whether the client should accept settings from the server.\n\nThis only affects operations performed on the client side, in particular parsing the INSERT input data and formatting the query result. Most of query execution happens on the server and is not affected by this setting.\n\nNormally this setting should be set in the user profile (users.xml or queries like `ALTER USER`), not through the client (client command line arguments, `SET` query, or `SETTINGS` section of a `SELECT` query). Through the client it can be changed to false, but can\'t be changed to true (because the server won\'t send the settings if the user profile has `apply_settings_from_server = false`).\n\nNote that initially (24.12) there was a server setting (`send_settings_to_client`), but later it was replaced with this client setting for better usability. \N \N 0 Bool 1 0 Production | |
low_priority_query_wait_time_ms 1000 0 When the query prioritization mechanism is employed (see setting `priority`), low-priority queries wait for higher-priority queries to finish. This setting specifies the duration of waiting. \N \N 0 Milliseconds 1000 0 Beta | |
min_os_cpu_wait_time_ratio_to_throw 0 0 Min ratio between OS CPU wait (OSCPUWaitMicroseconds metric) and busy (OSCPUVirtualTimeMicroseconds metric) times to consider rejecting queries. Linear interpolation between min and max ratio is used to calculate the probability, the probability is 0 at this point. \N \N 0 Float 0 0 Production | |
max_os_cpu_wait_time_ratio_to_throw 0 0 Max ratio between OS CPU wait (OSCPUWaitMicroseconds metric) and busy (OSCPUVirtualTimeMicroseconds metric) times to consider rejecting queries. Linear interpolation between min and max ratio is used to calculate the probability, the probability is 1 at this point. \N \N 0 Float 0 0 Production | |
allow_experimental_materialized_postgresql_table 0 0 Allows to use the MaterializedPostgreSQL table engine. Disabled by default, because this feature is experimental \N \N 0 Bool 0 0 Experimental | |
allow_experimental_funnel_functions 0 0 Enable experimental functions for funnel analysis. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_nlp_functions 0 0 Enable experimental functions for natural language processing. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_hash_functions 0 0 Enable experimental hash functions \N \N 0 Bool 0 0 Experimental | |
allow_experimental_object_type 0 0 Allow the obsolete Object data type \N \N 0 Bool 0 0 Experimental | |
allow_experimental_time_series_table 0 0 Allows creation of tables with the [TimeSeries](../../engines/table-engines/integrations/time-series.md) table engine.\n\nPossible values:\n\n- 0 — the [TimeSeries](../../engines/table-engines/integrations/time-series.md) table engine is disabled.\n- 1 — the [TimeSeries](../../engines/table-engines/integrations/time-series.md) table engine is enabled. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_vector_similarity_index 0 0 Allow experimental vector similarity index \N \N 0 Bool 0 0 Experimental | |
allow_experimental_codecs 0 0 If it is set to true, allow to specify experimental compression codecs (but we don\'t have those yet and this option does nothing). \N \N 0 Bool 0 0 Experimental | |
max_limit_for_ann_queries 1000000 0 SELECT queries with LIMIT bigger than this setting cannot use vector similarity indices. Helps to prevent memory overflows in vector similarity indices. \N \N 0 UInt64 1000000 0 Experimental | |
hnsw_candidate_list_size_for_search 256 0 The size of the dynamic candidate list when searching the vector similarity index, also known as \'ef_search\'. \N \N 0 UInt64 256 0 Experimental | |
throw_on_unsupported_query_inside_transaction 1 0 Throw exception if unsupported query is used inside transaction \N \N 0 Bool 1 0 Experimental | |
wait_changes_become_visible_after_commit_mode wait_unknown 0 Wait for committed changes to become actually visible in the latest snapshot \N \N 0 TransactionsWaitCSNMode wait_unknown 0 Experimental | |
implicit_transaction 0 0 If enabled and not already inside a transaction, wraps the query inside a full transaction (begin + commit or rollback) \N \N 0 Bool 0 0 Experimental | |
grace_hash_join_initial_buckets 1 0 Initial number of grace hash join buckets \N \N 0 NonZeroUInt64 1 0 Experimental | |
grace_hash_join_max_buckets 1024 0 Limit on the number of grace hash join buckets \N \N 0 NonZeroUInt64 1024 0 Experimental | |
join_to_sort_minimum_perkey_rows 40 0 The lower limit of per-key average rows in the right table to determine whether to rerange the right table by key in left or inner join. This setting ensures that the optimization is not applied for sparse table keys \N \N 0 UInt64 40 0 Experimental | |
join_to_sort_maximum_table_rows 10000 0 The maximum number of rows in the right table to determine whether to rerange the right table by key in left or inner join. \N \N 0 UInt64 10000 0 Experimental | |
allow_experimental_join_right_table_sorting 0 0 If it is set to true, and the conditions of `join_to_sort_minimum_perkey_rows` and `join_to_sort_maximum_table_rows` are met, rerange the right table by key to improve the performance in left or inner hash join. \N \N 0 Bool 0 0 Experimental | |
allow_statistics_optimize 0 0 Allows using statistics to optimize queries \N \N 0 Bool 0 0 Experimental | |
allow_statistic_optimize 0 0 Allows using statistics to optimize queries \N \N 0 Bool 0 allow_statistics_optimize 0 Experimental | |
allow_experimental_statistics 0 0 Allows defining columns with [statistics](../../engines/table-engines/mergetree-family/mergetree.md/#table_engine-mergetree-creating-a-table) and [manipulate statistics](../../engines/table-engines/mergetree-family/mergetree.md/#column-statistics). \N \N 0 Bool 0 0 Experimental | |
allow_experimental_statistic 0 0 Allows defining columns with [statistics](../../engines/table-engines/mergetree-family/mergetree.md/#table_engine-mergetree-creating-a-table) and [manipulate statistics](../../engines/table-engines/mergetree-family/mergetree.md/#column-statistics). \N \N 0 Bool 0 allow_experimental_statistics 0 Experimental | |
allow_archive_path_syntax 1 0 File/S3 engines/table function will parse paths with \'::\' as `<archive> :: <file>` if the archive has the correct extension \N \N 0 Bool 1 0 Experimental | |
allow_experimental_inverted_index 0 0 If it is set to true, allow to use experimental inverted index. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_full_text_index 0 0 If it is set to true, allow to use experimental full-text index. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_join_condition 0 0 Support join with inequality conditions which involve columns from both the left and right table, e.g. `t1.y < t2.y`. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_live_view 0 0 Allows creation of a deprecated LIVE VIEW.\n\nPossible values:\n\n- 0 — Working with live views is disabled.\n- 1 — Working with live views is enabled. \N \N 0 Bool 0 0 Experimental | |
live_view_heartbeat_interval 15 0 The heartbeat interval in seconds to indicate live query is alive. \N \N 0 Seconds 15 0 Experimental | |
max_live_view_insert_blocks_before_refresh 64 0 Limit maximum number of inserted blocks after which mergeable blocks are dropped and query is re-executed. \N \N 0 UInt64 64 0 Experimental | |
allow_experimental_window_view 0 0 Enable WINDOW VIEW. Not mature enough. \N \N 0 Bool 0 0 Experimental | |
window_view_clean_interval 60 0 The clean interval of window view in seconds to free outdated data. \N \N 0 Seconds 60 0 Experimental | |
window_view_heartbeat_interval 15 0 The heartbeat interval in seconds to indicate watch query is alive. \N \N 0 Seconds 15 0 Experimental | |
wait_for_window_view_fire_signal_timeout 10 0 Timeout for waiting for window view fire signal in event time processing \N \N 0 Seconds 10 0 Experimental | |
stop_refreshable_materialized_views_on_startup 0 0 On server startup, prevent scheduling of refreshable materialized views, as if with SYSTEM STOP VIEWS. You can manually start them with `SYSTEM START VIEWS` or `SYSTEM START VIEW <name>` afterwards. Also applies to newly created views. Has no effect on non-refreshable materialized views. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_database_materialized_postgresql 0 0 Allow to create database with Engine=MaterializedPostgreSQL(...). \N \N 0 Bool 0 0 Experimental | |
allow_experimental_query_deduplication 0 0 Experimental data deduplication for SELECT queries based on part UUIDs \N \N 0 Bool 0 0 Experimental | |
allow_experimental_database_iceberg 0 0 Allow experimental database engine DataLakeCatalog with catalog_type = \'iceberg\' \N \N 0 Bool 0 0 Experimental | |
allow_experimental_database_unity_catalog 0 0 Allow experimental database engine DataLakeCatalog with catalog_type = \'unity\' \N \N 0 Bool 0 0 Experimental | |
allow_experimental_database_glue_catalog 0 0 Allow experimental database engine DataLakeCatalog with catalog_type = \'glue\' \N \N 0 Bool 0 0 Experimental | |
allow_experimental_kusto_dialect 0 0 Enable Kusto Query Language (KQL) - an alternative to SQL. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_prql_dialect 0 0 Enable PRQL - an alternative to SQL. \N \N 0 Bool 0 0 Experimental | |
enable_adaptive_memory_spill_scheduler 0 0 Trigger processors to spill data into external storage adaptively. Grace join is supported at present. \N \N 0 Bool 0 0 Experimental | |
allow_experimental_ts_to_grid_aggregate_function 0 0 Experimental tsToGrid aggregate function for Prometheus-like timeseries resampling. Cloud only \N \N 0 Bool 0 0 Experimental | |
update_insert_deduplication_token_in_dependent_materialized_views 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
max_memory_usage_for_all_queries 0 0 Obsolete setting, does nothing. \N \N 0 UInt64 0 1 Obsolete | |
multiple_joins_rewriter_version 0 0 Obsolete setting, does nothing. \N \N 0 UInt64 0 1 Obsolete | |
enable_debug_queries 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
allow_experimental_database_atomic 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_bigint_types 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_window_functions 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_geo_types 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_query_cache 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_alter_materialized_view_structure 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_shared_merge_tree 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_database_replicated 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_refreshable_materialized_view 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_bfloat16_type 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
async_insert_stale_timeout_ms 0 0 Obsolete setting, does nothing. \N \N 0 Milliseconds 0 1 Obsolete | |
handle_kafka_error_mode default 0 Obsolete setting, does nothing. \N \N 0 StreamingHandleErrorMode default 1 Obsolete | |
database_replicated_ddl_output 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
replication_alter_columns_timeout 60 0 Obsolete setting, does nothing. \N \N 0 UInt64 60 1 Obsolete | |
odbc_max_field_size 0 0 Obsolete setting, does nothing. \N \N 0 UInt64 0 1 Obsolete | |
allow_experimental_map_type 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
merge_tree_clear_old_temporary_directories_interval_seconds 60 0 Obsolete setting, does nothing. \N \N 0 UInt64 60 1 Obsolete | |
merge_tree_clear_old_parts_interval_seconds 1 0 Obsolete setting, does nothing. \N \N 0 UInt64 1 1 Obsolete | |
partial_merge_join_optimizations 0 0 Obsolete setting, does nothing. \N \N 0 UInt64 0 1 Obsolete | |
max_alter_threads \'auto(8)\' 0 Obsolete setting, does nothing. \N \N 0 MaxThreads \'auto(8)\' 1 Obsolete | |
use_mysql_types_in_show_columns 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
s3queue_allow_experimental_sharded_mode 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
lightweight_mutation_projection_mode throw 0 Obsolete setting, does nothing. \N \N 0 LightweightMutationProjectionMode throw 1 Obsolete | |
use_local_cache_for_remote_storage 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
background_buffer_flush_schedule_pool_size 16 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 16 1 Obsolete | |
background_pool_size 16 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 16 1 Obsolete | |
background_merges_mutations_concurrency_ratio 2 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 Float 2 1 Obsolete | |
background_move_pool_size 8 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 8 1 Obsolete | |
background_fetches_pool_size 8 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 8 1 Obsolete | |
background_common_pool_size 8 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 8 1 Obsolete | |
background_schedule_pool_size 128 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 128 1 Obsolete | |
background_message_broker_schedule_pool_size 16 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 16 1 Obsolete | |
background_distributed_schedule_pool_size 16 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 16 1 Obsolete | |
max_remote_read_network_bandwidth_for_server 0 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 0 1 Obsolete | |
max_remote_write_network_bandwidth_for_server 0 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 0 1 Obsolete | |
async_insert_threads 16 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 16 1 Obsolete | |
max_replicated_fetches_network_bandwidth_for_server 0 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 0 1 Obsolete | |
max_replicated_sends_network_bandwidth_for_server 0 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 0 1 Obsolete | |
max_entries_for_hash_table_stats 10000 0 User-level setting is deprecated, and it must be defined in the server configuration instead. \N \N 0 UInt64 10000 1 Obsolete | |
default_database_engine Atomic 0 Obsolete setting, does nothing. \N \N 0 DefaultDatabaseEngine Atomic 1 Obsolete | |
max_pipeline_depth 0 0 Obsolete setting, does nothing. \N \N 0 UInt64 0 1 Obsolete | |
temporary_live_view_timeout 1 0 Obsolete setting, does nothing. \N \N 0 Seconds 1 1 Obsolete | |
async_insert_cleanup_timeout_ms 1000 0 Obsolete setting, does nothing. \N \N 0 Milliseconds 1000 1 Obsolete | |
optimize_fuse_sum_count_avg 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
drain_timeout 3 0 Obsolete setting, does nothing. \N \N 0 Seconds 3 1 Obsolete | |
backup_threads 16 0 Obsolete setting, does nothing. \N \N 0 UInt64 16 1 Obsolete | |
restore_threads 16 0 Obsolete setting, does nothing. \N \N 0 UInt64 16 1 Obsolete | |
optimize_duplicate_order_by_and_distinct 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
parallel_replicas_min_number_of_granules_to_enable 0 0 Obsolete setting, does nothing. \N \N 0 UInt64 0 1 Obsolete | |
parallel_replicas_custom_key_filter_type default 0 Obsolete setting, does nothing. \N \N 0 ParallelReplicasCustomKeyFilterType default 1 Obsolete | |
query_plan_optimize_projection 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
query_cache_store_results_of_queries_with_nondeterministic_functions 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
allow_experimental_annoy_index 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
max_threads_for_annoy_index_creation 4 0 Obsolete setting, does nothing. \N \N 0 UInt64 4 1 Obsolete | |
annoy_index_search_k_nodes -1 0 Obsolete setting, does nothing. \N \N 0 Int64 -1 1 Obsolete | |
allow_experimental_usearch_index 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
optimize_move_functions_out_of_any 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
allow_experimental_undrop_table_query 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
allow_experimental_s3queue 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
query_plan_optimize_primary_key 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
optimize_monotonous_functions_in_order_by 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
http_max_chunk_size 107374182400 0 Obsolete setting, does nothing. \N \N 0 UInt64 107374182400 1 Obsolete | |
iceberg_engine_ignore_schema_evolution 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
parallel_replicas_single_task_marks_count_multiplier 2 0 Obsolete setting, does nothing. \N \N 0 Float 2 1 Obsolete | |
allow_experimental_database_materialized_mysql 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
allow_experimental_shared_set_join 1 0 Obsolete setting, does nothing. \N \N 0 Bool 1 1 Obsolete | |
min_external_sort_block_bytes 104857600 0 Obsolete setting, does nothing. \N \N 0 UInt64 104857600 1 Obsolete | |
format_csv_delimiter , 0 The character to be considered as a delimiter in CSV data. If setting with a string, a string has to have a length of 1. \N \N 0 Char , 0 Production | |
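A small sketch with a hypothetical table `csv_demo`, switching the delimiter to a semicolon for one session:

```sql
CREATE TABLE csv_demo (a UInt32, b String) ENGINE = Memory;
SET format_csv_delimiter = ';';
-- Fields are now separated by ';' instead of ',':
INSERT INTO csv_demo FORMAT CSV 1;"one"
SELECT * FROM csv_demo;
```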
format_csv_allow_single_quotes 0 0 If it is set to true, allow strings in single quotes. \N \N 0 Bool 0 0 Production | |
format_csv_allow_double_quotes 1 0 If it is set to true, allow strings in double quotes. \N \N 0 Bool 1 0 Production | |
output_format_csv_serialize_tuple_into_separate_columns 1 0 If it is set to true, then Tuples in CSV format are serialized as separate columns (that is, their nesting in the tuple is lost). \N \N 0 Bool 1 0 Production | |
input_format_csv_deserialize_separate_columns_into_tuple 1 0 If it is set to true, then separate columns written in CSV format can be deserialized into a Tuple column. \N \N 0 Bool 1 0 Production | |
output_format_csv_crlf_end_of_line 0 0 If it is set to true, the end of line in CSV format will be \\\\r\\\\n instead of \\\\n. \N \N 0 Bool 0 0 Production | |
input_format_csv_allow_cr_end_of_line 0 0 If it is set to true, \\\\r will be allowed at the end of a line not followed by \\\\n. \N \N 0 Bool 0 0 Production | |
input_format_csv_enum_as_number 0 0 Treat inserted enum values in CSV formats as enum indices \N \N 0 Bool 0 0 Production | |
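A hedged sketch with a hypothetical Enum column, treating the CSV value `2` as the enum index rather than the string "2":

```sql
CREATE TABLE enum_csv_demo (c Enum('first' = 1, 'second' = 2)) ENGINE = Memory;
SET input_format_csv_enum_as_number = 1;
-- Inserts the value 'second' (index 2):
INSERT INTO enum_csv_demo FORMAT CSV 2
```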
input_format_csv_arrays_as_nested_csv 0 0 When reading Array from CSV, expect that its elements were serialized in nested CSV and then put into string. Example: \\"[\\"\\"Hello\\"\\", \\"\\"world\\"\\", \\"\\"42\\"\\"\\"\\" TV\\"\\"]\\". Braces around array can be omitted. \N \N 0 Bool 0 0 Production | |
input_format_skip_unknown_fields 1 0 Enables or disables skipping insertion of extra data.\n\nWhen writing data, ClickHouse throws an exception if input data contain columns that do not exist in the target table. If skipping is enabled, ClickHouse does not insert extra data and does not throw an exception.\n\nSupported formats:\n\n- [JSONEachRow](/interfaces/formats/JSONEachRow) (and other JSON formats)\n- [BSONEachRow](/interfaces/formats/BSONEachRow) (and other JSON formats)\n- [TSKV](/interfaces/formats/TSKV)\n- All formats with suffixes WithNames/WithNamesAndTypes\n- [MySQLDump](/interfaces/formats/MySQLDump)\n- [Native](/interfaces/formats/Native)\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 1 0 Production | |
input_format_with_names_use_header 1 0 Enables or disables checking the column order when inserting data.\n\nTo improve insert performance, we recommend disabling this check if you are sure that the column order of the input data is the same as in the target table.\n\nSupported formats:\n\n- [CSVWithNames](/interfaces/formats/CSVWithNames)\n- [CSVWithNamesAndTypes](/interfaces/formats/CSVWithNamesAndTypes)\n- [TabSeparatedWithNames](/interfaces/formats/TabSeparatedWithNames)\n- [TabSeparatedWithNamesAndTypes](/interfaces/formats/TabSeparatedWithNamesAndTypes)\n- [JSONCompactEachRowWithNames](/interfaces/formats/JSONCompactEachRowWithNames)\n- [JSONCompactEachRowWithNamesAndTypes](/interfaces/formats/JSONCompactEachRowWithNamesAndTypes)\n- [JSONCompactStringsEachRowWithNames](/interfaces/formats/JSONCompactStringsEachRowWithNames)\n- [JSONCompactStringsEachRowWithNamesAndTypes](/interfaces/formats/JSONCompactStringsEachRowWithNamesAndTypes)\n- [RowBinaryWithNames](/interfaces/formats/RowBinaryWithNames)\n- [RowBinaryWithNamesAndTypes](/interfaces/formats/RowBinaryWithNamesAndTypes)\n- [CustomSeparatedWithNames](/interfaces/formats/CustomSeparatedWithNames)\n- [CustomSeparatedWithNamesAndTypes](/interfaces/formats/CustomSeparatedWithNamesAndTypes)\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 1 0 Production | |
input_format_with_types_use_header 1 0 Controls whether format parser should check if data types from the input data match data types from the target table.\n\nSupported formats:\n\n- [CSVWithNamesAndTypes](/interfaces/formats/CSVWithNamesAndTypes)\n- [TabSeparatedWithNamesAndTypes](/interfaces/formats/TabSeparatedWithNamesAndTypes)\n- [JSONCompactEachRowWithNamesAndTypes](/interfaces/formats/JSONCompactEachRowWithNamesAndTypes)\n- [JSONCompactStringsEachRowWithNamesAndTypes](/interfaces/formats/JSONCompactStringsEachRowWithNamesAndTypes)\n- [RowBinaryWithNamesAndTypes](/interfaces/formats/RowBinaryWithNamesAndTypes)\n- [CustomSeparatedWithNamesAndTypes](/interfaces/formats/CustomSeparatedWithNamesAndTypes)\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 1 0 Production | |
input_format_import_nested_json 0 0 Enables or disables the insertion of JSON data with nested objects.\n\nSupported formats:\n\n- [JSONEachRow](/interfaces/formats/JSONEachRow)\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\nSee also:\n\n- [Usage of Nested Structures](/integrations/data-formats/json/other-formats#accessing-nested-json-objects) with the `JSONEachRow` format. \N \N 0 Bool 0 0 Production | |
input_format_defaults_for_omitted_fields 1 0 When performing `INSERT` queries, replace omitted input column values with default values of the respective columns. This option applies to [JSONEachRow](/interfaces/formats/JSONEachRow) (and other JSON formats), [CSV](/interfaces/formats/CSV), [TabSeparated](/interfaces/formats/TabSeparated), [TSKV](/interfaces/formats/TSKV), [Parquet](/interfaces/formats/Parquet), [Arrow](/interfaces/formats/Arrow), [Avro](/interfaces/formats/Avro), [ORC](/interfaces/formats/ORC), [Native](/interfaces/formats/Native) formats and formats with `WithNames`/`WithNamesAndTypes` suffixes.\n\n:::note\nWhen this option is enabled, extended table metadata are sent from server to client. It consumes additional computing resources on the server and can reduce performance.\n:::\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 1 0 Production | |
input_format_csv_empty_as_default 1 0 Treat empty fields in CSV input as default values. \N \N 0 Bool 1 0 Production | |
input_format_tsv_empty_as_default 0 0 Treat empty fields in TSV input as default values. \N \N 0 Bool 0 0 Production | |
input_format_tsv_enum_as_number 0 0 Treat inserted enum values in TSV formats as enum indices. \N \N 0 Bool 0 0 Production | |
input_format_null_as_default 1 0 Enables or disables the initialization of [NULL](/sql-reference/syntax#literals) fields with [default values](/sql-reference/statements/create/table#default_values), if data type of these fields is not [nullable](/sql-reference/data-types/nullable).\nIf column type is not nullable and this setting is disabled, then inserting `NULL` causes an exception. If column type is nullable, then `NULL` values are inserted as is, regardless of this setting.\n\nThis setting is applicable for most input formats.\n\nFor complex default expressions `input_format_defaults_for_omitted_fields` must be enabled too.\n\nPossible values:\n\n- 0 — Inserting `NULL` into a not nullable column causes an exception.\n- 1 — `NULL` fields are initialized with default column values. \N \N 0 Bool 1 0 Production | |
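A minimal sketch with a hypothetical non-nullable column that has a default value:

```sql
CREATE TABLE null_default_demo (x UInt32 DEFAULT 42) ENGINE = Memory;
SET input_format_null_as_default = 1;
-- The NULL is replaced with the column default (42) instead of raising an exception:
INSERT INTO null_default_demo FORMAT JSONEachRow {"x": null}
SELECT * FROM null_default_demo;
```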
input_format_force_null_for_omitted_fields 0 0 Force initialization of omitted fields with null values \N \N 0 Bool 0 0 Production | |
input_format_arrow_case_insensitive_column_matching 0 0 Ignore case when matching Arrow columns with CH columns. \N \N 0 Bool 0 0 Production | |
input_format_orc_row_batch_size 100000 0 Batch size when reading ORC stripes. \N \N 0 Int64 100000 0 Production | |
input_format_orc_case_insensitive_column_matching 0 0 Ignore case when matching ORC columns with CH columns. \N \N 0 Bool 0 0 Production | |
input_format_parquet_case_insensitive_column_matching 0 0 Ignore case when matching Parquet columns with CH columns. \N \N 0 Bool 0 0 Production | |
input_format_parquet_preserve_order 0 0 Avoid reordering rows when reading from Parquet files. Usually makes it much slower. \N \N 0 Bool 0 0 Production | |
input_format_parquet_filter_push_down 1 0 When reading Parquet files, skip whole row groups based on the WHERE/PREWHERE expressions and min/max statistics in the Parquet metadata. \N \N 0 Bool 1 0 Production | |
input_format_parquet_bloom_filter_push_down 0 0 When reading Parquet files, skip whole row groups based on the WHERE expressions and bloom filter in the Parquet metadata. \N \N 0 Bool 0 0 Production | |
input_format_parquet_use_native_reader 0 0 When reading Parquet files, use the native reader instead of the Arrow reader. \N \N 0 Bool 0 0 Production | |
input_format_allow_seeks 1 0 Allow seeks while reading in ORC/Parquet/Arrow input formats.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_orc_allow_missing_columns 1 0 Allow missing columns while reading ORC input formats \N \N 0 Bool 1 0 Production | |
input_format_orc_use_fast_decoder 1 0 Use a faster ORC decoder implementation. \N \N 0 Bool 1 0 Production | |
input_format_orc_filter_push_down 1 0 When reading ORC files, skip whole stripes or row groups based on the WHERE/PREWHERE expressions, min/max statistics or bloom filter in the ORC metadata. \N \N 0 Bool 1 0 Production | |
input_format_orc_reader_time_zone_name GMT 0 The time zone name for ORC row reader, the default ORC row reader\'s time zone is GMT. \N \N 0 String GMT 0 Production | |
input_format_orc_dictionary_as_low_cardinality 1 0 Treat ORC dictionary encoded columns as LowCardinality columns while reading ORC files. \N \N 0 Bool 1 0 Production | |
input_format_parquet_allow_missing_columns 1 0 Allow missing columns while reading Parquet input formats \N \N 0 Bool 1 0 Production | |
input_format_parquet_local_file_min_bytes_for_seek 8192 0 Min bytes required for a local read (file) to perform a seek instead of reading and discarding data (ignore) in the Parquet input format \N \N 0 UInt64 8192 0 Production | |
input_format_parquet_enable_row_group_prefetch 1 0 Enable row group prefetching during parquet parsing. Currently, only single-threaded parsing can prefetch. \N \N 0 Bool 1 0 Production | |
input_format_arrow_allow_missing_columns 1 0 Allow missing columns while reading Arrow input formats \N \N 0 Bool 1 0 Production | |
input_format_hive_text_fields_delimiter 0 Delimiter between fields in Hive Text File \N \N 0 Char 0 Production | |
input_format_hive_text_collection_items_delimiter 0 Delimiter between collection (array or map) items in Hive Text File \N \N 0 Char 0 Production | |
input_format_hive_text_map_keys_delimiter 0 Delimiter between a pair of map key/values in Hive Text File \N \N 0 Char 0 Production | |
input_format_hive_text_allow_variable_number_of_columns 1 0 Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values \N \N 0 Bool 1 0 Production | |
input_format_msgpack_number_of_columns 0 0 The number of columns in inserted MsgPack data. Used for automatic schema inference from data. \N \N 0 UInt64 0 0 Production | |
output_format_msgpack_uuid_representation ext 0 The way how to output UUID in MsgPack format. \N \N 0 MsgPackUUIDRepresentation ext 0 Production | |
input_format_max_rows_to_read_for_schema_inference 25000 0 The maximum rows of data to read for automatic schema inference. \N \N 0 UInt64 25000 0 Production | |
input_format_max_bytes_to_read_for_schema_inference 33554432 0 The maximum amount of data in bytes to read for automatic schema inference. \N \N 0 UInt64 33554432 0 Production | |
input_format_csv_use_best_effort_in_schema_inference 1 0 Use some tweaks and heuristics to infer schema in CSV format \N \N 0 Bool 1 0 Production | |
input_format_csv_try_infer_numbers_from_strings 0 0 If enabled, during schema inference ClickHouse will try to infer numbers from string fields.\nIt can be useful if CSV data contains quoted UInt64 numbers.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
input_format_csv_try_infer_strings_from_quoted_tuples 1 0 Interpret quoted tuples in the input data as a value of type String. \N \N 0 Bool 1 0 Production | |
input_format_tsv_use_best_effort_in_schema_inference 1 0 Use some tweaks and heuristics to infer schema in TSV format \N \N 0 Bool 1 0 Production | |
input_format_csv_detect_header 1 0 Automatically detect header with names and types in CSV format \N \N 0 Bool 1 0 Production | |
input_format_csv_allow_whitespace_or_tab_as_delimiter 0 0 Allow to use spaces and tabs (\\\\t) as field delimiters in CSV strings \N \N 0 Bool 0 0 Production | |
input_format_csv_trim_whitespaces 1 0 Trims space and tab (\\\\t) characters at the beginning and end of CSV strings \N \N 0 Bool 1 0 Production | |
input_format_csv_use_default_on_bad_values 0 0 Allow to set a column to its default value when CSV field deserialization fails on a bad value \N \N 0 Bool 0 0 Production | |
input_format_csv_allow_variable_number_of_columns 0 0 Ignore extra columns in CSV input (if file has more columns than expected) and treat missing fields in CSV input as default values \N \N 0 Bool 0 0 Production | |
input_format_tsv_allow_variable_number_of_columns 0 0 Ignore extra columns in TSV input (if file has more columns than expected) and treat missing fields in TSV input as default values \N \N 0 Bool 0 0 Production | |
input_format_custom_allow_variable_number_of_columns 0 0 Ignore extra columns in CustomSeparated input (if file has more columns than expected) and treat missing fields in CustomSeparated input as default values \N \N 0 Bool 0 0 Production | |
input_format_json_compact_allow_variable_number_of_columns 0 0 Allow variable number of columns in rows in JSONCompact/JSONCompactEachRow input formats.\nIgnore extra columns in rows with more columns than expected and treat missing columns as default values.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
input_format_tsv_detect_header 1 0 Automatically detect header with names and types in TSV format \N \N 0 Bool 1 0 Production | |
input_format_custom_detect_header 1 0 Automatically detect header with names and types in CustomSeparated format \N \N 0 Bool 1 0 Production | |
input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference 0 0 Skip columns with unsupported types while schema inference for format Parquet \N \N 0 Bool 0 0 Production | |
input_format_parquet_max_block_size 65409 0 Max block size for parquet reader. \N \N 0 UInt64 65409 0 Production | |
input_format_parquet_prefer_block_bytes 16744704 0 Average block bytes output by parquet reader \N \N 0 UInt64 16744704 0 Production | |
input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference 0 0 Skip fields with unsupported types while schema inference for format Protobuf \N \N 0 Bool 0 0 Production | |
input_format_capn_proto_skip_fields_with_unsupported_types_in_schema_inference 0 0 Skip columns with unsupported types while schema inference for format CapnProto \N \N 0 Bool 0 0 Production | |
input_format_orc_skip_columns_with_unsupported_types_in_schema_inference 0 0 Skip columns with unsupported types while schema inference for format ORC \N \N 0 Bool 0 0 Production | |
input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference 0 0 Skip columns with unsupported types while schema inference for format Arrow \N \N 0 Bool 0 0 Production | |
column_names_for_schema_inference 0 The list of column names to use in schema inference for formats without column names. The format: \'column1,column2,column3,...\' \N \N 0 String 0 Production | |
schema_inference_hints 0 The list of column names and types to use as hints in schema inference for formats without schema.\n\nExample:\n\nQuery:\n```sql\ndesc format(JSONEachRow, \'{"x" : 1, "y" : "String", "z" : "0.0.0.0" }\') settings schema_inference_hints=\'x UInt8, z IPv4\';\n```\n\nResult:\n```sql\nx UInt8\ny Nullable(String)\nz IPv4\n```\n\n:::note\nIf the `schema_inference_hints` is not formatted properly, or if there is a typo or a wrong datatype, etc... the whole schema_inference_hints will be ignored.\n::: \N \N 0 String 0 Production | |
schema_inference_mode default 0 Mode of schema inference. \'default\' - assume that all files have the same schema and the schema can be inferred from any file, \'union\' - files can have different schemas and the resulting schema should be a union of the schemas of all files \N \N 0 SchemaInferenceMode default 0 Production | |
schema_inference_make_columns_nullable 1 0 Controls making inferred types `Nullable` in schema inference.\nIf the setting is enabled, all inferred types will be `Nullable`; if disabled, the inferred type will never be `Nullable`; if set to `auto`, the inferred type will be `Nullable` only if the column contains `NULL` in a sample that is parsed during schema inference or the file metadata contains information about column nullability. \N \N 0 UInt64Auto 1 0 Production | |
schema_inference_make_json_columns_nullable 0 0 Controls making inferred JSON types `Nullable` in schema inference.\nIf this setting is enabled together with schema_inference_make_columns_nullable, inferred JSON type will be `Nullable`. \N \N 0 Bool 0 0 Production | |
input_format_json_read_bools_as_numbers 1 0 Allow parsing bools as numbers in JSON input formats.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_read_bools_as_strings 1 0 Allow parsing bools as strings in JSON input formats.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_try_infer_numbers_from_strings 0 0 If enabled, during schema inference ClickHouse will try to infer numbers from string fields.\nIt can be useful if JSON data contains quoted UInt64 numbers.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
input_format_json_validate_types_from_metadata 1 0 For JSON/JSONCompact/JSONColumnsWithMetadata input formats, if this setting is set to 1,\nthe types from metadata in input data will be compared with the types of the corresponding columns from the table.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_read_numbers_as_strings 1 0 Allow parsing numbers as strings in JSON input formats.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_read_objects_as_strings 1 0 Allow parsing JSON objects as strings in JSON input formats.\n\nExample:\n\n```sql\nSET input_format_json_read_objects_as_strings = 1;\nCREATE TABLE test (id UInt64, obj String, date Date) ENGINE=Memory();\nINSERT INTO test FORMAT JSONEachRow {"id" : 1, "obj" : {"a" : 1, "b" : "Hello"}, "date" : "2020-01-01"};\nSELECT * FROM test;\n```\n\nResult:\n\n```\n┌─id─┬─obj──────────────────────┬───────date─┐\n│ 1 │ {"a" : 1, "b" : "Hello"} │ 2020-01-01 │\n└────┴──────────────────────────┴────────────┘\n```\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_read_arrays_as_strings 1 0 Allow parsing JSON arrays as strings in JSON input formats.\n\nExample:\n\n```sql\nSET input_format_json_read_arrays_as_strings = 1;\nSELECT arr, toTypeName(arr), JSONExtractArrayRaw(arr)[3] from format(JSONEachRow, \'arr String\', \'{"arr" : [1, "Hello", [1,2,3]]}\');\n```\n\nResult:\n```\n┌─arr───────────────────┬─toTypeName(arr)─┬─arrayElement(JSONExtractArrayRaw(arr), 3)─┐\n│ [1, "Hello", [1,2,3]] │ String │ [1,2,3] │\n└───────────────────────┴─────────────────┴───────────────────────────────────────────┘\n```\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_try_infer_named_tuples_from_objects 1 0 If enabled, during schema inference ClickHouse will try to infer named Tuple from JSON objects.\nThe resulting named Tuple will contain all elements from all corresponding JSON objects from sample data.\n\nExample:\n\n```sql\nSET input_format_json_try_infer_named_tuples_from_objects = 1;\nDESC format(JSONEachRow, \'{"obj" : {"a" : 42, "b" : "Hello"}}, {"obj" : {"a" : 43, "c" : [1, 2, 3]}}, {"obj" : {"d" : {"e" : 42}}}\')\n```\n\nResult:\n\n```\n┌─name─┬─type───────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐\n│ obj │ Tuple(a Nullable(Int64), b Nullable(String), c Array(Nullable(Int64)), d Tuple(e Nullable(Int64))) │ │ │ │ │ │\n└──────┴────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘\n```\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects 0 0 Use String type instead of an exception in case of ambiguous paths in JSON objects during named tuples inference \N \N 0 Bool 0 0 Production | |
input_format_json_infer_incomplete_types_as_strings 1 0 Allow to use String type for JSON keys that contain only `Null`/`{}`/`[]` in data sample during schema inference.\nIn JSON formats any value can be read as String, and we can avoid errors like `Cannot determine type for column \'column_name\' by first 25000 rows of data, most likely this column contains only Nulls or empty Arrays/Maps` during schema inference\nby using String type for keys with unknown types.\n\nExample:\n\n```sql\nSET input_format_json_infer_incomplete_types_as_strings = 1, input_format_json_try_infer_named_tuples_from_objects = 1;\nDESCRIBE format(JSONEachRow, \'{"obj" : {"a" : [1,2,3], "b" : "hello", "c" : null, "d" : {}, "e" : []}}\');\nSELECT * FROM format(JSONEachRow, \'{"obj" : {"a" : [1,2,3], "b" : "hello", "c" : null, "d" : {}, "e" : []}}\');\n```\n\nResult:\n```\n┌─name─┬─type───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐\n│ obj │ Tuple(a Array(Nullable(Int64)), b Nullable(String), c Nullable(String), d Nullable(String), e Array(Nullable(String))) │ │ │ │ │ │\n└──────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘\n\n┌─obj────────────────────────────┐\n│ ([1,2,3],\'hello\',NULL,\'{}\',[]) │\n└────────────────────────────────┘\n```\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_named_tuples_as_objects 1 0 Parse named tuple columns as JSON objects.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_ignore_unknown_keys_in_named_tuple 1 0 Ignore unknown keys in json object for named tuples.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_defaults_for_missing_elements_in_named_tuple 1 0 Insert default values for missing elements in JSON object while parsing named tuple.\nThis setting works only when setting `input_format_json_named_tuples_as_objects` is enabled.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_throw_on_bad_escape_sequence 1 0 Throw an exception if JSON string contains bad escape sequence in JSON input formats. If disabled, bad escape sequences will remain as is in the data.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_json_ignore_unnecessary_fields 1 0 Ignore unnecessary fields and do not parse them. When enabled, JSON strings with an invalid format or with duplicated fields may not throw exceptions \N \N 0 Bool 1 0 Production | |
input_format_try_infer_variants 0 0 If enabled, ClickHouse will try to infer type [`Variant`](../../sql-reference/data-types/variant.md) in schema inference for text formats when there is more than one possible type for column/array elements.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 0 0 Production | |
type_json_skip_duplicated_paths 0 0 When enabled, duplicated paths encountered while parsing a JSON object into the JSON type will be ignored and only the first value will be inserted instead of throwing an exception \N \N 0 Bool 0 0 Production | |
input_format_json_max_depth 1000 0 Maximum depth of a field in JSON. This is not a strict limit, it does not have to be applied precisely. \N \N 0 UInt64 1000 0 Production | |
input_format_json_empty_as_default 0 0 When enabled, replace empty input fields in JSON with default values. For complex default expressions `input_format_defaults_for_omitted_fields` must be enabled too.\n\nPossible values:\n\n+ 0 — Disable.\n+ 1 — Enable. \N \N 0 Bool 0 0 Production | |
input_format_try_infer_integers 1 0 If enabled, ClickHouse will try to infer integers instead of floats in schema inference for text formats. If all numbers in the column from input data are integers, the result type will be `Int64`, if at least one number is float, the result type will be `Float64`.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_try_infer_dates 1 0 If enabled, ClickHouse will try to infer type `Date` from string fields in schema inference for text formats. If all fields from a column in input data were successfully parsed as dates, the result type will be `Date`, if at least one field was not parsed as date, the result type will be `String`.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_try_infer_datetimes 1 0 If enabled, ClickHouse will try to infer type `DateTime64` from string fields in schema inference for text formats. If all fields from a column in input data were successfully parsed as datetimes, the result type will be `DateTime64`, if at least one field was not parsed as datetime, the result type will be `String`.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
input_format_try_infer_datetimes_only_datetime64 0 0 When input_format_try_infer_datetimes is enabled, infer only DateTime64 but not DateTime types \N \N 0 Bool 0 0 Production | |
input_format_try_infer_exponent_floats 0 0 Try to infer floats in exponential notation while schema inference in text formats (except JSON, where exponent numbers are always inferred) \N \N 0 Bool 0 0 Production | |
output_format_markdown_escape_special_characters 0 0 When enabled, escape special characters in Markdown.\n\n[Common Mark](https://spec.commonmark.org/0.30/#example-12) defines the following special characters that can be escaped by \\:\n\n```\n! " # $ % & \' ( ) * + , - . / : ; < = > ? @ [ \\ ] ^ _ ` { | } ~\n```\n\nPossible values:\n\n+ 0 — Disable.\n+ 1 — Enable. \N \N 0 Bool 0 0 Production | |
input_format_protobuf_flatten_google_wrappers 0 0 Enable Google wrappers for regular non-nested columns, e.g. google.protobuf.StringValue \'str\' for String column \'str\'. For Nullable columns empty wrappers are recognized as defaults, and missing as nulls \N \N 0 Bool 0 0 Production | |
output_format_protobuf_nullables_with_google_wrappers 0 0 When serializing Nullable columns with Google wrappers, serialize default values as empty wrappers. If turned off, default and null values are not serialized \N \N 0 Bool 0 0 Production | |
input_format_csv_skip_first_lines 0 0 Skip specified number of lines at the beginning of data in CSV format \N \N 0 UInt64 0 0 Production | |
input_format_tsv_skip_first_lines 0 0 Skip specified number of lines at the beginning of data in TSV format \N \N 0 UInt64 0 0 Production | |
input_format_csv_skip_trailing_empty_lines 0 0 Skip trailing empty lines in CSV format \N \N 0 Bool 0 0 Production | |
input_format_tsv_skip_trailing_empty_lines 0 0 Skip trailing empty lines in TSV format \N \N 0 Bool 0 0 Production | |
input_format_custom_skip_trailing_empty_lines 0 0 Skip trailing empty lines in CustomSeparated format \N \N 0 Bool 0 0 Production | |
input_format_tsv_crlf_end_of_line 0 0 If set to true, the file function will read TSV format with \\\\r\\\\n instead of \\\\n. \N \N 0 Bool 0 0 Production | |
input_format_native_allow_types_conversion 1 0 Allow data types conversion in Native input format \N \N 0 Bool 1 0 Production | |
input_format_native_decode_types_in_binary_format 0 0 Read data types in binary format instead of type names in Native input format \N \N 0 Bool 0 0 Production | |
output_format_native_encode_types_in_binary_format 0 0 Write data types in binary format instead of type names in Native output format \N \N 0 Bool 0 0 Production | |
output_format_native_write_json_as_string 0 0 Write data of [JSON](../../sql-reference/data-types/newjson.md) column as [String](../../sql-reference/data-types/string.md) column containing JSON strings instead of default native JSON serialization. \N \N 0 Bool 0 0 Production | |
date_time_input_format basic 0 Allows choosing a parser of the text representation of date and time.\n\nThe setting does not apply to [date and time functions](../../sql-reference/functions/date-time-functions.md).\n\nPossible values:\n\n- `\'best_effort\'` — Enables extended parsing.\n\n ClickHouse can parse the basic `YYYY-MM-DD HH:MM:SS` format and all [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) date and time formats. For example, `\'2018-06-08T01:02:03.000Z\'`.\n\n- `\'basic\'` — Use basic parser.\n\n ClickHouse can parse only the basic `YYYY-MM-DD HH:MM:SS` or `YYYY-MM-DD` format. For example, `2019-08-20 10:18:56` or `2019-08-20`.\n\nCloud default value: `\'best_effort\'`.\n\nSee also:\n\n- [DateTime data type.](../../sql-reference/data-types/datetime.md)\n- [Functions for working with dates and times.](../../sql-reference/functions/date-time-functions.md) \N \N 0 DateTimeInputFormat basic 0 Production | |
date_time_output_format simple 0 Allows choosing different output formats of the text representation of date and time.\n\nPossible values:\n\n- `simple` - Simple output format.\n\n ClickHouse output date and time `YYYY-MM-DD hh:mm:ss` format. For example, `2019-08-20 10:18:56`. The calculation is performed according to the data type\'s time zone (if present) or server time zone.\n\n- `iso` - ISO output format.\n\n ClickHouse output date and time in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) `YYYY-MM-DDThh:mm:ssZ` format. For example, `2019-08-20T10:18:56Z`. Note that output is in UTC (`Z` means UTC).\n\n- `unix_timestamp` - Unix timestamp output format.\n\n ClickHouse output date and time in [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time) format. For example `1566285536`.\n\nSee also:\n\n- [DateTime data type.](../../sql-reference/data-types/datetime.md)\n- [Functions for working with dates and times.](../../sql-reference/functions/date-time-functions.md) \N \N 0 DateTimeOutputFormat simple 0 Production | |
interval_output_format numeric 0 Allows choosing different output formats of the text representation of interval types.\n\nPossible values:\n\n- `kusto` - KQL-style output format.\n\n ClickHouse outputs intervals in [KQL format](https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-timespan-format-strings#the-constant-c-format-specifier). For example, `toIntervalDay(2)` would be formatted as `2.00:00:00`. Please note that for interval types of varying length (ie. `IntervalMonth` and `IntervalYear`) the average number of seconds per interval is taken into account.\n\n- `numeric` - Numeric output format.\n\n ClickHouse outputs intervals as their underlying numeric representation. For example, `toIntervalDay(2)` would be formatted as `2`.\n\nSee also:\n\n- [Interval](../../sql-reference/data-types/special-data-types/interval.md) \N \N 0 IntervalOutputFormat numeric 0 Production | |
date_time_64_output_format_cut_trailing_zeros_align_to_groups_of_thousands 0 0 Dynamically trim the trailing zeros of datetime64 values to adjust the output scale to [0, 3, 6],\ncorresponding to \'seconds\', \'milliseconds\', and \'microseconds\' \N \N 0 Bool 0 0 Production | |
input_format_ipv4_default_on_conversion_error 0 0 Deserialization of IPv4 will use default values instead of throwing exception on conversion error.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
input_format_ipv6_default_on_conversion_error 0 0 Deserialization of IPV6 will use default values instead of throwing exception on conversion error.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
bool_true_representation true 0 Text to represent true bool value in TSV/CSV/Vertical/Pretty formats. \N \N 0 String true 0 Production | |
bool_false_representation false 0 Text to represent false bool value in TSV/CSV/Vertical/Pretty formats. \N \N 0 String false 0 Production | |
allow_special_bool_values_inside_variant 0 0 Allows to parse Bool values inside Variant type from special text bool values like "on", "off", "enable", "disable", etc. \N \N 0 Bool 0 0 Production | |
input_format_values_interpret_expressions 1 0 For Values format: if the field could not be parsed by streaming parser, run SQL parser and try to interpret it as SQL expression. \N \N 0 Bool 1 0 Production | |
input_format_values_deduce_templates_of_expressions 1 0 For Values format: if the field could not be parsed by streaming parser, run SQL parser, deduce template of the SQL expression, try to parse all rows using template and then interpret expression for all rows. \N \N 0 Bool 1 0 Production | |
input_format_values_accurate_types_of_literals 1 0 For Values format: when parsing and interpreting expressions using template, check actual type of literal to avoid possible overflow and precision issues. \N \N 0 Bool 1 0 Production | |
input_format_avro_allow_missing_fields 0 0 For Avro/AvroConfluent format: when field is not found in schema use default value instead of error \N \N 0 Bool 0 0 Production | |
input_format_avro_null_as_default 0 0 For Avro/AvroConfluent format: insert the default value in case of null and a non-Nullable column \N \N 0 Bool 0 0 Production | |
format_binary_max_string_size 1073741824 0 The maximum allowed size for String in RowBinary format. It prevents allocating large amount of memory in case of corrupted data. 0 means there is no limit \N \N 0 UInt64 1073741824 0 Production | |
format_binary_max_array_size 1073741824 0 The maximum allowed size for Array in RowBinary format. It prevents allocating large amount of memory in case of corrupted data. 0 means there is no limit \N \N 0 UInt64 1073741824 0 Production | |
input_format_binary_decode_types_in_binary_format 0 0 Read data types in binary format instead of type names in RowBinaryWithNamesAndTypes input format \N \N 0 Bool 0 0 Production | |
output_format_binary_encode_types_in_binary_format 0 0 Write data types in binary format instead of type names in RowBinaryWithNamesAndTypes output format \N \N 0 Bool 0 0 Production | |
format_avro_schema_registry_url 0 For AvroConfluent format: Confluent Schema Registry URL. \N \N 0 URI 0 Production | |
input_format_binary_read_json_as_string 0 0 Read values of [JSON](../../sql-reference/data-types/newjson.md) data type as JSON [String](../../sql-reference/data-types/string.md) values in RowBinary input format. \N \N 0 Bool 0 0 Production | |
output_format_binary_write_json_as_string 0 0 Write values of [JSON](../../sql-reference/data-types/newjson.md) data type as JSON [String](../../sql-reference/data-types/string.md) values in RowBinary output format. \N \N 0 Bool 0 0 Production | |
output_format_json_quote_64bit_integers 1 0 Controls quoting of 64-bit or bigger [integers](../../sql-reference/data-types/int-uint.md) (like `UInt64` or `Int128`) when they are output in a [JSON](/interfaces/formats/JSON) format.\nSuch integers are enclosed in quotes by default. This behavior is compatible with most JavaScript implementations.\n\nPossible values:\n\n- 0 — Integers are output without quotes.\n- 1 — Integers are enclosed in quotes. \N \N 0 Bool 1 0 Production | |
output_format_json_quote_denormals 0 0 Enables `+nan`, `-nan`, `+inf`, `-inf` outputs in [JSON](/interfaces/formats/JSON) output format.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled.\n\n**Example**\n\nConsider the following table `account_orders`:\n\n```text\n┌─id─┬─name───┬─duration─┬─period─┬─area─┐\n│ 1 │ Andrew │ 20 │ 0 │ 400 │\n│ 2 │ John │ 40 │ 0 │ 0 │\n│ 3 │ Bob │ 15 │ 0 │ -100 │\n└────┴────────┴──────────┴────────┴──────┘\n```\n\nWhen `output_format_json_quote_denormals = 0`, the query returns `null` values in output:\n\n```sql\nSELECT area/period FROM account_orders FORMAT JSON;\n```\n\n```json\n{\n "meta":\n [\n {\n "name": "divide(area, period)",\n "type": "Float64"\n }\n ],\n\n "data":\n [\n {\n "divide(area, period)": null\n },\n {\n "divide(area, period)": null\n },\n {\n "divide(area, period)": null\n }\n ],\n\n "rows": 3,\n\n "statistics":\n {\n "elapsed": 0.003648093,\n "rows_read": 3,\n "bytes_read": 24\n }\n}\n```\n\nWhen `output_format_json_quote_denormals = 1`, the query returns:\n\n```json\n{\n "meta":\n [\n {\n "name": "divide(area, period)",\n "type": "Float64"\n }\n ],\n\n "data":\n [\n {\n "divide(area, period)": "inf"\n },\n {\n "divide(area, period)": "-nan"\n },\n {\n "divide(area, period)": "-inf"\n }\n ],\n\n "rows": 3,\n\n "statistics":\n {\n "elapsed": 0.000070241,\n "rows_read": 3,\n "bytes_read": 24\n }\n}\n``` \N \N 0 Bool 0 0 Production | |
output_format_json_quote_decimals 0 0 Controls quoting of decimals in JSON output formats.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
output_format_json_quote_64bit_floats 0 0 Controls quoting of 64-bit [floats](../../sql-reference/data-types/float.md) when they are output in JSON* formats.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
output_format_json_escape_forward_slashes 1 0 Controls escaping forward slashes for string outputs in JSON output format. This is intended for compatibility with JavaScript. Don\'t confuse with backslashes that are always escaped.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
output_format_json_named_tuples_as_objects 1 0 Serialize named tuple columns as JSON objects.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
output_format_json_skip_null_value_in_named_tuples 0 0 Skip key-value pairs with null values when serializing named tuple columns as JSON objects. It is only valid when output_format_json_named_tuples_as_objects is true. \N \N 0 Bool 0 0 Production | |
output_format_json_array_of_rows 0 0 Enables the ability to output all rows as a JSON array in the [JSONEachRow](/interfaces/formats/JSONEachRow) format.\n\nPossible values:\n\n- 1 — ClickHouse outputs all rows as an array, each row in the `JSONEachRow` format.\n- 0 — ClickHouse outputs each row separately in the `JSONEachRow` format.\n\n**Example of a query with the enabled setting**\n\nQuery:\n\n```sql\nSET output_format_json_array_of_rows = 1;\nSELECT number FROM numbers(3) FORMAT JSONEachRow;\n```\n\nResult:\n\n```text\n[\n{"number":"0"},\n{"number":"1"},\n{"number":"2"}\n]\n```\n\n**Example of a query with the disabled setting**\n\nQuery:\n\n```sql\nSET output_format_json_array_of_rows = 0;\nSELECT number FROM numbers(3) FORMAT JSONEachRow;\n```\n\nResult:\n\n```text\n{"number":"0"}\n{"number":"1"}\n{"number":"2"}\n``` \N \N 0 Bool 0 0 Production | |
output_format_json_validate_utf8 0 0 Controls validation of UTF-8 sequences in JSON output formats, doesn\'t impact formats JSON/JSONCompact/JSONColumnsWithMetadata, they always validate UTF-8.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
output_format_json_pretty_print 1 0 When enabled, values of complex data types like Tuple/Array/Map in JSON output format in \'data\' section will be printed in pretty format.\n\nEnabled by default. \N \N 0 Bool 1 0 Production | |
format_json_object_each_row_column_for_object_name 0 The name of the column that will be used for storing/writing object names in [JSONObjectEachRow](/interfaces/formats/JSONObjectEachRow) format.\nThe column type should be String. If the value is empty, default names `row_{i}` will be used for object names. \N \N 0 String 0 Production | |
output_format_pretty_max_rows 1000 0 Rows limit for Pretty formats. \N \N 0 UInt64 1000 0 Production | |
output_format_pretty_max_column_pad_width 250 0 Maximum width to pad all values in a column in Pretty formats. \N \N 0 UInt64 250 0 Production | |
output_format_pretty_max_column_name_width_cut_to 24 0 If the column name is too long, cut it to this length.\nThe column will be cut if it is longer than `output_format_pretty_max_column_name_width_cut_to` plus `output_format_pretty_max_column_name_width_min_chars_to_cut`. \N \N 0 UInt64 24 0 Production | |
output_format_pretty_max_column_name_width_min_chars_to_cut 4 0 Minimum characters to cut if the column name is too long.\nThe column will be cut if it is longer than `output_format_pretty_max_column_name_width_cut_to` plus `output_format_pretty_max_column_name_width_min_chars_to_cut`. \N \N 0 UInt64 4 0 Production | |
output_format_pretty_max_value_width 10000 0 Maximum width of value to display in Pretty formats. If greater - it will be cut.\nThe value 0 means - never cut. \N \N 0 UInt64 10000 0 Production | |
output_format_pretty_max_value_width_apply_for_single_value 0 0 Only cut values (see the `output_format_pretty_max_value_width` setting) when it is not a single value in a block. Otherwise output it entirely, which is useful for the `SHOW CREATE TABLE` query. \N \N 0 UInt64 0 0 Production | |
output_format_pretty_squash_consecutive_ms 50 0 Wait for the next block for up to specified number of milliseconds and squash it to the previous before writing.\nThis avoids frequent output of too small blocks, but still allows to display data in a streaming fashion. \N \N 0 UInt64 50 0 Production | |
output_format_pretty_squash_max_wait_ms 1000 0 Output the pending block in pretty formats if more than the specified number of milliseconds has passed since the previous output. \N \N 0 UInt64 1000 0 Production | |
output_format_pretty_color 0 1 Use ANSI escape sequences in Pretty formats. 0 - disabled, 1 - enabled, \'auto\' - enabled if a terminal. \N \N 0 UInt64Auto auto 0 Production | |
output_format_pretty_grid_charset ASCII 1 Charset for printing grid borders. Available charsets: ASCII, UTF-8 (default one). \N \N 0 String UTF-8 0 Production | |
output_format_pretty_display_footer_column_names 1 0 Display column names in the footer if there are many table rows.\n\nPossible values:\n\n- 0 — No column names are displayed in the footer.\n- 1 — Column names are displayed in the footer if row count is greater than or equal to the threshold value set by [output_format_pretty_display_footer_column_names_min_rows](#output_format_pretty_display_footer_column_names_min_rows) (50 by default).\n\n**Example**\n\nQuery:\n\n```sql\nSELECT *, toTypeName(*) FROM (SELECT * FROM system.numbers LIMIT 1000);\n```\n\nResult:\n\n```response\n ┌─number─┬─toTypeName(number)─┐\n 1. │ 0 │ UInt64 │\n 2. │ 1 │ UInt64 │\n 3. │ 2 │ UInt64 │\n ...\n 999. │ 998 │ UInt64 │\n1000. │ 999 │ UInt64 │\n └─number─┴─toTypeName(number)─┘\n``` \N \N 0 UInt64 1 0 Production | |
output_format_pretty_display_footer_column_names_min_rows 50 0 Sets the minimum number of rows for which a footer with column names will be displayed if setting [output_format_pretty_display_footer_column_names](#output_format_pretty_display_footer_column_names) is enabled. \N \N 0 UInt64 50 0 Production | |
output_format_parquet_row_group_size 1000000 0 Target row group size in rows. \N \N 0 UInt64 1000000 0 Production | |
output_format_parquet_row_group_size_bytes 536870912 0 Target row group size in bytes, before compression. \N \N 0 UInt64 536870912 0 Production | |
output_format_parquet_string_as_string 1 0 Use Parquet String type instead of Binary for String columns. \N \N 0 Bool 1 0 Production | |
output_format_parquet_fixed_string_as_fixed_byte_array 1 0 Use Parquet FIXED_LENGTH_BYTE_ARRAY type instead of Binary for FixedString columns. \N \N 0 Bool 1 0 Production | |
output_format_parquet_version 2.latest 0 Parquet format version for output format. Supported versions: 1.0, 2.4, 2.6 and 2.latest (default) \N \N 0 ParquetVersion 2.latest 0 Production | |
output_format_parquet_compression_method zstd 0 Compression method for Parquet output format. Supported codecs: snappy, lz4, brotli, zstd, gzip, none (uncompressed) \N \N 0 ParquetCompression zstd 0 Production | |
output_format_parquet_compliant_nested_types 1 0 In parquet file schema, use name \'element\' instead of \'item\' for list elements. This is a historical artifact of Arrow library implementation. Generally increases compatibility, except perhaps with some old versions of Arrow. \N \N 0 Bool 1 0 Production | |
output_format_parquet_use_custom_encoder 1 0 Use a faster Parquet encoder implementation. \N \N 0 Bool 1 0 Production | |
output_format_parquet_parallel_encoding 1 0 Do Parquet encoding in multiple threads. Requires output_format_parquet_use_custom_encoder. \N \N 0 Bool 1 0 Production | |
output_format_parquet_data_page_size 1048576 0 Target page size in bytes, before compression. \N \N 0 UInt64 1048576 0 Production | |
output_format_parquet_batch_size 1024 0 Check page size every this many rows. Consider decreasing if you have columns with average values size above a few KBs. \N \N 0 UInt64 1024 0 Production | |
output_format_parquet_write_page_index 1 0 Write column index and offset index (i.e. statistics about each data page, which may be used for filter pushdown on read) into parquet files. \N \N 0 Bool 1 0 Production | |
output_format_parquet_write_bloom_filter 1 0 Write bloom filters in parquet files. Requires output_format_parquet_use_custom_encoder = true. \N \N 0 Bool 1 0 Production | |
output_format_parquet_bloom_filter_bits_per_value 10.5 0 Approximate number of bits to use for each distinct value in parquet bloom filters. Estimated false positive rates:\n * 6 bits - 10%\n * 10.5 bits - 1%\n * 16.9 bits - 0.1%\n * 26.4 bits - 0.01%\n * 41 bits - 0.001% \N \N 0 Double 10.5 0 Production | |
output_format_parquet_bloom_filter_flush_threshold_bytes 134217728 0 Where in the parquet file to place the bloom filters. Bloom filters will be written in groups of approximately this size. In particular:\n * if 0, each row group\'s bloom filters are written immediately after the row group,\n * if greater than the total size of all bloom filters, bloom filters for all row groups will be accumulated in memory, then written together near the end of the file,\n * otherwise, bloom filters will be accumulated in memory and written out whenever their total size goes above this value. \N \N 0 UInt64 134217728 0 Production | |
output_format_parquet_datetime_as_uint32 0 0 Write DateTime values as raw unix timestamp (read back as UInt32), instead of converting to milliseconds (read back as DateTime64(3)). \N \N 0 Bool 0 0 Production | |
output_format_avro_codec 0 Compression codec used for output. Possible values: \'null\', \'deflate\', \'snappy\', \'zstd\'. \N \N 0 String 0 Production | |
output_format_avro_sync_interval 16384 0 Sync interval in bytes. \N \N 0 UInt64 16384 0 Production | |
output_format_avro_string_column_pattern 0 For Avro format: regexp of String columns to select as AVRO string. \N \N 0 String 0 Production | |
output_format_avro_rows_in_file 1 0 Max rows in a file (if permitted by storage) \N \N 0 UInt64 1 0 Production | |
output_format_tsv_crlf_end_of_line 0 0 If set to true, the end of line in TSV format will be \\\\r\\\\n instead of \\\\n. \N \N 0 Bool 0 0 Production | |
format_csv_null_representation \\N 0 Custom NULL representation in CSV format \N \N 0 String \\N 0 Production | |
format_tsv_null_representation \\N 0 Custom NULL representation in TSV format \N \N 0 String \\N 0 Production | |
output_format_decimal_trailing_zeros 0 0 Output trailing zeros when printing Decimal values. E.g. 1.230000 instead of 1.23.\n\nDisabled by default. \N \N 0 Bool 0 0 Production | |
input_format_allow_errors_num 0 0 Sets the maximum number of acceptable errors when reading from text formats (CSV, TSV, etc.).\n\nThe default value is 0.\n\nAlways pair it with `input_format_allow_errors_ratio`.\n\nIf an error occurred while reading rows but the error counter is still less than `input_format_allow_errors_num`, ClickHouse ignores the row and moves on to the next one.\n\nIf both `input_format_allow_errors_num` and `input_format_allow_errors_ratio` are exceeded, ClickHouse throws an exception. \N \N 0 UInt64 0 0 Production | |
input_format_allow_errors_ratio 0 0 Sets the maximum percentage of errors allowed when reading from text formats (CSV, TSV, etc.).\nThe percentage of errors is set as a floating-point number between 0 and 1.\n\nThe default value is 0.\n\nAlways pair it with `input_format_allow_errors_num`.\n\nIf an error occurred while reading rows but the error counter is still less than `input_format_allow_errors_ratio`, ClickHouse ignores the row and moves on to the next one.\n\nIf both `input_format_allow_errors_num` and `input_format_allow_errors_ratio` are exceeded, ClickHouse throws an exception. \N \N 0 Float 0 0 Production | |
input_format_record_errors_file_path 0 Path of the file used to record errors while reading text formats (CSV, TSV). \N \N 0 String 0 Production | |
errors_output_format CSV 0 Method to write Errors to text output. \N \N 0 String CSV 0 Production | |
format_schema 0 This parameter is useful when you are using formats that require a schema definition, such as [Cap\'n Proto](https://capnproto.org/) or [Protobuf](https://developers.google.com/protocol-buffers/). The value depends on the format. \N \N 0 String 0 Production | |
format_template_resultset 0 Path to file which contains format string for result set (for Template format) \N \N 0 String 0 Production | |
format_template_row 0 Path to file which contains format string for rows (for Template format) \N \N 0 String 0 Production | |
format_template_row_format 0 Format string for rows (for Template format) \N \N 0 String 0 Production | |
format_template_resultset_format 0 Format string for result set (for Template format) \N \N 0 String 0 Production | |
format_template_rows_between_delimiter \n 0 Delimiter between rows (for Template format) \N \N 0 String \n 0 Production | |
format_custom_escaping_rule Escaped 0 Field escaping rule (for CustomSeparated format) \N \N 0 EscapingRule Escaped 0 Production | |
format_custom_field_delimiter \t 0 Delimiter between fields (for CustomSeparated format) \N \N 0 String \t 0 Production | |
format_custom_row_before_delimiter 0 Delimiter before field of the first column (for CustomSeparated format) \N \N 0 String 0 Production | |
format_custom_row_after_delimiter \n 0 Delimiter after field of the last column (for CustomSeparated format) \N \N 0 String \n 0 Production | |
format_custom_row_between_delimiter 0 Delimiter between rows (for CustomSeparated format) \N \N 0 String 0 Production | |
format_custom_result_before_delimiter 0 Prefix before result set (for CustomSeparated format) \N \N 0 String 0 Production | |
format_custom_result_after_delimiter 0 Suffix after result set (for CustomSeparated format) \N \N 0 String 0 Production | |
format_regexp 0 Regular expression (for Regexp format) \N \N 0 String 0 Production | |
format_regexp_escaping_rule Raw 0 Field escaping rule (for Regexp format) \N \N 0 EscapingRule Raw 0 Production | |
format_regexp_skip_unmatched 0 0 Skip lines unmatched by regular expression (for Regexp format) \N \N 0 Bool 0 0 Production | |
output_format_write_statistics 1 0 Write statistics about read rows, bytes, time elapsed in suitable output formats.\n\nEnabled by default \N \N 0 Bool 1 0 Production | |
output_format_pretty_row_numbers 1 0 Add row numbers before each row for pretty output format \N \N 0 Bool 1 0 Production | |
output_format_pretty_highlight_digit_groups 1 0 If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline. \N \N 0 Bool 1 0 Production | |
output_format_pretty_single_large_number_tip_threshold 1000000 0 Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0) \N \N 0 UInt64 1000000 0 Production | |
output_format_pretty_highlight_trailing_spaces 1 0 If enabled and if output is a terminal, highlight trailing spaces with a gray color and underline. \N \N 0 Bool 1 0 Production | |
output_format_pretty_multiline_fields 1 0 If enabled, Pretty formats will render multi-line fields inside table cell, so the table\'s outline will be preserved.\nIf not, they will be rendered as is, potentially deforming the table (one upside of keeping it off is that copy-pasting multi-line values will be easier). \N \N 0 Bool 1 0 Production | |
output_format_pretty_fallback_to_vertical 1 0 If enabled, and the table is wide but short, the Pretty format will output it as the Vertical format does.\nSee `output_format_pretty_fallback_to_vertical_max_rows_per_chunk` and `output_format_pretty_fallback_to_vertical_min_table_width` for detailed tuning of this behavior. \N \N 0 Bool 1 0 Production | |
output_format_pretty_fallback_to_vertical_max_rows_per_chunk 10 0 The fallback to Vertical format (see `output_format_pretty_fallback_to_vertical`) will be activated only if the number of records in a chunk is not more than the specified value. \N \N 0 UInt64 10 0 Production | |
output_format_pretty_fallback_to_vertical_min_table_width 250 0 The fallback to Vertical format (see `output_format_pretty_fallback_to_vertical`) will be activated only if the sum of lengths of columns in a table is at least the specified value, or if at least one value contains a newline character. \N \N 0 UInt64 250 0 Production | |
output_format_pretty_fallback_to_vertical_min_columns 5 0 The fallback to Vertical format (see `output_format_pretty_fallback_to_vertical`) will be activated only if the number of columns is greater than the specified value. \N \N 0 UInt64 5 0 Production | |
insert_distributed_one_random_shard 0 0 Enables or disables random shard insertion into a [Distributed](/engines/table-engines/special/distributed) table when there is no distributed key.\n\nBy default, when inserting data into a `Distributed` table with more than one shard, the ClickHouse server will reject any insertion request if there is no distributed key. When `insert_distributed_one_random_shard = 1`, insertions are allowed and data is forwarded randomly among all shards.\n\nPossible values:\n\n- 0 — Insertion is rejected if there are multiple shards and no distributed key is given.\n- 1 — Insertion is done randomly among all available shards when no distributed key is given. \N \N 0 Bool 0 0 Production | |
exact_rows_before_limit 0 0 When enabled, ClickHouse will provide the exact value for the rows_before_limit_at_least statistic, at the cost of having to read the data before the limit completely \N \N 0 Bool 0 0 Production | |
rows_before_aggregation 0 0 When enabled, ClickHouse will provide the exact value for the rows_before_aggregation statistic, which represents the number of rows read before aggregation \N \N 0 Bool 0 0 Production | |
cross_to_inner_join_rewrite 1 0 Use an inner join instead of a comma/cross join if there are joining expressions in the WHERE section. Values: 0 - no rewrite, 1 - apply if possible for comma/cross joins, 2 - force rewrite of all comma joins and, if possible, cross joins \N \N 0 UInt64 1 0 Production | |
output_format_arrow_low_cardinality_as_dictionary 0 0 Enable output LowCardinality type as Dictionary Arrow type \N \N 0 Bool 0 0 Production | |
output_format_arrow_use_signed_indexes_for_dictionary 1 0 Use signed integers for dictionary indexes in Arrow format \N \N 0 Bool 1 0 Production | |
output_format_arrow_use_64_bit_indexes_for_dictionary 0 0 Always use 64 bit integers for dictionary indexes in Arrow format \N \N 0 Bool 0 0 Production | |
output_format_arrow_string_as_string 1 0 Use Arrow String type instead of Binary for String columns \N \N 0 Bool 1 0 Production | |
output_format_arrow_fixed_string_as_fixed_byte_array 1 0 Use Arrow FIXED_SIZE_BINARY type instead of Binary for FixedString columns. \N \N 0 Bool 1 0 Production | |
output_format_arrow_compression_method lz4_frame 0 Compression method for Arrow output format. Supported codecs: lz4_frame, zstd, none (uncompressed) \N \N 0 ArrowCompression lz4_frame 0 Production | |
output_format_orc_string_as_string 1 0 Use ORC String type instead of Binary for String columns \N \N 0 Bool 1 0 Production | |
output_format_orc_compression_method zstd 0 Compression method for ORC output format. Supported codecs: lz4, snappy, zlib, zstd, none (uncompressed) \N \N 0 ORCCompression zstd 0 Production | |
output_format_orc_row_index_stride 10000 0 Target row index stride in ORC output format \N \N 0 UInt64 10000 0 Production | |
output_format_orc_dictionary_key_size_threshold 0 0 For a string column in ORC output format, if the number of distinct values is greater than this fraction of the total number of non-null rows, turn off dictionary encoding. Otherwise dictionary encoding is enabled \N \N 0 Double 0 0 Production | |
output_format_orc_writer_time_zone_name GMT 0 The time zone name for the ORC writer; the default ORC writer\'s time zone is GMT. \N \N 0 String GMT 0 Production | |
format_capn_proto_enum_comparising_mode by_values 0 How to map ClickHouse Enum and CapnProto Enum \N \N 0 CapnProtoEnumComparingMode by_values 0 Production | |
format_capn_proto_use_autogenerated_schema 1 0 Use autogenerated CapnProto schema when format_schema is not set \N \N 0 Bool 1 0 Production | |
format_protobuf_use_autogenerated_schema 1 0 Use autogenerated Protobuf schema when format_schema is not set \N \N 0 Bool 1 0 Production | |
output_format_schema 0 The path to the file where the automatically generated schema will be saved in [Cap\'n Proto](/interfaces/formats/CapnProto) or [Protobuf](/interfaces/formats/Protobuf) formats. \N \N 0 String 0 Production | |
input_format_mysql_dump_table_name 0 Name of the table in MySQL dump from which to read data \N \N 0 String 0 Production | |
input_format_mysql_dump_map_column_names 1 0 Match columns from table in MySQL dump and columns from ClickHouse table by names \N \N 0 Bool 1 0 Production | |
output_format_sql_insert_max_batch_size 65409 0 The maximum number of rows in one INSERT statement. \N \N 0 UInt64 65409 0 Production | |
output_format_sql_insert_table_name table 0 The name of table in the output INSERT query \N \N 0 String table 0 Production | |
output_format_sql_insert_include_column_names 1 0 Include column names in INSERT query \N \N 0 Bool 1 0 Production | |
output_format_sql_insert_use_replace 0 0 Use REPLACE statement instead of INSERT \N \N 0 Bool 0 0 Production | |
output_format_sql_insert_quote_names 1 0 Quote column names with \'`\' characters \N \N 0 Bool 1 0 Production | |
output_format_values_escape_quote_with_quote 0 0 If true, escape \' with \'\'; otherwise, quote it with \\\\\' \N \N 0 Bool 0 0 Production | |
output_format_bson_string_as_string 0 0 Use BSON String type instead of Binary for String columns. \N \N 0 Bool 0 0 Production | |
input_format_bson_skip_fields_with_unsupported_types_in_schema_inference 0 0 Skip fields with unsupported types while schema inference for format BSON. \N \N 0 Bool 0 0 Production | |
format_display_secrets_in_show_and_select 0 0 Enables or disables showing secrets in `SHOW` and `SELECT` queries for tables, databases,\ntable functions, and dictionaries.\n\nUser wishing to see secrets must also have\n[`display_secrets_in_show_and_select` server setting](../server-configuration-parameters/settings#display_secrets_in_show_and_select)\nturned on and a\n[`displaySecretsInShowAndSelect`](/sql-reference/statements/grant#displaysecretsinshowandselect) privilege.\n\nPossible values:\n\n- 0 — Disabled.\n- 1 — Enabled. \N \N 0 Bool 0 0 Production | |
regexp_dict_allow_hyperscan 1 0 Allow regexp_tree dictionary using Hyperscan library. \N \N 0 Bool 1 0 Production | |
regexp_dict_flag_case_insensitive 0 0 Use case-insensitive matching for a regexp_tree dictionary. Can be overridden in individual expressions with (?i) and (?-i). \N \N 0 Bool 0 0 Production | |
regexp_dict_flag_dotall 0 0 Allow \'.\' to match newline characters for a regexp_tree dictionary. \N \N 0 Bool 0 0 Production | |
dictionary_use_async_executor 0 0 Execute a pipeline for reading dictionary source in several threads. It\'s supported only by dictionaries with local CLICKHOUSE source. \N \N 0 Bool 0 0 Production | |
precise_float_parsing 0 0 Prefer more precise (but slower) float parsing algorithm \N \N 0 Bool 0 0 Production | |
date_time_overflow_behavior ignore 0 Defines the behavior when [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md), [DateTime64](../../sql-reference/data-types/datetime64.md) or integers are converted into Date, Date32, DateTime or DateTime64 but the value cannot be represented in the result type.\n\nPossible values:\n\n- `ignore` — Silently ignore overflows. Results are undefined.\n- `throw` — Throw an exception in case of overflow.\n- `saturate` — Saturate the result. If the value is smaller than the smallest value that can be represented by the target type, the result is chosen as the smallest representable value. If the value is bigger than the largest value that can be represented by the target type, the result is chosen as the largest representable value.\n\nDefault value: `ignore`. \N \N 0 DateTimeOverflowBehavior ignore 0 Production | |
validate_experimental_and_suspicious_types_inside_nested_types 1 0 Validate usage of experimental and suspicious types inside nested types like Array/Map/Tuple \N \N 0 Bool 1 0 Production | |
show_create_query_identifier_quoting_rule when_necessary 0 Set the quoting rule for identifiers in SHOW CREATE query \N \N 0 IdentifierQuotingRule when_necessary 0 Production | |
show_create_query_identifier_quoting_style Backticks 0 Set the quoting style for identifiers in SHOW CREATE query \N \N 0 IdentifierQuotingStyle Backticks 0 Production | |
input_format_arrow_import_nested 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
input_format_parquet_import_nested 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
input_format_orc_import_nested 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete | |
output_format_enable_streaming 0 0 Obsolete setting, does nothing. \N \N 0 Bool 0 1 Obsolete |
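The rows above appear to be a raw dump of format-related rows from ClickHouse's `system.settings` table. As a minimal sketch, assuming access to a running ClickHouse server (the `LIKE` filter and the inline CSV sample are illustrative only), the same settings can be inspected and overridden per session or per query:

```sql
-- Inspect format settings and their current values on the running server
SELECT name, value, changed, description
FROM system.settings
WHERE name LIKE 'input_format_csv%';

-- Override a setting for the current session
SET input_format_csv_detect_header = 1;

-- Or scope the override to a single query, e.g. during schema inference
DESC format(CSV, 'id,name\n1,"Alice"')
SETTINGS input_format_csv_detect_header = 1;
```

Session-level `SET` changes last only for the current session; query-level `SETTINGS` clauses, as used in several of the examples embedded in the descriptions above, apply to that statement alone.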