@Slach
Last active August 11, 2020 17:13
Reproduce failed dictionary loading on server startup in ClickHouse 20.6

Steps to reproduce:

docker-compose down
docker-compose run clickhouse
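
For context, the schema below is a minimal sketch of what the failing setup might look like. The CREATE TABLE is reconstructed from the ATTACH TABLE statement in the log; the `default.dict` definition is not part of this gist, so the CREATE DICTIONARY here is only an assumed example (a complex-key dictionary with a String key and a numeric `value` attribute, with a hypothetical source table `default.dict_source`). The key point is that the column DEFAULT expression calls dictGet('default.dict', ...), and during metadata loading at server startup the table is attached before the dictionary is available, producing the "external dictionary 'default.dict' not found" error seen below.

-- Assumed dictionary definition (not included in the gist); the source table
-- 'default.dict_source' is hypothetical and only illustrates the shape of the dictionary.
CREATE DICTIONARY default.dict
(
    stamp String,
    value UInt64 DEFAULT 0
)
PRIMARY KEY stamp
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' DB 'default' TABLE 'dict_source'))
LIFETIME(MIN 60 MAX 120)
LAYOUT(COMPLEX_KEY_HASHED());

-- Table reconstructed from the ATTACH TABLE statement in the log: the DEFAULT expression
-- of md_ad_format references default.dict, so attaching this table at startup requires the
-- dictionary to already be loadable.
CREATE TABLE default.table
(
    site_id UInt32,
    stamp LowCardinality(Nullable(String)),
    md_ad_format LowCardinality(String) DEFAULT if(
        dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0,
        replaceAll(
            decodeURLComponent(
                extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')
                    [dictGet('default.dict', 'value', tuple(stamp))]
            ),
            '+', ' '
        ),
        ''
    )
)
ENGINE = MergeTree()
ORDER BY tuple()
SETTINGS index_granularity = 8192;

The server log from the repeated startup attempts (with several variants of the column type and DEFAULT expression, on ClickHouse 20.6, 20.1, and 20.3) follows.
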
2020.08.11 12:29:03.636355 [ 39 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 12:29:03.636877 [ 39 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 12:29:03.670320 [ 39 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:29:03.671434 [ 39 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:29:03.672211 [ 39 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:29:03.673014 [ 39 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:29:05.843468 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]), '+', ' '), '')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
16. ? @ 0xf241e19 in /usr/bin/clickhouse
17. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
18. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
19. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
20. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
21. ? @ 0xeffd6c8 in /usr/bin/clickhouse
22. ? @ 0xeffe082 in /usr/bin/clickhouse
23. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
24. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
25. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
26. ? @ 0xa26f4a3 in /usr/bin/clickhouse
27. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
28. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 12:29:06.857716 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]), '+', ' '), '')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 12:36:20.594743 [ 39 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 12:36:20.595033 [ 39 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 12:36:20.617097 [ 39 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:36:20.618066 [ 39 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:36:20.618894 [ 39 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:36:20.619809 [ 39 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:36:22.867770 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]), '+', ' '), '')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
16. ? @ 0xf241e19 in /usr/bin/clickhouse
17. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
18. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
19. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
20. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
21. ? @ 0xeffd6c8 in /usr/bin/clickhouse
22. ? @ 0xeffe082 in /usr/bin/clickhouse
23. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
24. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
25. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
26. ? @ 0xa26f4a3 in /usr/bin/clickhouse
27. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
28. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 12:36:23.881242 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]), '+', ' '), '')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 13:09:44.888484 [ 38 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 13:09:44.888804 [ 38 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 13:09:44.913830 [ 38 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:09:44.914553 [ 38 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:09:44.915319 [ 38 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:09:44.916463 [ 38 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:09:47.132105 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. ? @ 0xf241e19 in /usr/bin/clickhouse
15. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
16. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
17. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
18. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
19. ? @ 0xeffd6c8 in /usr/bin/clickhouse
20. ? @ 0xeffe082 in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
24. ? @ 0xa26f4a3 in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 13:09:48.160468 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 13:10:03.691691 [ 38 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 13:10:03.692378 [ 38 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 13:10:03.717050 [ 38 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:10:03.717963 [ 38 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:10:03.719207 [ 38 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:10:03.720454 [ 38 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:10:05.898975 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. ? @ 0xf241e19 in /usr/bin/clickhouse
15. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
16. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
17. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
18. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
19. ? @ 0xeffd6c8 in /usr/bin/clickhouse
20. ? @ 0xeffe082 in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
24. ? @ 0xa26f4a3 in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 13:10:06.911275 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 13:17:59.862677 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.1.16.120 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:17:59.863795 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.1.16.120 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:17:59.864814 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.1.16.120 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:18:01.946409 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: Cannot attach table '`table`' from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT CAST(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))), 'String')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. 0x102b97a0 Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) in /usr/bin/clickhouse
1. 0x8e885cd DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) in /usr/bin/clickhouse
2. 0xcf09c99 ? in /usr/bin/clickhouse
3. 0xcf14f63 std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const in /usr/bin/clickhouse
4. 0x92e8933 DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
5. 0x90f7825 DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
6. 0x911a9fa DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
7. 0x911adf2 DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
8. 0x912117c DB::FunctionOverloadResolverAdaptor::build(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
9. 0xcf6a86e DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) in /usr/bin/clickhouse
10. 0xcf6aa7d DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) in /usr/bin/clickhouse
11. 0xcf4d821 DB::ScopeStack::addAction(DB::ExpressionAction const&) in /usr/bin/clickhouse
12. 0xcf533c2 DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) in /usr/bin/clickhouse
13. 0xcf521b6 DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) in /usr/bin/clickhouse
14. 0xcf521b6 DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) in /usr/bin/clickhouse
15. 0xcf3f5a9 DB::InDepthNodeVisitor<DB::ActionsMatcher, true, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) in /usr/bin/clickhouse
16. 0xcf34bb3 ? in /usr/bin/clickhouse
17. 0xcf36ee5 DB::ExpressionAnalyzer::getActions(bool, bool) in /usr/bin/clickhouse
18. 0xcf25d2b DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&) in /usr/bin/clickhouse
19. 0xcfb7592 DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) in /usr/bin/clickhouse
20. 0xcfad8b5 ? in /usr/bin/clickhouse
21. 0xcfae00b ? in /usr/bin/clickhouse
22. 0x8eaca87 ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) in /usr/bin/clickhouse
23. 0x8ead0e8 ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const in /usr/bin/clickhouse
24. 0x8eabf97 ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) in /usr/bin/clickhouse
25. 0x8eaa3a3 ? in /usr/bin/clickhouse
26. 0x76db start_thread in /lib/x86_64-linux-gnu/libpthread-2.27.so
27. 0x12188f clone in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.1.16.120 (official build))
2020.08.11 13:18:01.951451 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: Cannot attach table '`table`' from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT CAST(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))), 'String')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 13:23:20.333870 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:23:20.334766 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:23:20.335747 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:23:20.336621 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:23:22.561346 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0xd076011 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd081333 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x93f2561 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x93f4cd3 in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x91da895 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x91fe4da in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x91fe8d2 in /usr/bin/clickhouse
9. DB::FunctionOverloadResolverAdaptor::build(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x9204c5c in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13aaae in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13acad in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xd3623ed in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd369b1a in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
15. ? @ 0xd34bd1e in /usr/bin/clickhouse
16. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xd34e85f in /usr/bin/clickhouse
17. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xd6a8c4a in /usr/bin/clickhouse
18. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&) @ 0xd092ba3 in /usr/bin/clickhouse
19. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xd0ae74a in /usr/bin/clickhouse
20. ? @ 0xd0a4def in /usr/bin/clickhouse
21. ? @ 0xd0a55d5 in /usr/bin/clickhouse
22. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
23. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
24. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
25. ? @ 0x8fb97d3 in /usr/bin/clickhouse
26. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
27. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 13:23:23.579588 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 14:04:29.013680 [ 39 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 14:04:29.019352 [ 39 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 14:04:29.056556 [ 39 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:29.058760 [ 39 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:29.059961 [ 39 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:29.061235 [ 39 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:31.311394 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. ? @ 0xf241e19 in /usr/bin/clickhouse
15. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
16. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
17. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
18. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
19. ? @ 0xeffd6c8 in /usr/bin/clickhouse
20. ? @ 0xeffe082 in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
24. ? @ 0xa26f4a3 in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 14:04:32.380566 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 14:04:47.645478 [ 40 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 14:04:47.646364 [ 40 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 14:04:47.693327 [ 40 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:47.694812 [ 40 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:47.696546 [ 40 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:47.698746 [ 40 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:50.009146 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. ? @ 0xf241e19 in /usr/bin/clickhouse
15. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
16. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
17. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
18. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
19. ? @ 0xeffd6c8 in /usr/bin/clickhouse
20. ? @ 0xeffe082 in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
24. ? @ 0xa26f4a3 in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 14:04:51.029587 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 15:39:17.621152 [ 38 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 15:39:17.621803 [ 38 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 15:39:17.641775 [ 38 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:17.642667 [ 38 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:17.643922 [ 38 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:17.645196 [ 38 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:19.853914 [ 1 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:19.854917 [ 1 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:19.856006 [ 1 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:19.856885 [ 1 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:12.675908 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:12.677752 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:12.679218 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:12.680714 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:14.823513 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:14.825088 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:14.826332 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:14.827508 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:31.591639 [ 38 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:31.592750 [ 38 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:31.593603 [ 38 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:31.594468 [ 38 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:33.787804 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:33.789192 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:33.790170 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:33.790935 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:00:59.854945 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:00:59.855710 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:00:59.856446 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:00:59.857088 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:01:00.972641 [ 74 ] {0dd99465-edc3-4e6a-aab4-33568873127c} <Error> executeQuery: Code: 36, e.displayText() = DB::Exception: external dictionary 'wister.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible. (version 20.3.16.165 (official build)) (from 127.0.0.1:44168) (in query: CREATE TABLE IF NOT EXISTS default.raw_data (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `event_type` Enum8('transaction' = 0, 'session' = 1) DEFAULT CAST('transaction', 'Enum8(\'transaction\' = 0, \'session\' = 1)'), `import_date` DateTime DEFAULT toDateTime(now()), `uid` String DEFAULT '', `session2_id` UInt64, `date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `datefin` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `wid` Int64 DEFAULT CAST(0, 'Int64'), `famillewap` Int32 DEFAULT CAST(0, 'Int32'), `ratio` Float64 DEFAULT CAST(0, 'Float64'), `Status` UInt8 DEFAULT 0, `periode` UInt16 DEFAULT CAST(0, 'UInt16'), `uacore` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `langue` String DEFAULT '', `ml` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `video` LowCardinality(Nullable(String)), `xhtml` LowCardinality(String) DEFAULT CAST('non', 'LowCardinality(String)'), `telechargement` UInt8 DEFAULT 0, `uaextension` LowCardinality(Nullable(String)), `multiobject` UInt8 DEFAULT 0, `mms` UInt8 DEFAULT 0, `best_ml` LowCardinality(String) DEFAULT CAST('wml', 'LowCardinality(String)'), `3g` FixedString(1) DEFAULT CAST('0', 'FixedString(1)'), `age` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `login` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `famille` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `created` Date DEFAULT toDate('0000-00-00'), `modified` Date DEFAULT toDate('0000-00-00'), `familledld` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `chatvalidation` UInt8 DEFAULT 0, `https` UInt8 DEFAULT 0, `motclef` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `bgcolor` Enum8('0' = 0, '1' = 1) DEFAULT CAST('0', 'Enum8(\'0\' = 0, \'1\' = 1)'), `http_x_nokia_bearer` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stream` UInt8 DEFAULT 0, `ip` LowCardinality(Nullable(String)), `tactile` Enum8('0' = 0, '1' = 1) DEFAULT CAST('0', 'Enum8(\'0\' = 0, \'1\' = 1)') COMMENT '1 Le terminal est tactile, 0 sinon', `familledldvideo` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)') COMMENT 'Format vidéo pour le terminal', `transaction_id` UInt32, `trxidpartenaire` Nullable(String), `date_achat` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `idoffre` Nullable(UInt32), `offre` LowCardinality(Nullable(String)), `price_point_code` LowCardinality(String) DEFAULT CAST('MISC', 'LowCardinality(String)'), `price_point` LowCardinality(String) DEFAULT CAST('Various', 'LowCardinality(String)'), `typeachat` FixedString(1) DEFAULT CAST('', 'FixedString(1)'), `type` LowCardinality(Nullable(String)), `offer_type` LowCardinality(Nullable(String)), `groupe` LowCardinality(Nullable(String)), `distributeur` LowCardinality(Nullable(String)), `affilie` LowCardinality(Nullable(String)), `ope_factu` LowCardinality(Nullable(String)) DEFAULT CAST('NULL', 'LowCardinality(Nullable(String))'), `booster` UInt8 DEFAULT 0, `abo_id` Nullable(UInt32), `prix` Nullable(Float64), `cawister` Nullable(Float64), `castats` Nullable(Float64), `ca` Nullable(Float64), `sessionid` String DEFAULT '', 
`device_family` LowCardinality(String) DEFAULT CAST('Other', 'LowCardinality(String)'), `code_service` LowCardinality(String), `nom_service` LowCardinality(Nullable(String)), `opco` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `nom_operateur` LowCardinality(Nullable(String)), `origine` LowCardinality(String), `pays` LowCardinality(String) DEFAULT CAST('France', 'LowCardinality(String)') COMMENT 'pays de provenance de la session', `pays_code` LowCardinality(String) DEFAULT CAST('FRA', 'LowCardinality(String)'), `ope_telecom` LowCardinality(String) DEFAULT CAST('FRA_WISTER', 'LowCardinality(String)') COMMENT 'operateur telecom de la session', `ope_mobile` LowCardinality(String) DEFAULT CAST('INC', 'LowCardinality(String)'), `crm` Nullable(UInt32), `stamp` LowCardinality(Nullable(String)), `stat_mktg_tracker` LowCardinality(Nullable(String)), `stat_crm_tracker` LowCardinality(Nullable(String)), `optin` LowCardinality(String) DEFAULT CAST('INC', 'LowCardinality(String)'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `id_site` Nullable(UInt32), `partco` LowCardinality(String), `is_bot` UInt8, `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_ad_id` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_id'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'adid')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'adid'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_app_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_app_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_name')) > 0, 
replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_banner_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_banner_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_bid` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_bid_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_blp_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_blp_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 
'blp_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_browser` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'browser')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'browser'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_campaign_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'campaign_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'campaign_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_carrier` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'carrier')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'carrier'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_category` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_category_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_click_id` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'click_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'click_id'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'clickId')) > 0, 
replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'clickId'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom1` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom1')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom1'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom2` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom2')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom2'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom_deux` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_deux')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_deux'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom_un` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_un')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_un'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_country` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'country')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'country'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_device` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'device')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', 
tuple(code_affilie))), 'device'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_lp` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'MabForcedSolutionId')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'MabForcedSolutionId'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_lp_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'lp_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'lp_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_os` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'os')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'os'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_pricing_model` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_model')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_model'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_mod')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_mod'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_pub_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pub_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pub_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_publisher_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'publisher_id')) > 0, 
replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'publisher_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_site_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_site_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_stat_tracker` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'stat_tracker')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'stat_tracker'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_target_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'target_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'target_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_timestamp` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'timestamp')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'timestamp'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_zone` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 
'partner_id', tuple(code_affilie))), 'zone'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_zone_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_ad_type` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_type')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_type'))]), '+', ' '), ''), 'LowCardinality(String)'), `stat_tracker` LowCardinality(String) DEFAULT CAST(if(coalesce(stamp, '') LIKE '%_MB:%', substr(stamp, 1, position(stamp, '_MB:') - 1), coalesce(stamp, '')), 'LowCardinality(String)')) ENGINE = MergeTree() PARTITION BY toYYYYMM(event_date) ORDER BY (toDate(event_date), event_type, code_affilie, device_family, code_service) SETTINGS index_granularity = 8192), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0xd076011 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd081333 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x93f2561 in /usr/bin/clickhouse
5. DB::FunctionDictGet<DB::DataTypeNumber<int>, DB::NameDictGetInt32>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0x9423bcf in /usr/bin/clickhouse
6. DB::ExecutableFunctionAdaptor::defaultImplementationForConstantArguments(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x91ff218 in /usr/bin/clickhouse
7. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x91ff492 in /usr/bin/clickhouse
8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x9200201 in /usr/bin/clickhouse
9. DB::ExpressionAction::prepare(DB::Block&, DB::Settings const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd139cb5 in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13a833 in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13acad in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xd3623ed in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd369b1a in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
16. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
17. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
18. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
19. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
20. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
21. ? @ 0xd34bd1e in /usr/bin/clickhouse
22. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xd34e85f in /usr/bin/clickhouse
23. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xd6a8c4a in /usr/bin/clickhouse
24. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&) @ 0xd092ba3 in /usr/bin/clickhouse
25. DB::InterpreterCreateQuery::setProperties(DB::ASTCreateQuery&) const @ 0xd094e02 in /usr/bin/clickhouse
26. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0xd09640d in /usr/bin/clickhouse
27. DB::InterpreterCreateQuery::execute() @ 0xd098721 in /usr/bin/clickhouse
28. ? @ 0xd5aa698 in /usr/bin/clickhouse
29. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd5ad2d1 in /usr/bin/clickhouse
30. DB::TCPHandler::runImpl() @ 0x90794f9 in /usr/bin/clickhouse
31. DB::TCPHandler::run() @ 0x907a4e0 in /usr/bin/clickhouse
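The same `external dictionary ... not found: default expression and column type are incompatible` error also fires on an explicit CREATE TABLE under 20.3.16.165 (the query at 16:01:00 above), not only while attaching metadata at startup. There the DEFAULT expressions chain dictGet* calls across two dictionaries (`wister.dict_prod_mb2_params` and `wister.dict_prod_partner_affiliate_links`). A trimmed sketch of one such column, reduced from the full statement above (the table name `raw_data_sketch` is hypothetical; the expression is copied from the `md_ad_format` column in the log, which declares dozens of md_* columns built the same way):

```sql
-- Reduced sketch of one DEFAULT expression from the CREATE TABLE in the log.
-- code_affilie feeds dictGetInt32 on wister.dict_prod_partner_affiliate_links,
-- whose result keys a dictGetUInt8 lookup on wister.dict_prod_mb2_params.
CREATE TABLE default.raw_data_sketch
(
    `stamp` LowCardinality(Nullable(String)),
    `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'),
    `md_ad_format` LowCardinality(String) DEFAULT CAST(
        if(
            dictGetUInt8('wister.dict_prod_mb2_params', 'order',
                (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0,
            replaceAll(
                decodeURLComponent(
                    extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[
                        dictGetUInt8('wister.dict_prod_mb2_params', 'order',
                            (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))
                    ]),
                '+', ' '),
            ''),
        'LowCardinality(String)')
)
ENGINE = MergeTree()
ORDER BY tuple();
```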
2020.08.11 16:01:02.023906 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:01:02.024673 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:01:02.025282 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:01:02.025927 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:53.291597 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:53.292529 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:53.293442 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:53.294601 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:55.524742 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:55.525617 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:55.526625 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:55.527494 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:09.996789 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:09.997622 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:09.998392 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:09.999122 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:20.755383 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:20.756147 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:20.756853 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:20.757485 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:22.929528 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:22.930225 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:22.930785 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:22.931365 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:35.011595 [ 38 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:35.012378 [ 38 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:35.013009 [ 38 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:35.013695 [ 38 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:37.202709 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:37.203605 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:37.204446 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:37.205312 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:34.463918 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:34.464829 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:34.465624 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:34.466521 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:35.509639 [ 88 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:28:35.511432 [ 87 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:28:40: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
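The two `Code: 27` blocks above are the actual root cause: the TabSeparated source of `default.dict_prod_partner_affiliate_links` does not match the dictionary's declared structure (Column 0 `code_affilie` String, Column 1 `id` Int32, Column 2 `partner_id` Int32) — the text `network\t1\tcode_affilie` arriving where an Int32 is expected looks like a header line or a different column order in the source data. Below is only a hypothetical sketch of a dictionary with the same declared shape, assuming a DDL-defined dictionary with a TabSeparated file source (the gist's actual `SOURCE()`, `LAYOUT()` and path may differ); note that with plain `TabSeparated` the first line must already be data, while a file that starts with a header line would need `TabSeparatedWithNames`.

```sql
-- Hypothetical sketch, NOT the gist's actual definition: same attribute order as in
-- the parse error above (code_affilie, id, partner_id). With format 'TabSeparated'
-- the source must contain no header row; use 'TabSeparatedWithNames' if it does.
CREATE DICTIONARY IF NOT EXISTS default.dict_prod_partner_affiliate_links
(
    `code_affilie` String,
    `id` Int32,
    `partner_id` Int32
)
PRIMARY KEY code_affilie
SOURCE(FILE(path '/var/lib/clickhouse/user_files/partner_affiliate_links.tsv' format 'TabSeparated'))
LAYOUT(COMPLEX_KEY_HASHED())
LIFETIME(MIN 60 MAX 300);
```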
2020.08.11 16:28:35.513599 [ 74 ] {333e8f74-e580-4ce1-b33f-aa896f0d2b1f} <Error> executeQuery: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
: default expression and column type are incompatible. (version 20.3.16.165 (official build)) (from 127.0.0.1:44196) (in query: CREATE TABLE IF NOT EXISTS default.table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)')) ENGINE = MergeTree() ORDER BY tuple()), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
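Because the dictionary never loads, the `CREATE TABLE` in the `executeQuery` entry above fails while validating the `md_ad_format` DEFAULT expression, which is why the failure surfaces as "default expression and column type are incompatible" rather than as a dictionary error. A stripped-down sketch of the failing pattern (hypothetical table name, only the relevant columns): any column whose DEFAULT calls `dictGet*()` requires the referenced dictionaries to be loadable at CREATE or ATTACH time.

```sql
-- Hypothetical minimal form of the pattern that fails above: a DEFAULT expression
-- that calls dictGet*() on an external dictionary. If that dictionary cannot be
-- loaded, validating the expression fails and the CREATE/ATTACH is rejected.
CREATE TABLE IF NOT EXISTS default.table_minimal
(
    `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'),
    `partner_id` Int32 DEFAULT dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))
)
ENGINE = MergeTree()
ORDER BY tuple();
```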
2020.08.11 16:28:37.599848 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:37.600776 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:37.601531 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:37.602385 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:42.092122 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:42.093007 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:42.093516 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:42.094167 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:43.150078 [ 88 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:38:43.151697 [ 87 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:38:49: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:38:43.153379 [ 74 ] {abe03996-f9e6-4296-8036-275f784f63f8} <Error> executeQuery: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
: default expression and column type are incompatible. (version 20.3.16.165 (official build)) (from 127.0.0.1:44202) (in query: CREATE TABLE IF NOT EXISTS default.table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)')) ENGINE = MergeTree() ORDER BY tuple()), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
2020.08.11 16:38:45.207075 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:45.208033 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:45.208739 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:45.209460 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:38:56.837564 [ 170 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:38:56.839307 [ 169 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:39:01: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
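The second dictionary, `default.dict_prod_mb2_params`, fails the same way (declared `partner_id` UInt32, `display_name` String, `order` UInt8 versus source text starting with `name\torder\tdisplay_name\t1`), so both dictionaries referenced by the DEFAULT expression stay unloaded and the remaining log entries are retries of the same pattern. While reproducing, the load state of both dictionaries can be checked with a query along these lines (a diagnostic sketch, not part of the repro scripts):

```sql
-- Diagnostic sketch: current status and last load error of the two dictionaries
-- referenced by the DEFAULT expression.
SELECT database, name, status, last_exception
FROM system.dictionaries
WHERE name IN ('dict_prod_partner_affiliate_links', 'dict_prod_mb2_params');
```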
2020.08.11 16:38:56.841913 [ 170 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:38:56.843354 [ 171 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:39:02: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:05.209876 [ 170 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:05.210013 [ 172 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:05.211205 [ 171 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:39:12: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:05.211747 [ 174 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:39:10: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:10.222276 [ 175 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:10.225806 [ 176 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:39:18: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:15.222867 [ 173 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:15.225774 [ 177 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:39:24: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:20.224916 [ 171 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:20.226474 [ 172 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:39:30: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:25.228716 [ 175 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:25.230728 [ 169 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:39:34: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:30.230606 [ 177 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:30.232010 [ 173 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:39:45: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:35.241731 [ 172 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:35.243880 [ 171 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:39:43: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:45.247679 [ 170 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:45.247728 [ 174 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:45.248573 [ 169 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:39:50: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:45.248942 [ 175 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:39:57: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:55.247665 [ 171 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:55.249362 [ 172 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:40:26: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:39:59.519292 [ 146 ] {143e32c2-f536-4213-8bef-1cdab2f12022} <Error> executeQuery: Code: 60, e.displayText() = DB::Exception: Table default.dict_prod_partner_affiliate_links doesn't exist. (version 20.3.16.165 (official build)) (from 127.0.0.1:44204) (in query: SELECT * FROM dict_prod_partner_affiliate_links), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. DB::Context::getTableImpl(DB::StorageID const&, std::__1::optional<DB::Exception>*) const @ 0xd034d64 in /usr/bin/clickhouse
3. DB::Context::getTable(DB::StorageID const&) const @ 0xd034efb in /usr/bin/clickhouse
4. DB::Context::getTable(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd034fbd in /usr/bin/clickhouse
5. DB::JoinedTables::getLeftTableStorage() @ 0xd4aef22 in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd18ffb1 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd190ef9 in /usr/bin/clickhouse
8. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd39bb06 in /usr/bin/clickhouse
9. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::QueryProcessingStage::Enum) @ 0xd0e4af4 in /usr/bin/clickhouse
10. ? @ 0xd5aa4e5 in /usr/bin/clickhouse
11. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd5ad2d1 in /usr/bin/clickhouse
12. DB::TCPHandler::runImpl() @ 0x90794f9 in /usr/bin/clickhouse
13. DB::TCPHandler::run() @ 0x907a4e0 in /usr/bin/clickhouse
14. Poco::Net::TCPServerConnection::start() @ 0xe40deab in /usr/bin/clickhouse
15. Poco::Net::TCPServerDispatcher::run() @ 0xe40e32d in /usr/bin/clickhouse
16. Poco::PooledThread::run() @ 0x106348d7 in /usr/bin/clickhouse
17. Poco::ThreadImpl::runnableEntry(void*) @ 0x106306dc in /usr/bin/clickhouse
18. ? @ 0x1063207d in /usr/bin/clickhouse
19. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
20. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
2020.08.11 16:40:00.248739 [ 170 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:40:00.251812 [ 177 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:40:29: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:40:12.163069 [ 146 ] {e9e9a1d8-ee13-48fb-8213-abc904e94f01} <Error> executeQuery: Code: 60, e.displayText() = DB::Exception: Table default.dict_prod_partner_affiliate_links doesn't exist. (version 20.3.16.165 (official build)) (from 127.0.0.1:44204) (in query: SELECT * FROM default.dict_prod_partner_affiliate_links), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. DB::Context::getTableImpl(DB::StorageID const&, std::__1::optional<DB::Exception>*) const @ 0xd034d64 in /usr/bin/clickhouse
3. DB::Context::getTable(DB::StorageID const&) const @ 0xd034efb in /usr/bin/clickhouse
4. DB::Context::getTable(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd034fbd in /usr/bin/clickhouse
5. DB::JoinedTables::getLeftTableStorage() @ 0xd4aef22 in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd18ffb1 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd190ef9 in /usr/bin/clickhouse
8. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd39bb06 in /usr/bin/clickhouse
9. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::QueryProcessingStage::Enum) @ 0xd0e4af4 in /usr/bin/clickhouse
10. ? @ 0xd5aa4e5 in /usr/bin/clickhouse
11. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd5ad2d1 in /usr/bin/clickhouse
12. DB::TCPHandler::runImpl() @ 0x90794f9 in /usr/bin/clickhouse
13. DB::TCPHandler::run() @ 0x907a4e0 in /usr/bin/clickhouse
14. Poco::Net::TCPServerConnection::start() @ 0xe40deab in /usr/bin/clickhouse
15. Poco::Net::TCPServerDispatcher::run() @ 0xe40e32d in /usr/bin/clickhouse
16. Poco::PooledThread::run() @ 0x106348d7 in /usr/bin/clickhouse
17. Poco::ThreadImpl::runnableEntry(void*) @ 0x106306dc in /usr/bin/clickhouse
18. ? @ 0x1063207d in /usr/bin/clickhouse
19. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
20. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
2020.08.11 16:40:30.274697 [ 171 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:40:30.274735 [ 173 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:40:30.276368 [ 176 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:41:40: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:40:30.277215 [ 175 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:40:57: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:41:00.295953 [ 177 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:41:00.298852 [ 170 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_partner_affiliate_links', next update is scheduled at 2020-08-11 16:42:11: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:41:40.316972 [ 171 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:41:40.319104 [ 172 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:42:41: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:42:32.263521 [ 146 ] {c83f9d4a-bd9a-4c38-9e34-c82708bfae50} <Error> executeQuery: Code: 60, e.displayText() = DB::Exception: Table default.dict_prod_mb2_params doesn't exist. (version 20.3.16.165 (official build)) (from 127.0.0.1:44208) (in query: SELECT * FROM default.dict_prod_mb2_params), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. DB::Context::getTableImpl(DB::StorageID const&, std::__1::optional<DB::Exception>*) const @ 0xd034d64 in /usr/bin/clickhouse
3. DB::Context::getTable(DB::StorageID const&) const @ 0xd034efb in /usr/bin/clickhouse
4. DB::Context::getTable(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd034fbd in /usr/bin/clickhouse
5. DB::JoinedTables::getLeftTableStorage() @ 0xd4aef22 in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd18ffb1 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd190ef9 in /usr/bin/clickhouse
8. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd39bb06 in /usr/bin/clickhouse
9. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::QueryProcessingStage::Enum) @ 0xd0e4af4 in /usr/bin/clickhouse
10. ? @ 0xd5aa4e5 in /usr/bin/clickhouse
11. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd5ad2d1 in /usr/bin/clickhouse
12. DB::TCPHandler::runImpl() @ 0x90794f9 in /usr/bin/clickhouse
13. DB::TCPHandler::run() @ 0x907a4e0 in /usr/bin/clickhouse
14. Poco::Net::TCPServerConnection::start() @ 0xe40deab in /usr/bin/clickhouse
15. Poco::Net::TCPServerDispatcher::run() @ 0xe40e32d in /usr/bin/clickhouse
16. Poco::PooledThread::run() @ 0x106348d7 in /usr/bin/clickhouse
17. Poco::ThreadImpl::runnableEntry(void*) @ 0x106306dc in /usr/bin/clickhouse
18. ? @ 0x1063207d in /usr/bin/clickhouse
19. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
20. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
2020.08.11 16:42:45.373730 [ 176 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:42:45.377427 [ 171 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'default.dict_prod_mb2_params', next update is scheduled at 2020-08-11 16:48:18: Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: name\torder\tdisplay_name\t1: (at row 1)
Row 1:
Column 0, name: partner_id, type: UInt32, parsed text: "1"
Column 1, name: display_name, type: String, parsed text: "1"
Column 2, name: order, type: UInt8, ERROR: text "name<TAB>order" is not like UInt8
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
13. ? @ 0x8fb97d3 in /usr/bin/clickhouse
14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
15. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 16:42:46.545609 [ 146 ] {33a2b33a-a2af-4539-b5e9-e648fdb30b83} <Error> executeQuery: Code: 60, e.displayText() = DB::Exception: Table default.dict_prod_mb2_params doesn't exist. (version 20.3.16.165 (official build)) (from 127.0.0.1:44208) (in query: SELECT * FROM default.dict_prod_mb2_params), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. DB::Context::getTableImpl(DB::StorageID const&, std::__1::optional<DB::Exception>*) const @ 0xd034d64 in /usr/bin/clickhouse
3. DB::Context::getTable(DB::StorageID const&) const @ 0xd034efb in /usr/bin/clickhouse
4. DB::Context::getTable(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd034fbd in /usr/bin/clickhouse
5. DB::JoinedTables::getLeftTableStorage() @ 0xd4aef22 in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd18ffb1 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd190ef9 in /usr/bin/clickhouse
8. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd39bb06 in /usr/bin/clickhouse
9. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::QueryProcessingStage::Enum) @ 0xd0e4af4 in /usr/bin/clickhouse
10. ? @ 0xd5aa4e5 in /usr/bin/clickhouse
11. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd5ad2d1 in /usr/bin/clickhouse
12. DB::TCPHandler::runImpl() @ 0x90794f9 in /usr/bin/clickhouse
13. DB::TCPHandler::run() @ 0x907a4e0 in /usr/bin/clickhouse
14. Poco::Net::TCPServerConnection::start() @ 0xe40deab in /usr/bin/clickhouse
15. Poco::Net::TCPServerDispatcher::run() @ 0xe40e32d in /usr/bin/clickhouse
16. Poco::PooledThread::run() @ 0x106348d7 in /usr/bin/clickhouse
17. Poco::ThreadImpl::runnableEntry(void*) @ 0x106306dc in /usr/bin/clickhouse
18. ? @ 0x1063207d in /usr/bin/clickhouse
19. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
20. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
2020.08.11 16:45:30.722221 [ 146 ] {bea2f4a1-79c4-4011-8ded-e0f73b255293} <Error> executeQuery: Code: 60, e.displayText() = DB::Exception: Table default.dict_prod_mb2_params doesn't exist. (version 20.3.16.165 (official build)) (from 127.0.0.1:44212) (in query: SELECT * FROM default.dict_prod_mb2_params), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. DB::Context::getTableImpl(DB::StorageID const&, std::__1::optional<DB::Exception>*) const @ 0xd034d64 in /usr/bin/clickhouse
3. DB::Context::getTable(DB::StorageID const&) const @ 0xd034efb in /usr/bin/clickhouse
4. DB::Context::getTable(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd034fbd in /usr/bin/clickhouse
5. DB::JoinedTables::getLeftTableStorage() @ 0xd4aef22 in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd18ffb1 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd190ef9 in /usr/bin/clickhouse
8. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xd39bb06 in /usr/bin/clickhouse
9. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, DB::Context&, DB::QueryProcessingStage::Enum) @ 0xd0e4af4 in /usr/bin/clickhouse
10. ? @ 0xd5aa4e5 in /usr/bin/clickhouse
11. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd5ad2d1 in /usr/bin/clickhouse
12. DB::TCPHandler::runImpl() @ 0x90794f9 in /usr/bin/clickhouse
13. DB::TCPHandler::run() @ 0x907a4e0 in /usr/bin/clickhouse
14. Poco::Net::TCPServerConnection::start() @ 0xe40deab in /usr/bin/clickhouse
15. Poco::Net::TCPServerDispatcher::run() @ 0xe40e32d in /usr/bin/clickhouse
16. Poco::PooledThread::run() @ 0x106348d7 in /usr/bin/clickhouse
17. Poco::ThreadImpl::runnableEntry(void*) @ 0x106306dc in /usr/bin/clickhouse
18. ? @ 0x1063207d in /usr/bin/clickhouse
19. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
20. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
2020.08.11 16:46:08.749355 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:46:08.751467 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:46:08.752267 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:46:08.753259 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:46:10.963115 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0xd076011 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd081333 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x93f2561 in /usr/bin/clickhouse
5. DB::FunctionDictGet<DB::DataTypeNumber<int>, DB::NameDictGetInt32>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0x9423bcf in /usr/bin/clickhouse
6. DB::ExecutableFunctionAdaptor::defaultImplementationForConstantArguments(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x91ff218 in /usr/bin/clickhouse
7. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x91ff492 in /usr/bin/clickhouse
8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x9200201 in /usr/bin/clickhouse
9. DB::ExpressionAction::prepare(DB::Block&, DB::Settings const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd139cb5 in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13a833 in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13acad in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xd3623ed in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd369b1a in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
16. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
17. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
18. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
19. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
20. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
21. ? @ 0xd34bd1e in /usr/bin/clickhouse
22. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xd34e85f in /usr/bin/clickhouse
23. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xd6a8c4a in /usr/bin/clickhouse
24. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&) @ 0xd092ba3 in /usr/bin/clickhouse
25. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xd0ae74a in /usr/bin/clickhouse
26. ? @ 0xd0a4def in /usr/bin/clickhouse
27. ? @ 0xd0a55d5 in /usr/bin/clickhouse
28. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
29. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
30. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
31. ? @ 0x8fb97d3 in /usr/bin/clickhouse
(version 20.3.16.165 (official build))
2020.08.11 16:46:11.975345 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 16:47:10.352296 [ 39 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 16:47:10.352722 [ 39 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 16:47:10.387191 [ 39 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:47:10.388998 [ 39 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:47:10.391181 [ 39 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:47:10.394100 [ 39 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:47:12.569994 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGet<DB::DataTypeNumber<int>, DB::NameDictGetInt32>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xb0b918f in /usr/bin/clickhouse
6. DB::ExecutableFunctionAdaptor::defaultImplementationForConstantArguments(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6cb96 in /usr/bin/clickhouse
7. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6cdb2 in /usr/bin/clickhouse
8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6d645 in /usr/bin/clickhouse
9. DB::ExpressionAction::prepare(DB::Block&, DB::Settings const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b4405 in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b4fc5 in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
16. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
17. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
18. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
19. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
20. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
21. ? @ 0xf241e19 in /usr/bin/clickhouse
22. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
23. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
24. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
25. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
26. ? @ 0xeffd6c8 in /usr/bin/clickhouse
27. ? @ 0xeffe082 in /usr/bin/clickhouse
28. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
29. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
30. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
31. ? @ 0xa26f4a3 in /usr/bin/clickhouse
(version 20.6.3.28 (official build))
2020.08.11 16:47:13.585817 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 17:08:43.776148 [ 38 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 17:08:43.776498 [ 38 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 17:08:43.803347 [ 38 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:08:43.804004 [ 38 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:08:43.804609 [ 38 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:08:43.805237 [ 38 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:08:45.983107 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, 'then', 'else')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGet<DB::DataTypeNumber<int>, DB::NameDictGetInt32>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xb0b918f in /usr/bin/clickhouse
6. DB::ExecutableFunctionAdaptor::defaultImplementationForConstantArguments(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6cb96 in /usr/bin/clickhouse
7. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6cdb2 in /usr/bin/clickhouse
8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6d645 in /usr/bin/clickhouse
9. DB::ExpressionAction::prepare(DB::Block&, DB::Settings const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b4405 in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b4fc5 in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
16. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
17. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
18. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
19. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
20. ? @ 0xf241e19 in /usr/bin/clickhouse
21. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
22. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
23. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
24. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
25. ? @ 0xeffd6c8 in /usr/bin/clickhouse
26. ? @ 0xeffe082 in /usr/bin/clickhouse
27. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
28. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
29. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
30. ? @ 0xa26f4a3 in /usr/bin/clickhouse
31. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 17:08:47.014239 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, 'then', 'else')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 17:10:37.856182 [ 38 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 17:10:37.856826 [ 38 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 17:10:37.888529 [ 38 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:10:37.889772 [ 38 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:10:37.890723 [ 38 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:10:37.892067 [ 38 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:10:40.067371 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, toLowCardinality('then'), toLowCardinality('else'))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGet<DB::DataTypeNumber<int>, DB::NameDictGetInt32>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xb0b918f in /usr/bin/clickhouse
6. DB::ExecutableFunctionAdaptor::defaultImplementationForConstantArguments(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6cb96 in /usr/bin/clickhouse
7. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6cdb2 in /usr/bin/clickhouse
8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6d645 in /usr/bin/clickhouse
9. DB::ExpressionAction::prepare(DB::Block&, DB::Settings const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b4405 in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b4fc5 in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
16. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
17. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
18. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
19. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
20. ? @ 0xf241e19 in /usr/bin/clickhouse
21. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
22. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
23. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
24. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
25. ? @ 0xeffd6c8 in /usr/bin/clickhouse
26. ? @ 0xeffe082 in /usr/bin/clickhouse
27. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
28. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
29. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
30. ? @ 0xa26f4a3 in /usr/bin/clickhouse
31. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 17:10:41.087212 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, toLowCardinality('then'), toLowCardinality('else'))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 17:12:34.955257 [ 39 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 17:12:34.955799 [ 39 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 17:12:34.993219 [ 39 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:12:34.994328 [ 39 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:12:34.995199 [ 39 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:12:34.996078 [ 39 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 17:12:37.144907 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT toLowCardinality(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, toLowCardinality('then'), toLowCardinality('else')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGet<DB::DataTypeNumber<int>, DB::NameDictGetInt32>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0xb0b918f in /usr/bin/clickhouse
6. DB::ExecutableFunctionAdaptor::defaultImplementationForConstantArguments(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6cb96 in /usr/bin/clickhouse
7. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6cdb2 in /usr/bin/clickhouse
8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0xae6d645 in /usr/bin/clickhouse
9. DB::ExpressionAction::prepare(DB::Block&, DB::Settings const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b4405 in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b4fc5 in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
16. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
17. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
18. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
19. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
20. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
21. ? @ 0xf241e19 in /usr/bin/clickhouse
22. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
23. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
24. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
25. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
26. ? @ 0xeffd6c8 in /usr/bin/clickhouse
27. ? @ 0xeffe082 in /usr/bin/clickhouse
28. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
29. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
30. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
31. ? @ 0xa26f4a3 in /usr/bin/clickhouse
(version 20.6.3.28 (official build))
2020.08.11 17:12:38.202427 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT toLowCardinality(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, toLowCardinality('then'), toLowCardinality('else')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
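To make the failures above easier to compare: every restart dies while validating the same md_ad_format column default, no matter how the expression is wrapped. Schematically, the variants attempted in the ATTACH statements above were (dictGet arguments elided here; the full expressions are in the error lines):

md_ad_format LowCardinality(String) DEFAULT CAST(if(dictGetUInt8(...) > 0, replaceAll(...), ''), 'LowCardinality(String)')
md_ad_format LowCardinality(String) DEFAULT if(dictGetUInt8(...) > 0, 'then', 'else')
md_ad_format LowCardinality(String) DEFAULT if(dictGetUInt8(...) > 0, toLowCardinality('then'), toLowCardinality('else'))
md_ad_format LowCardinality(String) DEFAULT toLowCardinality(if(dictGetUInt8(...) > 0, toLowCardinality('then'), toLowCardinality('else')))

In every variant the server reports "external dictionary ... not found: default expression and column type are incompatible", apparently because the DDL-created dictionary is not yet loaded at the moment the table metadata is attached.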
[the log above is truncated; the full log file, from the initial server start, follows]
2020.08.11 12:29:03.572646 [ 39 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 12:29:03.576161 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 12:29:03.617636 [ 39 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 39
2020.08.11 12:29:03.618130 [ 39 ] {} <Information> Application: starting up
2020.08.11 12:29:03.626381 [ 39 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 12:29:03.627080 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 12:29:03.627400 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 12:29:03.627776 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 12:29:03.628371 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 12:29:03.630275 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'dd0c1068a6de' as replica host.
2020.08.11 12:29:03.632014 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 12:29:03.634132 [ 39 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 12:29:03.635778 [ 39 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 12:29:03.636355 [ 39 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 12:29:03.636877 [ 39 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 12:29:03.637780 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 12:29:03.638101 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 12:29:03.638351 [ 39 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 12:29:03.638658 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 12:29:03.644043 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 12:29:03.644419 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 12:29:03.644723 [ 39 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2020.08.11 12:29:03.647963 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 12:29:03.648407 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 12:29:03.648985 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 12:29:03.670320 [ 39 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:29:03.671434 [ 39 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:29:03.672211 [ 39 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:29:03.673014 [ 39 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:29:03.673576 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 12:29:03.673917 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 12:29:03.674216 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 12:29:03.675108 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.6.3.28 (official build))
2020.08.11 12:29:03.675507 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.6.3.28 (official build))
2020.08.11 12:29:03.675715 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 12:29:03.786180 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 12:29:03.787177 [ 58 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 12:29:03.787337 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 12:29:03.787590 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 12:29:03.787437 [ 58 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 12:29:04.039054 [ 74 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41540, User-Agent: Wget/1.20.3 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 12:29:04.074735 [ 75 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:43994
2020.08.11 12:29:04.075241 [ 75 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.6.0, revision: 54436, user: default.
2020.08.11 12:29:04.075620 [ 75 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2020.08.11 12:29:04.075852 [ 75 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, SYSTEM, dictGet, SOURCES ON *.*
2020.08.11 12:29:04.082193 [ 75 ] {4a058d12-99eb-4fbf-92c8-45360c2e0098} <Debug> executeQuery: (from 127.0.0.1:43994) CREATE DICTIONARY IF NOT EXISTS default.dict ( key String, value UInt32 ) PRIMARY KEY "key" LAYOUT(COMPLEX_KEY_HASHED()) SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated')) LIFETIME(MIN 300 MAX 600);
2020.08.11 12:29:04.082770 [ 75 ] {4a058d12-99eb-4fbf-92c8-45360c2e0098} <Trace> ContextAccess (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 12:29:04.084664 [ 75 ] {4a058d12-99eb-4fbf-92c8-45360c2e0098} <Trace> ExternalDictionariesLoader: Loading config file '/var/lib/clickhouse/metadata/default/dict.sql.tmp'.
2020.08.11 12:29:04.085274 [ 75 ] {4a058d12-99eb-4fbf-92c8-45360c2e0098} <Trace> ExternalDictionariesLoader: Loading config file 'default.dict'.
2020.08.11 12:29:04.085945 [ 75 ] {4a058d12-99eb-4fbf-92c8-45360c2e0098} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 12:29:04.086550 [ 75 ] {} <Information> TCPHandler: Processed in 0.004642908 sec.
2020.08.11 12:29:04.088785 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Debug> executeQuery: (from 127.0.0.1:43994) CREATE TABLE IF NOT EXISTS default.table ( site_id UInt32, stamp LowCardinality(Nullable(String)), md_ad_format LowCardinality(String) DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll( decodeURLComponent(arrayElement( extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]'), dictGet('default.dict', 'value',tuple( stamp )) )), '+', ' '), '') ) ENGINE MergeTree() ORDER BY tuple();
2020.08.11 12:29:04.089352 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Trace> ContextAccess (default): Access granted: CREATE TABLE ON default.table
2020.08.11 12:29:04.089933 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Trace> ExternalDictionariesLoader: Will load the object 'default.dict' in background, force = false, loading_id = 1
2020.08.11 12:29:04.090427 [ 88 ] {} <Trace> ExternalDictionariesLoader: Start loading object 'default.dict'
2020.08.11 12:29:04.090865 [ 88 ] {} <Trace> DictionaryFactory: Created dictionary source 'File: /var/lib/clickhouse/user_files/dict.txt TabSeparated' for dictionary 'default.dict'
2020.08.11 12:29:04.091246 [ 88 ] {} <Trace> FileDictionary: loadAll File: /var/lib/clickhouse/user_files/dict.txt TabSeparated
2020.08.11 12:29:04.093877 [ 88 ] {} <Trace> ExternalDictionariesLoader: Supposed update time for 'default.dict' is 2020-08-11 12:35:12 (loaded, lifetime [300, 600], no errors)
2020.08.11 12:29:04.094196 [ 88 ] {} <Trace> ExternalDictionariesLoader: Next update time for 'default.dict' was set to 2020-08-11 12:35:12
2020.08.11 12:29:04.094504 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Trace> ContextAccess (default): Access granted: dictGet ON default.dict
2020.08.11 12:29:04.095877 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Trace> ContextAccess (default): Access granted: dictGet ON default.dict
2020.08.11 12:29:04.098083 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 12:29:04.100011 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Debug> default.table: Loading data parts
2020.08.11 12:29:04.101159 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 12:29:04.112929 [ 75 ] {ea7359b9-c1ae-490a-855c-b40979b5ddc1} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 12:29:04.113690 [ 75 ] {} <Information> TCPHandler: Processed in 0.025658087 sec.
2020.08.11 12:29:04.114189 [ 75 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 12:29:04.125855 [ 48 ] {} <Trace> BaseDaemon: Received signal 15
2020.08.11 12:29:04.126246 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 12:29:04.126592 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 12:29:04.126798 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 12:29:04.799301 [ 39 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 12:29:04.799629 [ 39 ] {} <Information> Application: Closed connections.
2020.08.11 12:29:04.801218 [ 39 ] {} <Information> Application: Shutting down storages.
2020.08.11 12:29:04.801683 [ 51 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 4 entries to flush
2020.08.11 12:29:04.801980 [ 51 ] {} <Debug> SystemLog (system.query_log): Creating new table system.query_log for QueryLog
2020.08.11 12:29:04.804647 [ 51 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 12:29:04.805185 [ 51 ] {} <Debug> system.query_log: Loaded data parts (0 items)
2020.08.11 12:29:04.808097 [ 51 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:29:04.809499 [ 51 ] {} <Trace> system.query_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 12:29:04.809925 [ 51 ] {} <Trace> SystemLog (system.query_log): Flushed system log
2020.08.11 12:29:04.810090 [ 51 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 12:29:04.810568 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 2 entries to flush
2020.08.11 12:29:04.810754 [ 53 ] {} <Debug> SystemLog (system.query_thread_log): Creating new table system.query_thread_log for QueryThreadLog
2020.08.11 12:29:04.812731 [ 53 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 12:29:04.813640 [ 53 ] {} <Debug> system.query_thread_log: Loaded data parts (0 items)
2020.08.11 12:29:04.817119 [ 53 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:29:04.818152 [ 53 ] {} <Trace> system.query_thread_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 12:29:04.818966 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log
2020.08.11 12:29:04.819191 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 12:29:04.819741 [ 55 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 12:29:04.819974 [ 55 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
2020.08.11 12:29:04.821464 [ 55 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 12:29:04.822095 [ 55 ] {} <Debug> system.trace_log: Loaded data parts (0 items)
2020.08.11 12:29:04.824407 [ 55 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:29:04.825288 [ 55 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 12:29:04.825647 [ 55 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 12:29:04.825907 [ 55 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 12:29:05.646057 [ 52 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 2 entries to flush
2020.08.11 12:29:05.646784 [ 52 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 12:29:05.658977 [ 52 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 12:29:05.660090 [ 52 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 12:29:05.670755 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:29:05.674246 [ 52 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 12:29:05.675613 [ 52 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 12:29:05.675952 [ 52 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 12:29:05.676496 [ 50 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 12:29:05.676917 [ 39 ] {} <Trace> ExternalDictionariesLoader: Unloading 'default.dict' because its configuration has been removed or detached
2020.08.11 12:29:05.679821 [ 39 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2020.08.11 12:29:05.680836 [ 39 ] {} <Debug> Application: Shut down storages.
2020.08.11 12:29:05.682184 [ 39 ] {} <Debug> Application: Destroyed global context.
2020.08.11 12:29:05.682978 [ 39 ] {} <Information> Application: shutting down
2020.08.11 12:29:05.683408 [ 39 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 12:29:05.683773 [ 48 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 12:29:05.684021 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 12:29:05.734807 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 12:29:05.738213 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 12:29:05.782933 [ 1 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 1
2020.08.11 12:29:05.783433 [ 1 ] {} <Information> Application: starting up
2020.08.11 12:29:05.792333 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 12:29:05.792900 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 12:29:05.793213 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 12:29:05.793454 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 12:29:05.793700 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 12:29:05.794394 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'dd0c1068a6de' as replica host.
2020.08.11 12:29:05.795942 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 12:29:05.798145 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 12:29:05.799353 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 12:29:05.799792 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 12:29:05.800086 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 12:29:05.800431 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 12:29:05.800685 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 12:29:05.815171 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 4 tables and 0 dictionaries.
2020.08.11 12:29:05.817046 [ 118 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 12:29:05.819709 [ 118 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 12:29:05.819808 [ 117 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 12:29:05.819847 [ 115 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 12:29:05.823507 [ 116 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 12:29:05.823702 [ 115 ] {} <Debug> system.query_thread_log: Loaded data parts (1 items)
2020.08.11 12:29:05.823721 [ 118 ] {} <Debug> system.trace_log: Loaded data parts (1 items)
2020.08.11 12:29:05.824654 [ 117 ] {} <Debug> system.query_log: Loaded data parts (1 items)
2020.08.11 12:29:05.827160 [ 116 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 12:29:05.836432 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 12:29:05.839845 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 12:29:05.843468 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]), '+', ' '), '')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
16. ? @ 0xf241e19 in /usr/bin/clickhouse
17. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
18. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
19. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
20. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
21. ? @ 0xeffd6c8 in /usr/bin/clickhouse
22. ? @ 0xeffe082 in /usr/bin/clickhouse
23. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
24. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
25. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
26. ? @ 0xa26f4a3 in /usr/bin/clickhouse
27. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
28. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 12:29:05.844681 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 12:29:05.845411 [ 136 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 12:29:05.846184 [ 126 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 12:29:05.847493 [ 139 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 12:29:06.840290 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 1 entries to flush
2020.08.11 12:29:06.841378 [ 137 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 12:29:06.845840 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:29:06.849079 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 12:29:06.850617 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 12:29:06.850964 [ 137 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 12:29:06.851427 [ 145 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 12:29:06.854813 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 12:29:06.856859 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 12:29:06.857716 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]), '+', ' '), '')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 12:29:06.858305 [ 1 ] {} <Information> Application: shutting down
2020.08.11 12:29:06.858555 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 12:29:06.858946 [ 113 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 12:29:06.859499 [ 113 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 12:36:20.529880 [ 39 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 12:36:20.533497 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 12:36:20.577737 [ 39 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 39
2020.08.11 12:36:20.578211 [ 39 ] {} <Information> Application: starting up
2020.08.11 12:36:20.587003 [ 39 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 12:36:20.587516 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 12:36:20.587801 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 12:36:20.588015 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 12:36:20.588294 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 12:36:20.589109 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '28063e300aaf' as replica host.
2020.08.11 12:36:20.590350 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 12:36:20.592885 [ 39 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 12:36:20.594234 [ 39 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 12:36:20.594743 [ 39 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 12:36:20.595033 [ 39 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 12:36:20.596092 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 12:36:20.596471 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 12:36:20.596838 [ 39 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 12:36:20.597054 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 12:36:20.602985 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 12:36:20.603280 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 12:36:20.603677 [ 39 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2020.08.11 12:36:20.607056 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 12:36:20.607357 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 12:36:20.608395 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 12:36:20.617097 [ 39 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:36:20.618066 [ 39 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:36:20.618894 [ 39 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:36:20.619809 [ 39 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 12:36:20.620320 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 12:36:20.620659 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 12:36:20.621345 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 12:36:20.622522 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.6.3.28 (official build))
2020.08.11 12:36:20.622944 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.6.3.28 (official build))
2020.08.11 12:36:20.623159 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 12:36:20.704903 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 12:36:20.705878 [ 56 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 12:36:20.706243 [ 56 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 12:36:20.706534 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 12:36:20.706815 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 12:36:21.512212 [ 74 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41546, User-Agent: Wget/1.20.3 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 12:36:21.545081 [ 75 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44000
2020.08.11 12:36:21.545549 [ 75 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.6.0, revision: 54436, user: default.
2020.08.11 12:36:21.545985 [ 75 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2020.08.11 12:36:21.546265 [ 75 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, SYSTEM, dictGet, SOURCES ON *.*
2020.08.11 12:36:21.551442 [ 75 ] {1c83307b-cc21-44dd-8727-c9ecd55316b5} <Debug> executeQuery: (from 127.0.0.1:44000) CREATE DICTIONARY IF NOT EXISTS default.dict ( key String, value UInt32 ) PRIMARY KEY "key" LAYOUT(COMPLEX_KEY_HASHED()) SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated')) LIFETIME(MIN 300 MAX 600);
2020.08.11 12:36:21.551849 [ 75 ] {1c83307b-cc21-44dd-8727-c9ecd55316b5} <Trace> ContextAccess (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 12:36:21.553876 [ 75 ] {1c83307b-cc21-44dd-8727-c9ecd55316b5} <Trace> ExternalDictionariesLoader: Loading config file '/var/lib/clickhouse/metadata/default/dict.sql.tmp'.
2020.08.11 12:36:21.554605 [ 75 ] {1c83307b-cc21-44dd-8727-c9ecd55316b5} <Trace> ExternalDictionariesLoader: Loading config file 'default.dict'.
2020.08.11 12:36:21.555308 [ 75 ] {1c83307b-cc21-44dd-8727-c9ecd55316b5} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 12:36:21.555754 [ 75 ] {} <Information> TCPHandler: Processed in 0.004643622 sec.
2020.08.11 12:36:21.558932 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Debug> executeQuery: (from 127.0.0.1:44000) CREATE TABLE IF NOT EXISTS default.table ( site_id UInt32, stamp LowCardinality(Nullable(String)), md_ad_format String DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll( decodeURLComponent(arrayElement( extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]'), dictGet('default.dict', 'value',tuple( stamp )) )), '+', ' '), '') ) ENGINE MergeTree() ORDER BY tuple();
2020.08.11 12:36:21.559558 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Trace> ContextAccess (default): Access granted: CREATE TABLE ON default.table
2020.08.11 12:36:21.560305 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Trace> ExternalDictionariesLoader: Will load the object 'default.dict' in background, force = false, loading_id = 1
2020.08.11 12:36:21.560885 [ 88 ] {} <Trace> ExternalDictionariesLoader: Start loading object 'default.dict'
2020.08.11 12:36:21.561396 [ 88 ] {} <Trace> DictionaryFactory: Created dictionary source 'File: /var/lib/clickhouse/user_files/dict.txt TabSeparated' for dictionary 'default.dict'
2020.08.11 12:36:21.561680 [ 88 ] {} <Trace> FileDictionary: loadAll File: /var/lib/clickhouse/user_files/dict.txt TabSeparated
2020.08.11 12:36:21.564261 [ 88 ] {} <Trace> ExternalDictionariesLoader: Supposed update time for 'default.dict' is 2020-08-11 12:43:18 (loaded, lifetime [300, 600], no errors)
2020.08.11 12:36:21.564564 [ 88 ] {} <Trace> ExternalDictionariesLoader: Next update time for 'default.dict' was set to 2020-08-11 12:43:18
2020.08.11 12:36:21.564939 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Trace> ContextAccess (default): Access granted: dictGet ON default.dict
2020.08.11 12:36:21.565629 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Trace> ContextAccess (default): Access granted: dictGet ON default.dict
2020.08.11 12:36:21.567085 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 12:36:21.569612 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Debug> default.table: Loading data parts
2020.08.11 12:36:21.570887 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 12:36:21.583695 [ 75 ] {3e1b47a8-9ab2-418b-8a40-c4f89536fe5b} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 12:36:21.584175 [ 75 ] {} <Information> TCPHandler: Processed in 0.02704312 sec.
2020.08.11 12:36:21.584645 [ 75 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 12:36:21.589102 [ 48 ] {} <Trace> BaseDaemon: Received signal 15
2020.08.11 12:36:21.589452 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 12:36:21.589663 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 12:36:21.589990 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 12:36:21.966582 [ 39 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 12:36:21.966861 [ 39 ] {} <Information> Application: Closed connections.
2020.08.11 12:36:21.968628 [ 39 ] {} <Information> Application: Shutting down storages.
2020.08.11 12:36:21.969493 [ 50 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 4 entries to flush
2020.08.11 12:36:21.969867 [ 50 ] {} <Debug> SystemLog (system.query_log): Creating new table system.query_log for QueryLog
2020.08.11 12:36:21.972464 [ 50 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 12:36:21.973183 [ 50 ] {} <Debug> system.query_log: Loaded data parts (0 items)
2020.08.11 12:36:21.976667 [ 50 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:36:21.978002 [ 50 ] {} <Trace> system.query_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 12:36:21.978438 [ 50 ] {} <Trace> SystemLog (system.query_log): Flushed system log
2020.08.11 12:36:21.978745 [ 50 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 12:36:21.979376 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 2 entries to flush
2020.08.11 12:36:21.979633 [ 53 ] {} <Debug> SystemLog (system.query_thread_log): Creating new table system.query_thread_log for QueryThreadLog
2020.08.11 12:36:21.982126 [ 53 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 12:36:21.982661 [ 53 ] {} <Debug> system.query_thread_log: Loaded data parts (0 items)
2020.08.11 12:36:21.985974 [ 53 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:36:21.987231 [ 53 ] {} <Trace> system.query_thread_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 12:36:21.987682 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log
2020.08.11 12:36:21.987885 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 12:36:21.988323 [ 52 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 12:36:21.988598 [ 52 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
2020.08.11 12:36:21.989809 [ 52 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 12:36:21.990536 [ 52 ] {} <Debug> system.trace_log: Loaded data parts (0 items)
2020.08.11 12:36:21.994331 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:36:21.995637 [ 52 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 12:36:21.996003 [ 52 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 12:36:21.996223 [ 52 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 12:36:22.602066 [ 51 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 2 entries to flush
2020.08.11 12:36:22.602475 [ 51 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 12:36:22.607813 [ 51 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 12:36:22.608480 [ 51 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 12:36:22.615007 [ 51 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:36:22.617849 [ 51 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 12:36:22.618863 [ 51 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 12:36:22.619135 [ 51 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 12:36:22.619411 [ 55 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 12:36:22.619714 [ 39 ] {} <Trace> ExternalDictionariesLoader: Unloading 'default.dict' because its configuration has been removed or detached
2020.08.11 12:36:22.622639 [ 39 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2020.08.11 12:36:22.623229 [ 39 ] {} <Debug> Application: Shut down storages.
2020.08.11 12:36:22.626417 [ 39 ] {} <Debug> Application: Destroyed global context.
2020.08.11 12:36:22.627395 [ 39 ] {} <Information> Application: shutting down
2020.08.11 12:36:22.627616 [ 39 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 12:36:22.627804 [ 48 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 12:36:22.627981 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
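
For reference, the DDL executed in the 12:36 run above, reassembled from the `executeQuery` lines in the log (only reformatted for readability, the statements themselves are unchanged). Both statements succeed at this point; the failure only appears when the server restarts and replays the table metadata.

```sql
-- Dictionary and table created in the 12:36 run, as logged by executeQuery above.
CREATE DICTIONARY IF NOT EXISTS default.dict
(
    key String,
    value UInt32
)
PRIMARY KEY "key"
LAYOUT(COMPLEX_KEY_HASHED())
SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated'))
LIFETIME(MIN 300 MAX 600);

CREATE TABLE IF NOT EXISTS default.table
(
    site_id UInt32,
    stamp LowCardinality(Nullable(String)),
    md_ad_format String DEFAULT if(
        dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0,
        replaceAll(
            decodeURLComponent(arrayElement(
                extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]'),
                dictGet('default.dict', 'value', tuple(stamp))
            )),
            '+', ' '),
        '')
)
ENGINE = MergeTree()
ORDER BY tuple();
```
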
2020.08.11 12:36:22.736800 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 12:36:22.743268 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 12:36:22.801065 [ 1 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 1
2020.08.11 12:36:22.801452 [ 1 ] {} <Information> Application: starting up
2020.08.11 12:36:22.814280 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 12:36:22.814693 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 12:36:22.814898 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 12:36:22.815066 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 12:36:22.815266 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 12:36:22.815886 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '28063e300aaf' as replica host.
2020.08.11 12:36:22.817811 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 12:36:22.820155 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 12:36:22.821864 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 12:36:22.822400 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 12:36:22.822686 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 12:36:22.822944 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 12:36:22.823220 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 12:36:22.829801 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 4 tables and 0 dictionaries.
2020.08.11 12:36:22.832060 [ 115 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 12:36:22.835236 [ 115 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 12:36:22.835267 [ 116 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 12:36:22.835378 [ 118 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 12:36:22.837919 [ 117 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 12:36:22.839782 [ 118 ] {} <Debug> system.query_thread_log: Loaded data parts (1 items)
2020.08.11 12:36:22.842356 [ 115 ] {} <Debug> system.trace_log: Loaded data parts (1 items)
2020.08.11 12:36:22.846822 [ 117 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 12:36:22.846926 [ 116 ] {} <Debug> system.query_log: Loaded data parts (1 items)
2020.08.11 12:36:22.857588 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 12:36:22.863418 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 12:36:22.867770 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]), '+', ' '), '')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
16. ? @ 0xf241e19 in /usr/bin/clickhouse
17. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
18. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
19. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
20. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
21. ? @ 0xeffd6c8 in /usr/bin/clickhouse
22. ? @ 0xeffe082 in /usr/bin/clickhouse
23. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
24. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
25. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
26. ? @ 0xa26f4a3 in /usr/bin/clickhouse
27. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
28. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 12:36:22.868797 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 12:36:22.869346 [ 140 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 12:36:22.870449 [ 137 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 12:36:22.870997 [ 135 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 12:36:23.865102 [ 145 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 1 entries to flush
2020.08.11 12:36:23.866245 [ 145 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 12:36:23.870116 [ 145 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 12:36:23.873550 [ 145 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 12:36:23.874953 [ 145 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 12:36:23.875462 [ 145 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 12:36:23.876087 [ 124 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 12:36:23.878863 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 12:36:23.880550 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 12:36:23.881242 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT if(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]), '+', ' '), '')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 12:36:23.881689 [ 1 ] {} <Information> Application: shutting down
2020.08.11 12:36:23.881873 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 12:36:23.882637 [ 113 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 12:36:23.882961 [ 113 ] {} <Information> BaseDaemon: Stop SignalListener thread
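
The restart above fails while replaying `/var/lib/clickhouse/metadata/default/table.sql`. The statement below is copied from the "Cannot attach table" error message and only reformatted; the stack trace (validateColumnsDefaultsAndGetSampleBlock → FunctionDictHelper::getDictionary → ExternalLoader::load) suggests that validating the DEFAULT expression asks for `default.dict` before the dictionary defined in the same database has been registered with the loader.

```sql
-- Reformatted from the error above: what the server replays from metadata at startup,
-- and where validation of the DEFAULT expression fails with "external dictionary ... not found".
ATTACH TABLE table
(
    `site_id` UInt32,
    `stamp` LowCardinality(Nullable(String)),
    `md_ad_format` String DEFAULT if(
        dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) > 0,
        replaceAll(
            decodeURLComponent(
                extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGet('default.dict', 'value', tuple(stamp))]
            ),
            '+', ' '),
        '')
)
ENGINE = MergeTree()
ORDER BY tuple()
SETTINGS index_granularity = 8192
```
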
2020.08.11 13:09:44.824649 [ 38 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 13:09:44.828312 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:09:44.871539 [ 38 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 38
2020.08.11 13:09:44.871862 [ 38 ] {} <Information> Application: starting up
2020.08.11 13:09:44.880714 [ 38 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 13:09:44.881167 [ 38 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 13:09:44.881337 [ 38 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 13:09:44.881552 [ 38 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 13:09:44.881922 [ 38 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 13:09:44.883116 [ 38 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'c5897444846e' as replica host.
2020.08.11 13:09:44.884908 [ 38 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 13:09:44.886996 [ 38 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 13:09:44.888136 [ 38 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 13:09:44.888484 [ 38 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 13:09:44.888804 [ 38 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 13:09:44.889725 [ 38 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:09:44.889925 [ 38 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:09:44.890183 [ 38 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 13:09:44.890363 [ 38 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 13:09:44.895749 [ 38 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 13:09:44.895985 [ 38 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 13:09:44.896315 [ 38 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2020.08.11 13:09:44.898689 [ 38 ] {} <Debug> Application: Loaded metadata.
2020.08.11 13:09:44.898976 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:09:44.899478 [ 38 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 13:09:44.913830 [ 38 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:09:44.914553 [ 38 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:09:44.915319 [ 38 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:09:44.916463 [ 38 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:09:44.917733 [ 38 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 13:09:44.917994 [ 38 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 13:09:44.918412 [ 38 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 13:09:44.919432 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.6.3.28 (official build))
2020.08.11 13:09:44.919854 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.6.3.28 (official build))
2020.08.11 13:09:44.920037 [ 38 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 13:09:45.041437 [ 38 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 13:09:45.042540 [ 56 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 13:09:45.042742 [ 56 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 13:09:45.042895 [ 38 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 13:09:45.043050 [ 38 ] {} <Information> Application: Ready for connections.
2020.08.11 13:09:45.806753 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41552, User-Agent: Wget/1.20.3 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 13:09:45.827976 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44006
2020.08.11 13:09:45.828418 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.6.0, revision: 54436, user: default.
2020.08.11 13:09:45.829170 [ 74 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2020.08.11 13:09:45.829437 [ 74 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, SYSTEM, dictGet, SOURCES ON *.*
2020.08.11 13:09:45.835899 [ 74 ] {f08c59e7-cbe0-4d08-bd07-d8b4c5ad9639} <Debug> executeQuery: (from 127.0.0.1:44006) CREATE DICTIONARY IF NOT EXISTS default.dict ( key String, value UInt32 ) PRIMARY KEY "key" LAYOUT(COMPLEX_KEY_HASHED()) SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated')) LIFETIME(MIN 300 MAX 600);
2020.08.11 13:09:45.836738 [ 74 ] {f08c59e7-cbe0-4d08-bd07-d8b4c5ad9639} <Trace> ContextAccess (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 13:09:45.838805 [ 74 ] {f08c59e7-cbe0-4d08-bd07-d8b4c5ad9639} <Trace> ExternalDictionariesLoader: Loading config file '/var/lib/clickhouse/metadata/default/dict.sql.tmp'.
2020.08.11 13:09:45.839377 [ 74 ] {f08c59e7-cbe0-4d08-bd07-d8b4c5ad9639} <Trace> ExternalDictionariesLoader: Loading config file 'default.dict'.
2020.08.11 13:09:45.840070 [ 74 ] {f08c59e7-cbe0-4d08-bd07-d8b4c5ad9639} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 13:09:45.840498 [ 74 ] {} <Information> TCPHandler: Processed in 0.004878648 sec.
2020.08.11 13:09:45.842067 [ 74 ] {38469df2-5409-438d-8d86-7c1f3f530b76} <Debug> executeQuery: (from 127.0.0.1:44006) CREATE TABLE IF NOT EXISTS default.table ( site_id UInt32, stamp LowCardinality(Nullable(String)), md_ad_format String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) ) ENGINE MergeTree() ORDER BY tuple();
2020.08.11 13:09:45.842830 [ 74 ] {38469df2-5409-438d-8d86-7c1f3f530b76} <Trace> ContextAccess (default): Access granted: CREATE TABLE ON default.table
2020.08.11 13:09:45.843456 [ 74 ] {38469df2-5409-438d-8d86-7c1f3f530b76} <Trace> ExternalDictionariesLoader: Will load the object 'default.dict' in background, force = false, loading_id = 1
2020.08.11 13:09:45.843995 [ 87 ] {} <Trace> ExternalDictionariesLoader: Start loading object 'default.dict'
2020.08.11 13:09:45.844423 [ 87 ] {} <Trace> DictionaryFactory: Created dictionary source 'File: /var/lib/clickhouse/user_files/dict.txt TabSeparated' for dictionary 'default.dict'
2020.08.11 13:09:45.844702 [ 87 ] {} <Trace> FileDictionary: loadAll File: /var/lib/clickhouse/user_files/dict.txt TabSeparated
2020.08.11 13:09:45.846929 [ 87 ] {} <Trace> ExternalDictionariesLoader: Supposed update time for 'default.dict' is 2020-08-11 13:17:25 (loaded, lifetime [300, 600], no errors)
2020.08.11 13:09:45.847197 [ 87 ] {} <Trace> ExternalDictionariesLoader: Next update time for 'default.dict' was set to 2020-08-11 13:17:25
2020.08.11 13:09:45.847701 [ 74 ] {38469df2-5409-438d-8d86-7c1f3f530b76} <Trace> ContextAccess (default): Access granted: dictGet ON default.dict
2020.08.11 13:09:45.849068 [ 74 ] {38469df2-5409-438d-8d86-7c1f3f530b76} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 13:09:45.851825 [ 74 ] {38469df2-5409-438d-8d86-7c1f3f530b76} <Debug> default.table: Loading data parts
2020.08.11 13:09:45.853323 [ 74 ] {38469df2-5409-438d-8d86-7c1f3f530b76} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 13:09:45.869731 [ 74 ] {38469df2-5409-438d-8d86-7c1f3f530b76} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 13:09:45.870280 [ 74 ] {} <Information> TCPHandler: Processed in 0.028626086 sec.
2020.08.11 13:09:45.870591 [ 74 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 13:09:45.875067 [ 47 ] {} <Trace> BaseDaemon: Received signal 15
2020.08.11 13:09:45.875310 [ 47 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 13:09:45.875675 [ 38 ] {} <Debug> Application: Received termination signal.
2020.08.11 13:09:45.876249 [ 38 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 13:09:46.559768 [ 38 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 13:09:46.560101 [ 38 ] {} <Information> Application: Closed connections.
2020.08.11 13:09:46.561842 [ 38 ] {} <Information> Application: Shutting down storages.
2020.08.11 13:09:46.562360 [ 51 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 4 entries to flush
2020.08.11 13:09:46.562823 [ 51 ] {} <Debug> SystemLog (system.query_log): Creating new table system.query_log for QueryLog
2020.08.11 13:09:46.565497 [ 51 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 13:09:46.566534 [ 51 ] {} <Debug> system.query_log: Loaded data parts (0 items)
2020.08.11 13:09:46.570713 [ 51 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:09:46.571938 [ 51 ] {} <Trace> system.query_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:09:46.572422 [ 51 ] {} <Trace> SystemLog (system.query_log): Flushed system log
2020.08.11 13:09:46.572754 [ 51 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 13:09:46.573370 [ 49 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 2 entries to flush
2020.08.11 13:09:46.573705 [ 49 ] {} <Debug> SystemLog (system.query_thread_log): Creating new table system.query_thread_log for QueryThreadLog
2020.08.11 13:09:46.576023 [ 49 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 13:09:46.576747 [ 49 ] {} <Debug> system.query_thread_log: Loaded data parts (0 items)
2020.08.11 13:09:46.580476 [ 49 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:09:46.582270 [ 49 ] {} <Trace> system.query_thread_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:09:46.582789 [ 49 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log
2020.08.11 13:09:46.583074 [ 49 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 13:09:46.583537 [ 52 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 13:09:46.583934 [ 52 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
2020.08.11 13:09:46.586400 [ 52 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 13:09:46.587010 [ 52 ] {} <Debug> system.trace_log: Loaded data parts (0 items)
2020.08.11 13:09:46.590174 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:09:46.591314 [ 52 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:09:46.591692 [ 52 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 13:09:46.592166 [ 52 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 13:09:46.901576 [ 50 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 2 entries to flush
2020.08.11 13:09:46.902502 [ 50 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 13:09:46.914228 [ 50 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 13:09:46.917377 [ 50 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 13:09:46.929674 [ 50 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:09:46.932625 [ 50 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:09:46.934241 [ 50 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 13:09:46.934752 [ 50 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 13:09:46.935562 [ 53 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 13:09:46.936075 [ 38 ] {} <Trace> ExternalDictionariesLoader: Unloading 'default.dict' because its configuration has been removed or detached
2020.08.11 13:09:46.939404 [ 38 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2020.08.11 13:09:46.940204 [ 38 ] {} <Debug> Application: Shut down storages.
2020.08.11 13:09:46.941194 [ 38 ] {} <Debug> Application: Destroyed global context.
2020.08.11 13:09:46.941754 [ 38 ] {} <Information> Application: shutting down
2020.08.11 13:09:46.942248 [ 38 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 13:09:46.942628 [ 47 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 13:09:46.943005 [ 47 ] {} <Information> BaseDaemon: Stop SignalListener thread
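
The 13:09 run above repeats the experiment with the DEFAULT expression stripped down to a bare dictGet call (the dictionary DDL is identical to the earlier run). Reassembled from the `executeQuery` line, the table becomes:

```sql
-- Simplified variant from the 13:09 run: only dictGet in the DEFAULT, no if/replaceAll wrapping.
CREATE TABLE IF NOT EXISTS default.table
(
    site_id UInt32,
    stamp LowCardinality(Nullable(String)),
    md_ad_format String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))
)
ENGINE = MergeTree()
ORDER BY tuple();
```

As the next restart shows, this minimal form fails in exactly the same way, so the complexity of the original expression is not the trigger.
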
2020.08.11 13:09:47.031725 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 13:09:47.036236 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:09:47.080711 [ 1 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 1
2020.08.11 13:09:47.081188 [ 1 ] {} <Information> Application: starting up
2020.08.11 13:09:47.089758 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 13:09:47.090266 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 13:09:47.090571 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 13:09:47.090960 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 13:09:47.091340 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 13:09:47.092304 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'c5897444846e' as replica host.
2020.08.11 13:09:47.093500 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 13:09:47.095365 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 13:09:47.096464 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 13:09:47.096972 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:09:47.097342 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:09:47.097690 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 13:09:47.097940 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 13:09:47.102864 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 4 tables and 0 dictionaries.
2020.08.11 13:09:47.104577 [ 114 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 13:09:47.107370 [ 114 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 13:09:47.107400 [ 115 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 13:09:47.107456 [ 116 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 13:09:47.110604 [ 116 ] {} <Debug> system.trace_log: Loaded data parts (1 items)
2020.08.11 13:09:47.122393 [ 117 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 13:09:47.122616 [ 114 ] {} <Debug> system.query_thread_log: Loaded data parts (1 items)
2020.08.11 13:09:47.123637 [ 115 ] {} <Debug> system.query_log: Loaded data parts (1 items)
2020.08.11 13:09:47.124688 [ 117 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 13:09:47.125511 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 13:09:47.129240 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 13:09:47.132105 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. ? @ 0xf241e19 in /usr/bin/clickhouse
15. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
16. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
17. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
18. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
19. ? @ 0xeffd6c8 in /usr/bin/clickhouse
20. ? @ 0xeffe082 in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
24. ? @ 0xa26f4a3 in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 13:09:47.133513 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 13:09:47.133906 [ 138 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 13:09:47.135020 [ 135 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 13:09:47.135634 [ 134 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 13:09:48.130878 [ 139 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 1 entries to flush
2020.08.11 13:09:48.133005 [ 139 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 13:09:48.141301 [ 139 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:09:48.152797 [ 139 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 13:09:48.154377 [ 139 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 13:09:48.154747 [ 139 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 13:09:48.155436 [ 140 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 13:09:48.158426 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 13:09:48.159663 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 13:09:48.160468 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 13:09:48.161202 [ 1 ] {} <Information> Application: shutting down
2020.08.11 13:09:48.161425 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 13:09:48.161941 [ 112 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 13:09:48.162542 [ 112 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 13:10:03.587400 [ 38 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 13:10:03.593178 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:10:03.666674 [ 38 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 38
2020.08.11 13:10:03.667212 [ 38 ] {} <Information> Application: starting up
2020.08.11 13:10:03.681128 [ 38 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 13:10:03.681581 [ 38 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 13:10:03.681934 [ 38 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 13:10:03.682316 [ 38 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 13:10:03.682581 [ 38 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 13:10:03.684047 [ 38 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '70c2fbcc218d' as replica host.
2020.08.11 13:10:03.685720 [ 38 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 13:10:03.688515 [ 38 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 13:10:03.691043 [ 38 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 13:10:03.691691 [ 38 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 13:10:03.692378 [ 38 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 13:10:03.693621 [ 38 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:10:03.693969 [ 38 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:10:03.694445 [ 38 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 13:10:03.694741 [ 38 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 13:10:03.702221 [ 38 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 13:10:03.702588 [ 38 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 13:10:03.702996 [ 38 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2020.08.11 13:10:03.707697 [ 38 ] {} <Debug> Application: Loaded metadata.
2020.08.11 13:10:03.708190 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:10:03.708730 [ 38 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 13:10:03.717050 [ 38 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:10:03.717963 [ 38 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:10:03.719207 [ 38 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:10:03.720454 [ 38 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:10:03.721146 [ 38 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 13:10:03.721860 [ 38 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 13:10:03.723289 [ 38 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 13:10:03.725251 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.6.3.28 (official build))
2020.08.11 13:10:03.725932 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.6.3.28 (official build))
2020.08.11 13:10:03.726577 [ 38 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 13:10:03.861714 [ 38 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 13:10:03.863271 [ 57 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 13:10:03.863602 [ 57 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 13:10:03.863677 [ 38 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 13:10:03.864296 [ 38 ] {} <Information> Application: Ready for connections.
2020.08.11 13:10:04.561165 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41558, User-Agent: Wget/1.20.3 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 13:10:04.596932 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44012
2020.08.11 13:10:04.597542 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.6.0, revision: 54436, user: default.
2020.08.11 13:10:04.598461 [ 74 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2020.08.11 13:10:04.598826 [ 74 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, SYSTEM, dictGet, SOURCES ON *.*
2020.08.11 13:10:04.607445 [ 74 ] {bd7f11a4-2dc8-4c7d-9297-bcd7be183e08} <Debug> executeQuery: (from 127.0.0.1:44012) CREATE DICTIONARY IF NOT EXISTS default.dict ( key String, value UInt32 ) PRIMARY KEY "key" LAYOUT(COMPLEX_KEY_HASHED()) SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated')) LIFETIME(MIN 300 MAX 600);
2020.08.11 13:10:04.608118 [ 74 ] {bd7f11a4-2dc8-4c7d-9297-bcd7be183e08} <Trace> ContextAccess (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 13:10:04.610544 [ 74 ] {bd7f11a4-2dc8-4c7d-9297-bcd7be183e08} <Trace> ExternalDictionariesLoader: Loading config file '/var/lib/clickhouse/metadata/default/dict.sql.tmp'.
2020.08.11 13:10:04.611265 [ 74 ] {bd7f11a4-2dc8-4c7d-9297-bcd7be183e08} <Trace> ExternalDictionariesLoader: Loading config file 'default.dict'.
2020.08.11 13:10:04.612328 [ 74 ] {bd7f11a4-2dc8-4c7d-9297-bcd7be183e08} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 13:10:04.612839 [ 74 ] {} <Information> TCPHandler: Processed in 0.005873592 sec.
2020.08.11 13:10:04.615061 [ 74 ] {e4325a36-cb4a-46ec-9da2-bf7afd74ff87} <Debug> executeQuery: (from 127.0.0.1:44012) CREATE TABLE IF NOT EXISTS default.table ( site_id UInt32, stamp LowCardinality(Nullable(String)), md_ad_format String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) ) ENGINE MergeTree() ORDER BY tuple();
2020.08.11 13:10:04.615478 [ 74 ] {e4325a36-cb4a-46ec-9da2-bf7afd74ff87} <Trace> ContextAccess (default): Access granted: CREATE TABLE ON default.table
2020.08.11 13:10:04.616156 [ 74 ] {e4325a36-cb4a-46ec-9da2-bf7afd74ff87} <Trace> ExternalDictionariesLoader: Will load the object 'default.dict' in background, force = false, loading_id = 1
2020.08.11 13:10:04.616661 [ 87 ] {} <Trace> ExternalDictionariesLoader: Start loading object 'default.dict'
2020.08.11 13:10:04.617397 [ 87 ] {} <Trace> DictionaryFactory: Created dictionary source 'File: /var/lib/clickhouse/user_files/dict.txt TabSeparated' for dictionary 'default.dict'
2020.08.11 13:10:04.617873 [ 87 ] {} <Trace> FileDictionary: loadAll File: /var/lib/clickhouse/user_files/dict.txt TabSeparated
2020.08.11 13:10:04.621551 [ 87 ] {} <Trace> ExternalDictionariesLoader: Supposed update time for 'default.dict' is 2020-08-11 13:15:57 (loaded, lifetime [300, 600], no errors)
2020.08.11 13:10:04.621912 [ 87 ] {} <Trace> ExternalDictionariesLoader: Next update time for 'default.dict' was set to 2020-08-11 13:15:57
2020.08.11 13:10:04.622352 [ 74 ] {e4325a36-cb4a-46ec-9da2-bf7afd74ff87} <Trace> ContextAccess (default): Access granted: dictGet ON default.dict
2020.08.11 13:10:04.625151 [ 74 ] {e4325a36-cb4a-46ec-9da2-bf7afd74ff87} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 13:10:04.627973 [ 74 ] {e4325a36-cb4a-46ec-9da2-bf7afd74ff87} <Debug> default.table: Loading data parts
2020.08.11 13:10:04.629402 [ 74 ] {e4325a36-cb4a-46ec-9da2-bf7afd74ff87} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 13:10:04.642357 [ 74 ] {e4325a36-cb4a-46ec-9da2-bf7afd74ff87} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 13:10:04.642698 [ 74 ] {} <Information> TCPHandler: Processed in 0.028144859 sec.
2020.08.11 13:10:04.643454 [ 74 ] {} <Information> TCPHandler: Done processing connection.
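For readability, these are the two DDL statements the client ran against 20.6 in the session above, reformatted from the `executeQuery` log lines (identifiers, paths and options are copied verbatim; only the layout is mine):

```sql
-- Dictionary and table as created in the 20.6 session above
-- (reformatted from the executeQuery log lines; content unchanged).
CREATE DICTIONARY IF NOT EXISTS default.dict
(
    key String,
    value UInt32
)
PRIMARY KEY "key"
LAYOUT(COMPLEX_KEY_HASHED())
SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated'))
LIFETIME(MIN 300 MAX 600);

CREATE TABLE IF NOT EXISTS default.table
(
    site_id UInt32,
    stamp LowCardinality(Nullable(String)),
    md_ad_format String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))
)
ENGINE MergeTree()
ORDER BY tuple();
```

Both statements succeed here because the dictionary is already registered with ExternalDictionariesLoader. The restart below (PID 1) then has to re-attach `default.table` from /var/lib/clickhouse/metadata/default/table.sql, which is where the failure occurs.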
2020.08.11 13:10:04.651202 [ 47 ] {} <Trace> BaseDaemon: Received signal 15
2020.08.11 13:10:04.651678 [ 47 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 13:10:04.652249 [ 38 ] {} <Debug> Application: Received termination signal.
2020.08.11 13:10:04.652469 [ 38 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 13:10:05.118760 [ 38 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 13:10:05.119041 [ 38 ] {} <Information> Application: Closed connections.
2020.08.11 13:10:05.120922 [ 38 ] {} <Information> Application: Shutting down storages.
2020.08.11 13:10:05.121381 [ 49 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 4 entries to flush
2020.08.11 13:10:05.121704 [ 49 ] {} <Debug> SystemLog (system.query_log): Creating new table system.query_log for QueryLog
2020.08.11 13:10:05.124812 [ 49 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 13:10:05.125511 [ 49 ] {} <Debug> system.query_log: Loaded data parts (0 items)
2020.08.11 13:10:05.129215 [ 49 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:10:05.130703 [ 49 ] {} <Trace> system.query_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:10:05.131241 [ 49 ] {} <Trace> SystemLog (system.query_log): Flushed system log
2020.08.11 13:10:05.131464 [ 49 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 13:10:05.132027 [ 52 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 2 entries to flush
2020.08.11 13:10:05.132421 [ 52 ] {} <Debug> SystemLog (system.query_thread_log): Creating new table system.query_thread_log for QueryThreadLog
2020.08.11 13:10:05.134628 [ 52 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 13:10:05.135268 [ 52 ] {} <Debug> system.query_thread_log: Loaded data parts (0 items)
2020.08.11 13:10:05.138476 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:10:05.140508 [ 52 ] {} <Trace> system.query_thread_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:10:05.141472 [ 52 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log
2020.08.11 13:10:05.141895 [ 52 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 13:10:05.142525 [ 54 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 13:10:05.142810 [ 54 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
2020.08.11 13:10:05.144732 [ 54 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 13:10:05.145621 [ 54 ] {} <Debug> system.trace_log: Loaded data parts (0 items)
2020.08.11 13:10:05.149318 [ 54 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:10:05.151014 [ 54 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:10:05.151429 [ 54 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 13:10:05.151770 [ 54 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 13:10:05.701751 [ 50 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 2 entries to flush
2020.08.11 13:10:05.702226 [ 50 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 13:10:05.707782 [ 50 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 13:10:05.710045 [ 50 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 13:10:05.718913 [ 50 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:10:05.721873 [ 50 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:10:05.723319 [ 50 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 13:10:05.723965 [ 50 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 13:10:05.724532 [ 51 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 13:10:05.725071 [ 38 ] {} <Trace> ExternalDictionariesLoader: Unloading 'default.dict' because its configuration has been removed or detached
2020.08.11 13:10:05.728213 [ 38 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2020.08.11 13:10:05.728745 [ 38 ] {} <Debug> Application: Shut down storages.
2020.08.11 13:10:05.729926 [ 38 ] {} <Debug> Application: Destroyed global context.
2020.08.11 13:10:05.730652 [ 38 ] {} <Information> Application: shutting down
2020.08.11 13:10:05.730890 [ 38 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 13:10:05.731345 [ 47 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 13:10:05.731677 [ 47 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 13:10:05.787930 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 13:10:05.792616 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:10:05.844640 [ 1 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 1
2020.08.11 13:10:05.845125 [ 1 ] {} <Information> Application: starting up
2020.08.11 13:10:05.854497 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 13:10:05.855096 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 13:10:05.855286 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 13:10:05.855592 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 13:10:05.855885 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 13:10:05.856587 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '70c2fbcc218d' as replica host.
2020.08.11 13:10:05.857756 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 13:10:05.859913 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 13:10:05.861306 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 13:10:05.861846 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:10:05.862291 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:10:05.862670 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 13:10:05.862978 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 13:10:05.867165 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 4 tables and 0 dictionaries.
2020.08.11 13:10:05.868967 [ 116 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 13:10:05.871832 [ 116 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 13:10:05.871895 [ 114 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 13:10:05.871940 [ 113 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 13:10:05.875953 [ 116 ] {} <Debug> system.query_thread_log: Loaded data parts (1 items)
2020.08.11 13:10:05.878424 [ 114 ] {} <Debug> system.trace_log: Loaded data parts (1 items)
2020.08.11 13:10:05.885701 [ 115 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 13:10:05.887145 [ 113 ] {} <Debug> system.query_log: Loaded data parts (1 items)
2020.08.11 13:10:05.888322 [ 115 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 13:10:05.889959 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 13:10:05.895867 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 13:10:05.898975 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. ? @ 0xf241e19 in /usr/bin/clickhouse
15. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
16. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
17. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
18. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
19. ? @ 0xeffd6c8 in /usr/bin/clickhouse
20. ? @ 0xeffe082 in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
24. ? @ 0xa26f4a3 in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
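Read bottom-up, the trace shows where startup fails: `createTableFromAST` → `InterpreterCreateQuery::getColumnsDescription` → `validateColumnsDefaultsAndGetSampleBlock` evaluates the column DEFAULT expression, `dictGet` asks `ExternalLoader` for 'default.dict', and the lookup throws "not found" because at that point of metadata loading the dictionary from the same database is apparently not registered yet (the database reports "1 tables and 1 dictionaries" just before the error). The statement the server was trying to replay, reformatted from the exception message (content unchanged):

```sql
-- ATTACH TABLE replayed from /var/lib/clickhouse/metadata/default/table.sql on 20.6,
-- reformatted from the exception above; the DEFAULT is what triggers the dictionary lookup.
ATTACH TABLE table
(
    `site_id` UInt32,
    `stamp` LowCardinality(Nullable(String)),
    `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))
)
ENGINE = MergeTree()
ORDER BY tuple()
SETTINGS index_granularity = 8192
```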
2020.08.11 13:10:05.900273 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 13:10:05.900704 [ 131 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 13:10:05.901847 [ 140 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 13:10:05.902353 [ 139 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 13:10:06.893957 [ 134 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 1 entries to flush
2020.08.11 13:10:06.895061 [ 134 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 13:10:06.899032 [ 134 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.33 GiB.
2020.08.11 13:10:06.902617 [ 134 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 13:10:06.904164 [ 134 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 13:10:06.904495 [ 134 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 13:10:06.905028 [ 136 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 13:10:06.909184 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 13:10:06.910442 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 13:10:06.911275 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 13:10:06.911830 [ 1 ] {} <Information> Application: shutting down
2020.08.11 13:10:06.912156 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 13:10:06.912551 [ 112 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 13:10:06.912832 [ 112 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 13:17:59.784676 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:17:59.793883 [ 1 ] {} <Information> : Starting ClickHouse 20.1.16.120 with revision 54431
2020.08.11 13:17:59.794417 [ 1 ] {} <Information> Application: starting up
2020.08.11 13:17:59.806362 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 13:17:59.807074 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 13:17:59.807360 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'Etc/UTC'.
2020.08.11 13:17:59.808504 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'f14b4f12b51a' as replica host.
2020.08.11 13:17:59.812946 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 13:17:59.815602 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:17:59.816219 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:17:59.816673 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 13:17:59.826761 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 13:17:59.827085 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 13:17:59.827554 [ 1 ] {} <Debug> Application: Loaded metadata.
2020.08.11 13:17:59.827845 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:17:59.828456 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 13:17:59.859044 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 13:17:59.859326 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 13:17:59.862677 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.1.16.120 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:17:59.863795 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.1.16.120 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:17:59.864814 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.1.16.120 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:17:59.865410 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 13:17:59.865862 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 13:17:59.866568 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 13:17:59.868776 [ 1 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 13:17:59.869083 [ 1 ] {} <Information> Application: Ready for connections.
2020.08.11 13:18:00.758891 [ 29 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41618, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 13:18:00.798319 [ 30 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44072
2020.08.11 13:18:00.798997 [ 30 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.1.0, revision: 54431, user: default.
2020.08.11 13:18:00.811044 [ 30 ] {1eb79456-e059-4446-bc7a-e3c337095399} <Debug> executeQuery: (from 127.0.0.1:44072) CREATE DICTIONARY IF NOT EXISTS default.dict (`key` String, `value` UInt32) PRIMARY KEY key SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 13:18:00.818842 [ 30 ] {1eb79456-e059-4446-bc7a-e3c337095399} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 13:18:00.819399 [ 30 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 13:18:00.819816 [ 30 ] {} <Information> TCPHandler: Processed in 0.009 sec.
2020.08.11 13:18:00.825157 [ 30 ] {96532910-3494-446f-81a9-c6fefeab553c} <Debug> executeQuery: (from 127.0.0.1:44072) CREATE TABLE IF NOT EXISTS default.table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 13:18:00.831191 [ 30 ] {96532910-3494-446f-81a9-c6fefeab553c} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 13:18:00.834742 [ 30 ] {96532910-3494-446f-81a9-c6fefeab553c} <Debug> default.table: Loading data parts
2020.08.11 13:18:00.836416 [ 30 ] {96532910-3494-446f-81a9-c6fefeab553c} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 13:18:00.840820 [ 30 ] {96532910-3494-446f-81a9-c6fefeab553c} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 13:18:00.841323 [ 30 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 13:18:00.841512 [ 30 ] {} <Information> TCPHandler: Processed in 0.017 sec.
2020.08.11 13:18:00.841808 [ 30 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 13:18:00.847928 [ 49 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 13:18:00.848780 [ 1 ] {} <Debug> Application: Received termination signal.
2020.08.11 13:18:00.849242 [ 1 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 13:18:01.133386 [ 1 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 13:18:01.134474 [ 1 ] {} <Information> Application: Closed connections.
2020.08.11 13:18:01.146901 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 13:18:01.799344 [ 1 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 13:18:01.800117 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 13:18:01.801361 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 13:18:01.802120 [ 1 ] {} <Information> Application: shutting down
2020.08.11 13:18:01.802423 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 13:18:01.802795 [ 49 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 13:18:01.860817 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:18:01.864815 [ 1 ] {} <Information> : Starting ClickHouse 20.1.16.120 with revision 54431
2020.08.11 13:18:01.865202 [ 1 ] {} <Information> Application: starting up
2020.08.11 13:18:01.872367 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 13:18:01.872918 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 13:18:01.873345 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'Etc/UTC'.
2020.08.11 13:18:01.874347 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'f14b4f12b51a' as replica host.
2020.08.11 13:18:01.877476 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 13:18:01.879939 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:18:01.880384 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:18:01.880848 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 13:18:01.881684 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 0 tables and 0 dictionaries.
2020.08.11 13:18:01.881903 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 13:18:01.891050 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 13:18:01.946409 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: Cannot attach table '`table`' from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT CAST(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))), 'String')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. 0x102b97a0 Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) in /usr/bin/clickhouse
1. 0x8e885cd DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) in /usr/bin/clickhouse
2. 0xcf09c99 ? in /usr/bin/clickhouse
3. 0xcf14f63 std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const in /usr/bin/clickhouse
4. 0x92e8933 DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
5. 0x90f7825 DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
6. 0x911a9fa DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
7. 0x911adf2 DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
8. 0x912117c DB::FunctionOverloadResolverAdaptor::build(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const in /usr/bin/clickhouse
9. 0xcf6a86e DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) in /usr/bin/clickhouse
10. 0xcf6aa7d DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) in /usr/bin/clickhouse
11. 0xcf4d821 DB::ScopeStack::addAction(DB::ExpressionAction const&) in /usr/bin/clickhouse
12. 0xcf533c2 DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) in /usr/bin/clickhouse
13. 0xcf521b6 DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) in /usr/bin/clickhouse
14. 0xcf521b6 DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) in /usr/bin/clickhouse
15. 0xcf3f5a9 DB::InDepthNodeVisitor<DB::ActionsMatcher, true, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) in /usr/bin/clickhouse
16. 0xcf34bb3 ? in /usr/bin/clickhouse
17. 0xcf36ee5 DB::ExpressionAnalyzer::getActions(bool, bool) in /usr/bin/clickhouse
18. 0xcf25d2b DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&) in /usr/bin/clickhouse
19. 0xcfb7592 DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) in /usr/bin/clickhouse
20. 0xcfad8b5 ? in /usr/bin/clickhouse
21. 0xcfae00b ? in /usr/bin/clickhouse
22. 0x8eaca87 ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) in /usr/bin/clickhouse
23. 0x8ead0e8 ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const in /usr/bin/clickhouse
24. 0x8eabf97 ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) in /usr/bin/clickhouse
25. 0x8eaa3a3 ? in /usr/bin/clickhouse
26. 0x76db start_thread in /lib/x86_64-linux-gnu/libpthread-2.27.so
27. 0x12188f clone in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.1.16.120 (official build))
2020.08.11 13:18:01.947749 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 13:18:01.949334 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 13:18:01.950379 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 13:18:01.951451 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: Cannot attach table '`table`' from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT CAST(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))), 'String')) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 13:18:01.952010 [ 1 ] {} <Information> Application: shutting down
2020.08.11 13:18:01.952563 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 13:18:01.954866 [ 7 ] {} <Information> BaseDaemon: Stop SignalListener thread
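20.1.16.120 fails on restart in the same way. The only notable difference is in the stored metadata: 20.1 wraps the default expression in an explicit CAST to the column type, as the exception above shows. Reformatted from that message (content unchanged):

```sql
-- ATTACH TABLE replayed from metadata on 20.1; note the explicit CAST(..., 'String')
-- around dictGet, compared to the plain dictGet stored by 20.6 above.
ATTACH TABLE table
(
    `site_id` UInt32,
    `stamp` LowCardinality(Nullable(String)),
    `md_ad_format` String DEFAULT CAST(dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))), 'String')
)
ENGINE = MergeTree()
ORDER BY tuple()
SETTINGS index_granularity = 8192
```

The 20.3.16.165 run below reproduces the same startup failure with the same error message.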
2020.08.11 13:23:20.290541 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:23:20.296501 [ 39 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 13:23:20.296835 [ 39 ] {} <Information> Application: starting up
2020.08.11 13:23:20.306159 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 13:23:20.306524 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 13:23:20.306794 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 13:23:20.307359 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 13:23:20.308794 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '516ef61e014b' as replica host.
2020.08.11 13:23:20.313661 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 13:23:20.318095 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:23:20.318665 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:23:20.319015 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 13:23:20.323588 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 13:23:20.323944 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 13:23:20.324437 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 13:23:20.324778 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:23:20.325509 [ 39 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 13:23:20.331206 [ 39 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 13:23:20.331504 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 13:23:20.333870 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:23:20.334766 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:23:20.335747 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:23:20.336621 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 13:23:20.337290 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 13:23:20.337647 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 13:23:20.338028 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 13:23:20.339548 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 13:23:20.340271 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 13:23:20.340546 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 13:23:20.467673 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 13:23:20.469850 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 13:23:20.470231 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 13:23:21.262096 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41678, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 13:23:21.291514 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44132
2020.08.11 13:23:21.292383 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.3.0, revision: 54433, user: default.
2020.08.11 13:23:21.299640 [ 74 ] {73a2b356-c37f-4b97-bf0f-5a4b2b94ca1d} <Debug> executeQuery: (from 127.0.0.1:44132) CREATE DICTIONARY IF NOT EXISTS default.dict (`key` String, `value` UInt32) PRIMARY KEY key SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 13:23:21.300103 [ 74 ] {73a2b356-c37f-4b97-bf0f-5a4b2b94ca1d} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 13:23:21.300398 [ 74 ] {73a2b356-c37f-4b97-bf0f-5a4b2b94ca1d} <Trace> AccessRightsContext (default): Settings: readonly=0, allow_ddl=1, allow_introspection_functions=0
2020.08.11 13:23:21.300730 [ 74 ] {73a2b356-c37f-4b97-bf0f-5a4b2b94ca1d} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 13:23:21.301254 [ 74 ] {73a2b356-c37f-4b97-bf0f-5a4b2b94ca1d} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 13:23:21.304226 [ 74 ] {73a2b356-c37f-4b97-bf0f-5a4b2b94ca1d} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 13:23:21.304611 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 13:23:21.305099 [ 74 ] {} <Information> TCPHandler: Processed in 0.006 sec.
2020.08.11 13:23:21.306716 [ 74 ] {e51486b2-dfd2-4626-9510-2fc478f1678a} <Debug> executeQuery: (from 127.0.0.1:44132) CREATE TABLE IF NOT EXISTS default.table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 13:23:21.307123 [ 74 ] {e51486b2-dfd2-4626-9510-2fc478f1678a} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.table
2020.08.11 13:23:21.311059 [ 74 ] {e51486b2-dfd2-4626-9510-2fc478f1678a} <Trace> AccessRightsContext (default): Access granted: dictGet() ON default.dict
2020.08.11 13:23:21.312429 [ 74 ] {e51486b2-dfd2-4626-9510-2fc478f1678a} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 13:23:21.314565 [ 74 ] {e51486b2-dfd2-4626-9510-2fc478f1678a} <Debug> default.table: Loading data parts
2020.08.11 13:23:21.315389 [ 74 ] {e51486b2-dfd2-4626-9510-2fc478f1678a} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 13:23:21.317592 [ 74 ] {e51486b2-dfd2-4626-9510-2fc478f1678a} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 13:23:21.317969 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 13:23:21.318285 [ 74 ] {} <Information> TCPHandler: Processed in 0.012 sec.
2020.08.11 13:23:21.318599 [ 74 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 13:23:21.323881 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 13:23:21.324358 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 13:23:21.324658 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 13:23:22.001914 [ 39 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 13:23:22.002575 [ 39 ] {} <Information> Application: Closed connections.
2020.08.11 13:23:22.006575 [ 39 ] {} <Information> Application: Shutting down storages.
2020.08.11 13:23:22.328279 [ 52 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 13:23:22.328930 [ 52 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 13:23:22.343405 [ 52 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 13:23:22.345381 [ 52 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 13:23:22.353948 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 13:23:22.412659 [ 52 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 13:23:22.415588 [ 39 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 13:23:22.416306 [ 39 ] {} <Debug> Application: Shut down storages.
2020.08.11 13:23:22.417503 [ 39 ] {} <Debug> Application: Destroyed global context.
2020.08.11 13:23:22.418342 [ 39 ] {} <Information> Application: shutting down
2020.08.11 13:23:22.418626 [ 39 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 13:23:22.419010 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 13:23:22.472207 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 13:23:22.476030 [ 1 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 13:23:22.476268 [ 1 ] {} <Information> Application: starting up
2020.08.11 13:23:22.483039 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 13:23:22.483391 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 13:23:22.483623 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 13:23:22.484025 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 13:23:22.484804 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '516ef61e014b' as replica host.
2020.08.11 13:23:22.487860 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 13:23:22.490958 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:23:22.493537 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 13:23:22.493849 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 13:23:22.496958 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 1 tables and 0 dictionaries.
2020.08.11 13:23:22.500307 [ 113 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 13:23:22.507468 [ 113 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 13:23:22.515023 [ 113 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 13:23:22.515676 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 13:23:22.519463 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 13:23:22.561346 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0xd076011 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd081333 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x93f2561 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x93f4cd3 in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x91da895 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x91fe4da in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x91fe8d2 in /usr/bin/clickhouse
9. DB::FunctionOverloadResolverAdaptor::build(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x9204c5c in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13aaae in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13acad in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xd3623ed in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd369b1a in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
15. ? @ 0xd34bd1e in /usr/bin/clickhouse
16. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xd34e85f in /usr/bin/clickhouse
17. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xd6a8c4a in /usr/bin/clickhouse
18. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&) @ 0xd092ba3 in /usr/bin/clickhouse
19. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xd0ae74a in /usr/bin/clickhouse
20. ? @ 0xd0a4def in /usr/bin/clickhouse
21. ? @ 0xd0a55d5 in /usr/bin/clickhouse
22. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
23. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
24. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/clickhouse
25. ? @ 0x8fb97d3 in /usr/bin/clickhouse
26. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
27. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.3.16.165 (official build))
2020.08.11 13:23:22.562558 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 13:23:23.519083 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 13:23:23.519997 [ 133 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 13:23:23.523346 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 13:23:23.574949 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 13:23:23.577856 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 13:23:23.578919 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 13:23:23.579588 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 13:23:23.580085 [ 1 ] {} <Information> Application: shutting down
2020.08.11 13:23:23.580324 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 13:23:23.580859 [ 112 ] {} <Information> BaseDaemon: Stop SignalListener thread
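
For reference, the two statements that the reproduction client sends in the sessions below (they appear verbatim in the executeQuery log lines) are, reformatted here only for readability:

CREATE DICTIONARY IF NOT EXISTS default.dict
(
    key String,
    value String
)
PRIMARY KEY "key"
LAYOUT(COMPLEX_KEY_HASHED())
SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated'))
LIFETIME(MIN 300 MAX 600);

CREATE TABLE IF NOT EXISTS default.table
(
    site_id UInt32,
    stamp LowCardinality(Nullable(String)),
    -- the DEFAULT below is what fails to validate when the table is re-attached on startup
    md_ad_format String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))
)
ENGINE = MergeTree()
ORDER BY tuple();

On a fresh data directory both statements succeed; the error only appears when the server restarts and replays /var/lib/clickhouse/metadata/default/table.sql before default.dict has been loaded, as the stack traces in this log show.
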
2020.08.11 14:04:28.863575 [ 39 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 14:04:28.870935 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 14:04:28.965341 [ 39 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 39
2020.08.11 14:04:28.966237 [ 39 ] {} <Information> Application: starting up
2020.08.11 14:04:28.981463 [ 39 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 14:04:28.982720 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 14:04:28.983397 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 14:04:28.983738 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 14:04:28.984282 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 14:04:28.985946 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '1b98af14fd63' as replica host.
2020.08.11 14:04:28.988939 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 14:04:29.010330 [ 39 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 14:04:29.012980 [ 39 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 14:04:29.013680 [ 39 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 14:04:29.019352 [ 39 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 14:04:29.022035 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 14:04:29.022567 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 14:04:29.023227 [ 39 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 14:04:29.023657 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 14:04:29.032296 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 14:04:29.032602 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 14:04:29.034290 [ 39 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2020.08.11 14:04:29.043424 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 14:04:29.043956 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 14:04:29.044836 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 14:04:29.056556 [ 39 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:29.058760 [ 39 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:29.059961 [ 39 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:29.061235 [ 39 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:29.062520 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 14:04:29.062985 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 14:04:29.063567 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 14:04:29.065943 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.6.3.28 (official build))
2020.08.11 14:04:29.066893 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.6.3.28 (official build))
2020.08.11 14:04:29.067344 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 14:04:29.240018 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 14:04:29.241856 [ 56 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 14:04:29.242330 [ 56 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 14:04:29.242445 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 14:04:29.243374 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 14:04:29.823996 [ 74 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41684, User-Agent: Wget/1.20.3 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 14:04:29.857406 [ 75 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44138
2020.08.11 14:04:29.858251 [ 75 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.6.0, revision: 54436, user: default.
2020.08.11 14:04:29.859472 [ 75 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2020.08.11 14:04:29.860088 [ 75 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, SYSTEM, dictGet, SOURCES ON *.*
2020.08.11 14:04:29.868096 [ 75 ] {f38ff7b5-72f2-47e6-86d5-6d576359de34} <Debug> executeQuery: (from 127.0.0.1:44138) CREATE DICTIONARY IF NOT EXISTS default.dict( key String, value String ) PRIMARY KEY "key" LAYOUT(COMPLEX_KEY_HASHED()) SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated')) LIFETIME(MIN 300 MAX 600);
2020.08.11 14:04:29.869055 [ 75 ] {f38ff7b5-72f2-47e6-86d5-6d576359de34} <Trace> ContextAccess (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 14:04:29.872713 [ 75 ] {f38ff7b5-72f2-47e6-86d5-6d576359de34} <Trace> ExternalDictionariesLoader: Loading config file '/var/lib/clickhouse/metadata/default/dict.sql.tmp'.
2020.08.11 14:04:29.873695 [ 75 ] {f38ff7b5-72f2-47e6-86d5-6d576359de34} <Trace> ExternalDictionariesLoader: Loading config file 'default.dict'.
2020.08.11 14:04:29.874952 [ 75 ] {f38ff7b5-72f2-47e6-86d5-6d576359de34} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 14:04:29.876556 [ 75 ] {} <Information> TCPHandler: Processed in 0.008829361 sec.
2020.08.11 14:04:29.878607 [ 75 ] {2a62e710-115a-4f2e-bafd-c87f93ee3ac5} <Debug> executeQuery: (from 127.0.0.1:44138) CREATE TABLE IF NOT EXISTS default.table ( site_id UInt32, stamp LowCardinality(Nullable(String)), md_ad_format String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) ) ENGINE MergeTree() ORDER BY tuple();
2020.08.11 14:04:29.879257 [ 75 ] {2a62e710-115a-4f2e-bafd-c87f93ee3ac5} <Trace> ContextAccess (default): Access granted: CREATE TABLE ON default.table
2020.08.11 14:04:29.880238 [ 75 ] {2a62e710-115a-4f2e-bafd-c87f93ee3ac5} <Trace> ExternalDictionariesLoader: Will load the object 'default.dict' in background, force = false, loading_id = 1
2020.08.11 14:04:29.881382 [ 88 ] {} <Trace> ExternalDictionariesLoader: Start loading object 'default.dict'
2020.08.11 14:04:29.882219 [ 88 ] {} <Trace> DictionaryFactory: Created dictionary source 'File: /var/lib/clickhouse/user_files/dict.txt TabSeparated' for dictionary 'default.dict'
2020.08.11 14:04:29.882946 [ 88 ] {} <Trace> FileDictionary: loadAll File: /var/lib/clickhouse/user_files/dict.txt TabSeparated
2020.08.11 14:04:29.888340 [ 88 ] {} <Trace> ExternalDictionariesLoader: Supposed update time for 'default.dict' is 2020-08-11 14:11:15 (loaded, lifetime [300, 600], no errors)
2020.08.11 14:04:29.888734 [ 88 ] {} <Trace> ExternalDictionariesLoader: Next update time for 'default.dict' was set to 2020-08-11 14:11:15
2020.08.11 14:04:29.889405 [ 75 ] {2a62e710-115a-4f2e-bafd-c87f93ee3ac5} <Trace> ContextAccess (default): Access granted: dictGet ON default.dict
2020.08.11 14:04:29.890785 [ 75 ] {2a62e710-115a-4f2e-bafd-c87f93ee3ac5} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 14:04:29.893848 [ 75 ] {2a62e710-115a-4f2e-bafd-c87f93ee3ac5} <Debug> default.table: Loading data parts
2020.08.11 14:04:29.895818 [ 75 ] {2a62e710-115a-4f2e-bafd-c87f93ee3ac5} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 14:04:29.909354 [ 75 ] {2a62e710-115a-4f2e-bafd-c87f93ee3ac5} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 14:04:29.910208 [ 75 ] {} <Information> TCPHandler: Processed in 0.03225173 sec.
2020.08.11 14:04:29.910674 [ 75 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 14:04:29.917665 [ 48 ] {} <Trace> BaseDaemon: Received signal 15
2020.08.11 14:04:29.918323 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 14:04:29.918792 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 14:04:29.919093 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 14:04:30.508441 [ 39 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 14:04:30.508847 [ 39 ] {} <Information> Application: Closed connections.
2020.08.11 14:04:30.512599 [ 39 ] {} <Information> Application: Shutting down storages.
2020.08.11 14:04:30.513250 [ 53 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 4 entries to flush
2020.08.11 14:04:30.513669 [ 53 ] {} <Debug> SystemLog (system.query_log): Creating new table system.query_log for QueryLog
2020.08.11 14:04:30.517177 [ 53 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 14:04:30.517950 [ 53 ] {} <Debug> system.query_log: Loaded data parts (0 items)
2020.08.11 14:04:30.522541 [ 53 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 14:04:30.524922 [ 53 ] {} <Trace> system.query_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:30.525915 [ 53 ] {} <Trace> SystemLog (system.query_log): Flushed system log
2020.08.11 14:04:30.526550 [ 53 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 14:04:30.527333 [ 50 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 2 entries to flush
2020.08.11 14:04:30.527687 [ 50 ] {} <Debug> SystemLog (system.query_thread_log): Creating new table system.query_thread_log for QueryThreadLog
2020.08.11 14:04:30.529830 [ 50 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 14:04:30.530674 [ 50 ] {} <Debug> system.query_thread_log: Loaded data parts (0 items)
2020.08.11 14:04:30.535033 [ 50 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 14:04:30.537851 [ 50 ] {} <Trace> system.query_thread_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:30.538840 [ 50 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log
2020.08.11 14:04:30.539184 [ 50 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 14:04:30.539950 [ 54 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 2 entries to flush
2020.08.11 14:04:30.540280 [ 54 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
2020.08.11 14:04:30.542387 [ 54 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 14:04:30.543256 [ 54 ] {} <Debug> system.trace_log: Loaded data parts (0 items)
2020.08.11 14:04:30.547723 [ 54 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 14:04:30.549230 [ 54 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:30.549672 [ 54 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 14:04:30.549953 [ 54 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 14:04:31.029923 [ 52 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 2 entries to flush
2020.08.11 14:04:31.030303 [ 52 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 14:04:31.036041 [ 52 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 14:04:31.036990 [ 52 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 14:04:31.046751 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 14:04:31.050089 [ 52 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:31.051392 [ 52 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 14:04:31.051665 [ 52 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 14:04:31.053606 [ 55 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushing system log, 45 entries to flush
2020.08.11 14:04:31.054221 [ 55 ] {} <Debug> SystemLog (system.asynchronous_metric_log): Creating new table system.asynchronous_metric_log for AsynchronousMetricLog
2020.08.11 14:04:31.055519 [ 55 ] {} <Debug> system.asynchronous_metric_log: Loading data parts
2020.08.11 14:04:31.056236 [ 55 ] {} <Debug> system.asynchronous_metric_log: Loaded data parts (0 items)
2020.08.11 14:04:31.059457 [ 55 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 14:04:31.060808 [ 55 ] {} <Trace> system.asynchronous_metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:31.061349 [ 55 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushed system log
2020.08.11 14:04:31.061599 [ 55 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 14:04:31.062558 [ 39 ] {} <Trace> ExternalDictionariesLoader: Unloading 'default.dict' because its configuration has been removed or detached
2020.08.11 14:04:31.065759 [ 39 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2020.08.11 14:04:31.066717 [ 39 ] {} <Debug> Application: Shut down storages.
2020.08.11 14:04:31.068614 [ 39 ] {} <Debug> Application: Destroyed global context.
2020.08.11 14:04:31.069409 [ 39 ] {} <Information> Application: shutting down
2020.08.11 14:04:31.069734 [ 39 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 14:04:31.070069 [ 48 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 14:04:31.070798 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 14:04:31.159502 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 14:04:31.164258 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 14:04:31.220897 [ 1 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 1
2020.08.11 14:04:31.221427 [ 1 ] {} <Information> Application: starting up
2020.08.11 14:04:31.232844 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 14:04:31.233422 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 14:04:31.233704 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 14:04:31.234218 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 14:04:31.234641 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 14:04:31.235620 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '1b98af14fd63' as replica host.
2020.08.11 14:04:31.238380 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 14:04:31.240982 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 14:04:31.242728 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 14:04:31.243288 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 14:04:31.243702 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 14:04:31.244066 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 14:04:31.244249 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 14:04:31.251470 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 5 tables and 0 dictionaries.
2020.08.11 14:04:31.255878 [ 116 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 14:04:31.259487 [ 116 ] {} <Debug> system.asynchronous_metric_log: Loading data parts
2020.08.11 14:04:31.259765 [ 117 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 14:04:31.259940 [ 114 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 14:04:31.260829 [ 118 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 14:04:31.263147 [ 116 ] {} <Debug> system.asynchronous_metric_log: Loaded data parts (1 items)
2020.08.11 14:04:31.263552 [ 115 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 14:04:31.263624 [ 118 ] {} <Debug> system.query_thread_log: Loaded data parts (1 items)
2020.08.11 14:04:31.266348 [ 114 ] {} <Debug> system.query_log: Loaded data parts (1 items)
2020.08.11 14:04:31.267333 [ 117 ] {} <Debug> system.trace_log: Loaded data parts (1 items)
2020.08.11 14:04:31.269049 [ 115 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 14:04:31.286964 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 14:04:31.306506 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 14:04:31.311394 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. ? @ 0xf241e19 in /usr/bin/clickhouse
15. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
16. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
17. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
18. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
19. ? @ 0xeffd6c8 in /usr/bin/clickhouse
20. ? @ 0xeffe082 in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
24. ? @ 0xa26f4a3 in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 14:04:31.313204 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 14:04:31.313609 [ 136 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 14:04:31.314246 [ 143 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 14:04:31.315003 [ 141 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 14:04:32.303240 [ 142 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 1 entries to flush
2020.08.11 14:04:32.305132 [ 142 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 14:04:32.312787 [ 142 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 14:04:32.344592 [ 142 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 14:04:32.348900 [ 142 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 14:04:32.354133 [ 142 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 14:04:32.367529 [ 126 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 14:04:32.377090 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 14:04:32.379042 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 14:04:32.380566 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 14:04:32.383058 [ 1 ] {} <Information> Application: shutting down
2020.08.11 14:04:32.383819 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 14:04:32.387856 [ 113 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 14:04:32.413217 [ 113 ] {} <Information> BaseDaemon: Stop SignalListener thread
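
While the server is up on a clean data directory (as in the run below), the dictionary itself loads without errors, so its state can be inspected interactively before restarting; a minimal sketch, assuming the system.dictionaries columns shipped with 20.6 (column names may differ slightly between releases):

SELECT database, name, status, last_exception
FROM system.dictionaries
WHERE name = 'dict';

The ExternalDictionariesLoader lines ("loaded, lifetime [300, 600], no errors") say the same thing: the dictionary is healthy, and the exception is raised only while the table metadata is re-attached at startup.
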
2020.08.11 14:04:47.521794 [ 40 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 14:04:47.527419 [ 40 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 14:04:47.609390 [ 40 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 40
2020.08.11 14:04:47.610202 [ 40 ] {} <Information> Application: starting up
2020.08.11 14:04:47.628506 [ 40 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 14:04:47.629422 [ 40 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 14:04:47.629942 [ 40 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 14:04:47.630303 [ 40 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 14:04:47.631053 [ 40 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 14:04:47.634561 [ 40 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '5ee93164513e' as replica host.
2020.08.11 14:04:47.637430 [ 40 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 14:04:47.641295 [ 40 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 14:04:47.644517 [ 40 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 14:04:47.645478 [ 40 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 14:04:47.646364 [ 40 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 14:04:47.648988 [ 40 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 14:04:47.649745 [ 40 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 14:04:47.653118 [ 40 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 14:04:47.653479 [ 40 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 14:04:47.665957 [ 40 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 14:04:47.666799 [ 40 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 14:04:47.667409 [ 40 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2020.08.11 14:04:47.674087 [ 40 ] {} <Debug> Application: Loaded metadata.
2020.08.11 14:04:47.674607 [ 40 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 14:04:47.677726 [ 40 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 14:04:47.693327 [ 40 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:47.694812 [ 40 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:47.696546 [ 40 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:47.698746 [ 40 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 14:04:47.700742 [ 40 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 14:04:47.701280 [ 40 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 14:04:47.702487 [ 40 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 14:04:47.705334 [ 40 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.6.3.28 (official build))
2020.08.11 14:04:47.706251 [ 40 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.6.3.28 (official build))
2020.08.11 14:04:47.706578 [ 40 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 14:04:47.947004 [ 40 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 14:04:47.949827 [ 57 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 14:04:47.950398 [ 40 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 14:04:47.950944 [ 40 ] {} <Information> Application: Ready for connections.
2020.08.11 14:04:47.950553 [ 57 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 14:04:48.480859 [ 75 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41690, User-Agent: Wget/1.20.3 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 14:04:48.514249 [ 76 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44144
2020.08.11 14:04:48.515207 [ 76 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.6.0, revision: 54436, user: default.
2020.08.11 14:04:48.516584 [ 76 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2020.08.11 14:04:48.517617 [ 76 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, SYSTEM, dictGet, SOURCES ON *.*
2020.08.11 14:04:48.526339 [ 76 ] {1777b8f8-4653-4644-92ec-8dd5688fb814} <Debug> executeQuery: (from 127.0.0.1:44144) CREATE DICTIONARY IF NOT EXISTS default.dict( key String, value String ) PRIMARY KEY "key" LAYOUT(COMPLEX_KEY_HASHED()) SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated')) LIFETIME(MIN 300 MAX 600);
2020.08.11 14:04:48.526907 [ 76 ] {1777b8f8-4653-4644-92ec-8dd5688fb814} <Trace> ContextAccess (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 14:04:48.529981 [ 76 ] {1777b8f8-4653-4644-92ec-8dd5688fb814} <Trace> ExternalDictionariesLoader: Loading config file '/var/lib/clickhouse/metadata/default/dict.sql.tmp'.
2020.08.11 14:04:48.530711 [ 76 ] {1777b8f8-4653-4644-92ec-8dd5688fb814} <Trace> ExternalDictionariesLoader: Loading config file 'default.dict'.
2020.08.11 14:04:48.531871 [ 76 ] {1777b8f8-4653-4644-92ec-8dd5688fb814} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 14:04:48.532966 [ 76 ] {} <Information> TCPHandler: Processed in 0.0069355 sec.
2020.08.11 14:04:48.535954 [ 76 ] {8c17404b-a1ce-47b9-8d4f-da62911b27a6} <Debug> executeQuery: (from 127.0.0.1:44144) CREATE TABLE IF NOT EXISTS default.table ( site_id UInt32, stamp LowCardinality(Nullable(String)), md_ad_format String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, ''))) ) ENGINE MergeTree() ORDER BY tuple();
2020.08.11 14:04:48.536796 [ 76 ] {8c17404b-a1ce-47b9-8d4f-da62911b27a6} <Trace> ContextAccess (default): Access granted: CREATE TABLE ON default.table
2020.08.11 14:04:48.537866 [ 76 ] {8c17404b-a1ce-47b9-8d4f-da62911b27a6} <Trace> ExternalDictionariesLoader: Will load the object 'default.dict' in background, force = false, loading_id = 1
2020.08.11 14:04:48.538574 [ 89 ] {} <Trace> ExternalDictionariesLoader: Start loading object 'default.dict'
2020.08.11 14:04:48.539409 [ 89 ] {} <Trace> DictionaryFactory: Created dictionary source 'File: /var/lib/clickhouse/user_files/dict.txt TabSeparated' for dictionary 'default.dict'
2020.08.11 14:04:48.539808 [ 89 ] {} <Trace> FileDictionary: loadAll File: /var/lib/clickhouse/user_files/dict.txt TabSeparated
2020.08.11 14:04:48.543720 [ 89 ] {} <Trace> ExternalDictionariesLoader: Supposed update time for 'default.dict' is 2020-08-11 14:11:45 (loaded, lifetime [300, 600], no errors)
2020.08.11 14:04:48.548254 [ 89 ] {} <Trace> ExternalDictionariesLoader: Next update time for 'default.dict' was set to 2020-08-11 14:11:45
2020.08.11 14:04:48.549492 [ 76 ] {8c17404b-a1ce-47b9-8d4f-da62911b27a6} <Trace> ContextAccess (default): Access granted: dictGet ON default.dict
2020.08.11 14:04:48.551841 [ 76 ] {8c17404b-a1ce-47b9-8d4f-da62911b27a6} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 14:04:48.555993 [ 76 ] {8c17404b-a1ce-47b9-8d4f-da62911b27a6} <Debug> default.table: Loading data parts
2020.08.11 14:04:48.558085 [ 76 ] {8c17404b-a1ce-47b9-8d4f-da62911b27a6} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 14:04:48.572859 [ 76 ] {8c17404b-a1ce-47b9-8d4f-da62911b27a6} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 14:04:48.573651 [ 76 ] {} <Information> TCPHandler: Processed in 0.038283626 sec.
2020.08.11 14:04:48.574447 [ 76 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 14:04:48.584239 [ 49 ] {} <Trace> BaseDaemon: Received signal 15
2020.08.11 14:04:48.584717 [ 49 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 14:04:48.585682 [ 40 ] {} <Debug> Application: Received termination signal.
2020.08.11 14:04:48.586352 [ 40 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 14:04:48.965671 [ 40 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 14:04:48.966078 [ 40 ] {} <Information> Application: Closed connections.
2020.08.11 14:04:48.967997 [ 40 ] {} <Information> Application: Shutting down storages.
2020.08.11 14:04:48.968365 [ 55 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 4 entries to flush
2020.08.11 14:04:48.968948 [ 55 ] {} <Debug> SystemLog (system.query_log): Creating new table system.query_log for QueryLog
2020.08.11 14:04:48.993175 [ 55 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 14:04:48.993697 [ 55 ] {} <Debug> system.query_log: Loaded data parts (0 items)
2020.08.11 14:04:48.998579 [ 55 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 14:04:49.001293 [ 55 ] {} <Trace> system.query_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:49.002043 [ 55 ] {} <Trace> SystemLog (system.query_log): Flushed system log
2020.08.11 14:04:49.002292 [ 55 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 14:04:49.003189 [ 52 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 2 entries to flush
2020.08.11 14:04:49.003572 [ 52 ] {} <Debug> SystemLog (system.query_thread_log): Creating new table system.query_thread_log for QueryThreadLog
2020.08.11 14:04:49.006065 [ 52 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 14:04:49.006731 [ 52 ] {} <Debug> system.query_thread_log: Loaded data parts (0 items)
2020.08.11 14:04:49.011722 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 14:04:49.014669 [ 52 ] {} <Trace> system.query_thread_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:49.015465 [ 52 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log
2020.08.11 14:04:49.015760 [ 52 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 14:04:49.016952 [ 53 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 14:04:49.017419 [ 53 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
2020.08.11 14:04:49.020402 [ 53 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 14:04:49.021653 [ 53 ] {} <Debug> system.trace_log: Loaded data parts (0 items)
2020.08.11 14:04:49.025386 [ 53 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 14:04:49.027340 [ 53 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:49.028279 [ 53 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 14:04:49.028662 [ 53 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 14:04:49.661607 [ 54 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 2 entries to flush
2020.08.11 14:04:49.662888 [ 54 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 14:04:49.697743 [ 54 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 14:04:49.704487 [ 54 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 14:04:49.722077 [ 54 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.38 GiB.
2020.08.11 14:04:49.728209 [ 54 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 14:04:49.730135 [ 54 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 14:04:49.730725 [ 54 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 14:04:49.731598 [ 51 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 14:04:49.732098 [ 40 ] {} <Trace> ExternalDictionariesLoader: Unloading 'default.dict' because its configuration has been removed or detached
2020.08.11 14:04:49.739210 [ 40 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2020.08.11 14:04:49.740216 [ 40 ] {} <Debug> Application: Shut down storages.
2020.08.11 14:04:49.741836 [ 40 ] {} <Debug> Application: Destroyed global context.
2020.08.11 14:04:49.742949 [ 40 ] {} <Information> Application: shutting down
2020.08.11 14:04:49.743374 [ 40 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 14:04:49.743660 [ 49 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 14:04:49.744229 [ 49 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 14:04:49.821897 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 14:04:49.828649 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 14:04:49.895557 [ 1 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 1
2020.08.11 14:04:49.896153 [ 1 ] {} <Information> Application: starting up
2020.08.11 14:04:49.917586 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 14:04:49.918227 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 14:04:49.918540 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 14:04:49.918772 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 14:04:49.919204 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 14:04:49.919952 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '5ee93164513e' as replica host.
2020.08.11 14:04:49.922396 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 14:04:49.925087 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 14:04:49.927130 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 14:04:49.927952 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 14:04:49.928378 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 14:04:49.929832 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 14:04:49.930186 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 14:04:49.938252 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 4 tables and 0 dictionaries.
2020.08.11 14:04:49.941218 [ 119 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 14:04:49.945732 [ 119 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 14:04:49.945950 [ 116 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 14:04:49.945937 [ 117 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 14:04:49.951788 [ 118 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 14:04:49.954708 [ 116 ] {} <Debug> system.query_thread_log: Loaded data parts (1 items)
2020.08.11 14:04:49.978584 [ 119 ] {} <Debug> system.query_log: Loaded data parts (1 items)
2020.08.11 14:04:49.994784 [ 117 ] {} <Debug> system.trace_log: Loaded data parts (1 items)
2020.08.11 14:04:49.997219 [ 118 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 14:04:49.999033 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 14:04:50.004938 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 14:04:50.009146 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 36, e.displayText() = DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12405650 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa2423fd in /usr/bin/clickhouse
2. ? @ 0xf01d271 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf028323 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb0722b1 in /usr/bin/clickhouse
5. DB::FunctionDictGetNoType::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xb07453a in /usr/bin/clickhouse
6. DB::DefaultOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae446a5 in /usr/bin/clickhouse
7. DB::FunctionOverloadResolverAdaptor::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6bdfd in /usr/bin/clickhouse
8. DB::FunctionOverloadResolverAdaptor::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xae6c1f5 in /usr/bin/clickhouse
9. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b538c in /usr/bin/clickhouse
10. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xf1b562d in /usr/bin/clickhouse
11. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xf25a6ad in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf264310 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xf262d63 in /usr/bin/clickhouse
14. ? @ 0xf241e19 in /usr/bin/clickhouse
15. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xf2451f7 in /usr/bin/clickhouse
16. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf60d108 in /usr/bin/clickhouse
17. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&, bool) @ 0xf134316 in /usr/bin/clickhouse
18. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool) @ 0xf00ad2d in /usr/bin/clickhouse
19. ? @ 0xeffd6c8 in /usr/bin/clickhouse
20. ? @ 0xeffe082 in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa271a57 in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0xa2721ca in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa270f67 in /usr/bin/clickhouse
24. ? @ 0xa26f4a3 in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122103 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.6.3.28 (official build))
2020.08.11 14:04:50.010441 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 14:04:50.010976 [ 145 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 14:04:50.011651 [ 143 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 14:04:50.012495 [ 136 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 14:04:51.010376 [ 147 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 1 entries to flush
2020.08.11 14:04:51.011819 [ 147 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 14:04:51.017107 [ 147 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 14:04:51.021015 [ 147 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 14:04:51.022990 [ 147 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 14:04:51.023340 [ 147 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 14:04:51.023982 [ 142 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 14:04:51.027357 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 14:04:51.028534 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 14:04:51.029587 [ 1 ] {} <Error> Application: DB::Exception: external dictionary 'default.dict' not found: default expression and column type are incompatible.: Cannot attach table `default`.`table` from metadata file /var/lib/clickhouse/metadata/default/table.sql from query ATTACH TABLE table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192
2020.08.11 14:04:51.030301 [ 1 ] {} <Information> Application: shutting down
2020.08.11 14:04:51.030622 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 14:04:51.031130 [ 114 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 14:04:51.031703 [ 114 ] {} <Information> BaseDaemon: Stop SignalListener thread
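The run above (20.6.3.28, PID 1) dies while loading metadata: attaching `default.table` requires validating the column default `md_ad_format String DEFAULT dictGet('default.dict', ...)`, and the stack trace (`FunctionDictGetNoType::getReturnTypeImpl` → `ExternalLoader::load`) shows that resolving the return type of the untyped `dictGet` needs the dictionary itself, which has not been loaded yet at that point of startup ("external dictionary 'default.dict' not found"). Restated from the failing `ATTACH TABLE` above as a readable `CREATE TABLE` (names and expression copied from the log):

```sql
-- DDL equivalent of the ATTACH TABLE that fails during metadata loading:
-- the untyped dictGet() forces ClickHouse to load default.dict just to
-- determine the type of the column's default expression.
CREATE TABLE default.table
(
    site_id UInt32,
    stamp LowCardinality(Nullable(String)),
    md_ad_format String DEFAULT dictGet('default.dict', 'value', tuple(coalesce(stamp, '')))
)
ENGINE = MergeTree()
ORDER BY tuple()
SETTINGS index_granularity = 8192;
```

The next run below recreates the same objects from scratch, this time using the typed `dictGetString` in the default expression.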
2020.08.11 15:39:17.547832 [ 38 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 15:39:17.556355 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:39:17.600290 [ 38 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 38
2020.08.11 15:39:17.600761 [ 38 ] {} <Information> Application: starting up
2020.08.11 15:39:17.609702 [ 38 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 15:39:17.610338 [ 38 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 15:39:17.610578 [ 38 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 15:39:17.610807 [ 38 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 15:39:17.611567 [ 38 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 15:39:17.612420 [ 38 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'b02ef0b49d60' as replica host.
2020.08.11 15:39:17.613944 [ 38 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 15:39:17.616120 [ 38 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 15:39:17.620717 [ 38 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 15:39:17.621152 [ 38 ] {} <Warning> Access(disk): File /var/lib/clickhouse/access/users.list doesn't exist
2020.08.11 15:39:17.621803 [ 38 ] {} <Warning> Access(disk): Recovering lists in directory /var/lib/clickhouse/access/
2020.08.11 15:39:17.622585 [ 38 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:39:17.622880 [ 38 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:39:17.623230 [ 38 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 15:39:17.623422 [ 38 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 15:39:17.629153 [ 38 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 15:39:17.629503 [ 38 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 15:39:17.629852 [ 38 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2020.08.11 15:39:17.633489 [ 38 ] {} <Debug> Application: Loaded metadata.
2020.08.11 15:39:17.633832 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:39:17.634179 [ 38 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 15:39:17.641775 [ 38 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:17.642667 [ 38 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:17.643922 [ 38 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:17.645196 [ 38 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:17.645808 [ 38 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 15:39:17.646184 [ 38 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 15:39:17.646458 [ 38 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 15:39:17.647564 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.6.3.28 (official build))
2020.08.11 15:39:17.648071 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.6.3.28 (official build))
2020.08.11 15:39:17.648170 [ 38 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 15:39:17.739651 [ 38 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 15:39:17.741194 [ 56 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:39:17.741374 [ 38 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 15:39:17.741776 [ 56 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:39:17.742225 [ 38 ] {} <Information> Application: Ready for connections.
2020.08.11 15:39:18.512972 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41696, User-Agent: Wget/1.20.3 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 15:39:18.544538 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44150
2020.08.11 15:39:18.545324 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.6.0, revision: 54436, user: default.
2020.08.11 15:39:18.546044 [ 74 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2020.08.11 15:39:18.546597 [ 74 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, SYSTEM, dictGet, SOURCES ON *.*
2020.08.11 15:39:18.551968 [ 74 ] {72ad735f-12a5-4d12-8ed1-2560b42f447b} <Debug> executeQuery: (from 127.0.0.1:44150) CREATE DICTIONARY IF NOT EXISTS default.dict( key String, value String ) PRIMARY KEY "key" LAYOUT(COMPLEX_KEY_HASHED()) SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated')) LIFETIME(MIN 300 MAX 600);
2020.08.11 15:39:18.552699 [ 74 ] {72ad735f-12a5-4d12-8ed1-2560b42f447b} <Trace> ContextAccess (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 15:39:18.554396 [ 74 ] {72ad735f-12a5-4d12-8ed1-2560b42f447b} <Trace> ExternalDictionariesLoader: Loading config file '/var/lib/clickhouse/metadata/default/dict.sql.tmp'.
2020.08.11 15:39:18.554889 [ 74 ] {72ad735f-12a5-4d12-8ed1-2560b42f447b} <Trace> ExternalDictionariesLoader: Loading config file 'default.dict'.
2020.08.11 15:39:18.555614 [ 74 ] {72ad735f-12a5-4d12-8ed1-2560b42f447b} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 15:39:18.556148 [ 74 ] {} <Information> TCPHandler: Processed in 0.004406187 sec.
2020.08.11 15:39:18.562774 [ 74 ] {915aa51f-8160-4e0e-9dde-e433b2d017e3} <Debug> executeQuery: (from 127.0.0.1:44150) CREATE TABLE IF NOT EXISTS default.table ( site_id UInt32, stamp LowCardinality(Nullable(String)), md_ad_format String DEFAULT dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))) ) ENGINE MergeTree() ORDER BY tuple();
2020.08.11 15:39:18.564231 [ 74 ] {915aa51f-8160-4e0e-9dde-e433b2d017e3} <Trace> ContextAccess (default): Access granted: CREATE TABLE ON default.table
2020.08.11 15:39:18.565450 [ 74 ] {915aa51f-8160-4e0e-9dde-e433b2d017e3} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 15:39:18.567946 [ 74 ] {915aa51f-8160-4e0e-9dde-e433b2d017e3} <Debug> default.table: Loading data parts
2020.08.11 15:39:18.568742 [ 74 ] {915aa51f-8160-4e0e-9dde-e433b2d017e3} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 15:39:18.581051 [ 74 ] {915aa51f-8160-4e0e-9dde-e433b2d017e3} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 15:39:18.581550 [ 74 ] {} <Information> TCPHandler: Processed in 0.019801435 sec.
2020.08.11 15:39:18.581865 [ 74 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 15:39:18.587098 [ 47 ] {} <Trace> BaseDaemon: Received signal 15
2020.08.11 15:39:18.587489 [ 47 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 15:39:18.587837 [ 38 ] {} <Debug> Application: Received termination signal.
2020.08.11 15:39:18.588131 [ 38 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 15:39:19.270893 [ 38 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 15:39:19.271919 [ 38 ] {} <Information> Application: Closed connections.
2020.08.11 15:39:19.280355 [ 38 ] {} <Information> Application: Shutting down storages.
2020.08.11 15:39:19.281899 [ 50 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 4 entries to flush
2020.08.11 15:39:19.282659 [ 50 ] {} <Debug> SystemLog (system.query_log): Creating new table system.query_log for QueryLog
2020.08.11 15:39:19.310702 [ 50 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 15:39:19.322868 [ 50 ] {} <Debug> system.query_log: Loaded data parts (0 items)
2020.08.11 15:39:19.325870 [ 50 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:19.327433 [ 50 ] {} <Trace> system.query_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 15:39:19.328188 [ 50 ] {} <Trace> SystemLog (system.query_log): Flushed system log
2020.08.11 15:39:19.328538 [ 50 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 15:39:19.329656 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 2 entries to flush
2020.08.11 15:39:19.330287 [ 53 ] {} <Debug> SystemLog (system.query_thread_log): Creating new table system.query_thread_log for QueryThreadLog
2020.08.11 15:39:19.332169 [ 53 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 15:39:19.332803 [ 53 ] {} <Debug> system.query_thread_log: Loaded data parts (0 items)
2020.08.11 15:39:19.336351 [ 53 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:19.337627 [ 53 ] {} <Trace> system.query_thread_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 15:39:19.338354 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log
2020.08.11 15:39:19.338709 [ 53 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 15:39:19.340008 [ 52 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 15:39:19.340398 [ 52 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
2020.08.11 15:39:19.342148 [ 52 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 15:39:19.342787 [ 52 ] {} <Debug> system.trace_log: Loaded data parts (0 items)
2020.08.11 15:39:19.346713 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:19.347875 [ 52 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 15:39:19.348266 [ 52 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 15:39:19.348454 [ 52 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 15:39:19.629564 [ 49 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 2 entries to flush
2020.08.11 15:39:19.630337 [ 49 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 15:39:19.645373 [ 49 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 15:39:19.646912 [ 49 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 15:39:19.659024 [ 49 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:19.663518 [ 49 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 15:39:19.664860 [ 49 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:39:19.665309 [ 49 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 15:39:19.665896 [ 51 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 15:39:19.669619 [ 38 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2020.08.11 15:39:19.670176 [ 38 ] {} <Debug> Application: Shut down storages.
2020.08.11 15:39:19.671304 [ 38 ] {} <Debug> Application: Destroyed global context.
2020.08.11 15:39:19.672179 [ 38 ] {} <Information> Application: shutting down
2020.08.11 15:39:19.672521 [ 38 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 15:39:19.673785 [ 47 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 15:39:19.674078 [ 47 ] {} <Information> BaseDaemon: Stop SignalListener thread
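The run above (PID 38, starting with an empty `default` database) creates the dictionary and the table over the native protocol and is then stopped with SIGTERM. The two statements from the `executeQuery` lines at 15:39:18, reformatted here for readability:

```sql
CREATE DICTIONARY IF NOT EXISTS default.dict
(
    key String,
    value String
)
PRIMARY KEY "key"
LAYOUT(COMPLEX_KEY_HASHED())
SOURCE(FILE(path '/var/lib/clickhouse/user_files/dict.txt' format 'TabSeparated'))
LIFETIME(MIN 300 MAX 600);

CREATE TABLE IF NOT EXISTS default.table
(
    site_id UInt32,
    stamp LowCardinality(Nullable(String)),
    md_ad_format String DEFAULT dictGetString('default.dict', 'value', tuple(coalesce(stamp, '')))
)
ENGINE = MergeTree()
ORDER BY tuple();
```

The restart that follows (20.6.3.28, PID 1) attaches exactly these two objects from metadata.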
2020.08.11 15:39:19.741755 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2020.08.11 15:39:19.746097 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:39:19.791216 [ 1 ] {} <Information> : Starting ClickHouse 20.6.3.28 with revision 54436, build id: 98AA85C5BFA75A53, PID 1
2020.08.11 15:39:19.791730 [ 1 ] {} <Information> Application: starting up
2020.08.11 15:39:19.800411 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 15:39:19.800886 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 15:39:19.801549 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 15:39:19.801849 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 15:39:19.802590 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 15:39:19.803544 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'b02ef0b49d60' as replica host.
2020.08.11 15:39:19.804820 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 15:39:19.806834 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2020.08.11 15:39:19.808104 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2020.08.11 15:39:19.808609 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:39:19.809030 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:39:19.809235 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 1.75 GiB (1.95 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2020.08.11 15:39:19.809505 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 15:39:19.814788 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 4 tables and 0 dictionaries.
2020.08.11 15:39:19.816704 [ 115 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 15:39:19.819828 [ 115 ] {} <Debug> system.query_thread_log: Loading data parts
2020.08.11 15:39:19.819878 [ 114 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 15:39:19.819938 [ 117 ] {} <Debug> system.query_log: Loading data parts
2020.08.11 15:39:19.822254 [ 116 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 15:39:19.823554 [ 117 ] {} <Debug> system.query_log: Loaded data parts (1 items)
2020.08.11 15:39:19.837279 [ 115 ] {} <Debug> system.query_thread_log: Loaded data parts (1 items)
2020.08.11 15:39:19.837840 [ 114 ] {} <Debug> system.trace_log: Loaded data parts (1 items)
2020.08.11 15:39:19.838501 [ 116 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 15:39:19.839199 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 15:39:19.842221 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 15:39:19.843372 [ 135 ] {} <Debug> default.table: Loading data parts
2020.08.11 15:39:19.843785 [ 135 ] {} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 15:39:19.845445 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 15:39:19.846771 [ 1 ] {} <Trace> ExternalDictionariesLoader: Loading config file 'default.dict'.
2020.08.11 15:39:19.847889 [ 1 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2020.08.11 15:39:19.851173 [ 1 ] {} <Debug> Application: Loaded metadata.
2020.08.11 15:39:19.851576 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:39:19.852183 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 15:39:19.853914 [ 1 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:19.854917 [ 1 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:19.856006 [ 1 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:19.856885 [ 1 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.6.3.28 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:39:19.857632 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 15:39:19.858003 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 15:39:19.858315 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 15:39:19.859381 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.6.3.28 (official build))
2020.08.11 15:39:19.859866 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.6.3.28 (official build))
2020.08.11 15:39:19.860160 [ 1 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 15:39:20.056735 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 15:39:20.058141 [ 142 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:39:20.058321 [ 142 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:39:20.058420 [ 1 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 15:39:20.058836 [ 1 ] {} <Information> Application: Ready for connections.
2020.08.11 15:39:22.066923 [ 163 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2020.08.11 15:39:22.079608 [ 163 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/config.xml', performing update on configuration
2020.08.11 15:39:22.084905 [ 163 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/config.xml', performed update on configuration
2020.08.11 15:39:27.343328 [ 140 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 15:39:27.343348 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:39:27.343940 [ 140 ] {} <Debug> SystemLog (system.trace_log): Will use existing table system.trace_log for TraceLog
2020.08.11 15:39:27.345048 [ 137 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 15:39:27.345287 [ 140 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:27.347636 [ 140 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 15:39:27.348526 [ 140 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 15:39:27.350500 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:27.353782 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 15:39:27.354843 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:39:34.858485 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:39:34.862187 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:34.865359 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_3_3_0.
2020.08.11 15:39:34.866571 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:39:35.060339 [ 117 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:39:35.060855 [ 117 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:39:42.370335 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:39:42.374217 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:42.377244 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_3_3_0 to 202008_4_4_0.
2020.08.11 15:39:42.378241 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:39:49.879796 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:39:49.885808 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:49.890145 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_4_4_0 to 202008_5_5_0.
2020.08.11 15:39:49.892093 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:39:50.065098 [ 114 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:39:50.067356 [ 114 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:39:57.393381 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:39:57.399547 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:57.403279 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_5_5_0 to 202008_6_6_0.
2020.08.11 15:39:57.404283 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_1_0 to 202008_6_6_0
2020.08.11 15:39:57.404604 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:39:57.404645 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:39:57.405360 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_1_0 to 202008_6_6_0 into Compact
2020.08.11 15:39:57.406238 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:39:57.407312 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_1_1_0, total 2 rows starting from the beginning of the part
2020.08.11 15:39:57.410176 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_2_2_0, total 8 rows starting from the beginning of the part
2020.08.11 15:39:57.412797 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_3_3_0, total 8 rows starting from the beginning of the part
2020.08.11 15:39:57.415140 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_4_4_0, total 7 rows starting from the beginning of the part
2020.08.11 15:39:57.417457 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_5_5_0, total 8 rows starting from the beginning of the part
2020.08.11 15:39:57.419697 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_6_6_0, total 7 rows starting from the beginning of the part
2020.08.11 15:39:57.440269 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 40 rows, containing 241 columns (241 merged, 0 gathered) in 0.034926175 sec., 1145.2728505197035 rows/sec., 2.09 MiB/sec.
2020.08.11 15:39:57.444283 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_6_1 to 202008_1_6_1.
2020.08.11 15:39:57.445003 [ 132 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_1_0 to 202008_6_6_0
2020.08.11 15:39:57.445269 [ 132 ] {} <Debug> MemoryTracker: Peak memory usage: 4.07 MiB.
2020.08.11 15:40:04.855668 [ 140 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 15:40:04.856649 [ 140 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:04.858240 [ 140 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_3_3_0.
2020.08.11 15:40:04.858806 [ 140 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 15:40:04.906254 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:40:04.910317 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:04.913301 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_6_6_0 to 202008_7_7_0.
2020.08.11 15:40:04.914449 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:40:05.069230 [ 144 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:40:05.069960 [ 144 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:40:12.416340 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:40:12.424462 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:12.430468 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_7_7_0 to 202008_8_8_0.
2020.08.11 15:40:12.432262 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:40:19.841429 [ 141 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushing system log, 45 entries to flush
2020.08.11 15:40:19.842391 [ 141 ] {} <Debug> SystemLog (system.asynchronous_metric_log): Creating new table system.asynchronous_metric_log for AsynchronousMetricLog
2020.08.11 15:40:19.846727 [ 141 ] {} <Debug> system.asynchronous_metric_log: Loading data parts
2020.08.11 15:40:19.850420 [ 141 ] {} <Debug> system.asynchronous_metric_log: Loaded data parts (0 items)
2020.08.11 15:40:19.857491 [ 141 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:19.860753 [ 141 ] {} <Trace> system.asynchronous_metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 15:40:19.861587 [ 141 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushed system log
2020.08.11 15:40:19.935551 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:40:19.952173 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:19.961312 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_8_8_0 to 202008_9_9_0.
2020.08.11 15:40:19.963535 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:40:20.077376 [ 139 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:40:20.078972 [ 139 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:40:27.468599 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:40:27.474432 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:27.478969 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_9_9_0 to 202008_10_10_0.
2020.08.11 15:40:27.480194 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:40:34.983631 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:40:34.988957 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:34.991718 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_10_10_0 to 202008_11_11_0.
2020.08.11 15:40:34.993139 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:40:34.993154 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_6_1 to 202008_11_11_0
2020.08.11 15:40:34.993733 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:34.994040 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_6_1 to 202008_11_11_0 into Compact
2020.08.11 15:40:34.994674 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:40:34.995451 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_1_6_1, total 40 rows starting from the beginning of the part
2020.08.11 15:40:34.997268 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_7_7_0, total 8 rows starting from the beginning of the part
2020.08.11 15:40:34.999509 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_8_8_0, total 7 rows starting from the beginning of the part
2020.08.11 15:40:35.001363 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_9_9_0, total 8 rows starting from the beginning of the part
2020.08.11 15:40:35.003280 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_10_10_0, total 7 rows starting from the beginning of the part
2020.08.11 15:40:35.005052 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_11_11_0, total 8 rows starting from the beginning of the part
2020.08.11 15:40:35.023198 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 78 rows, containing 241 columns (241 merged, 0 gathered) in 0.02915135 sec., 2675.690834215225 rows/sec., 4.89 MiB/sec.
2020.08.11 15:40:35.026640 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_11_2 to 202008_1_11_2.
2020.08.11 15:40:35.027361 [ 132 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_6_1 to 202008_11_11_0
2020.08.11 15:40:35.027766 [ 132 ] {} <Debug> MemoryTracker: Peak memory usage: 4.06 MiB.
2020.08.11 15:40:35.081104 [ 135 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:40:35.081524 [ 135 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:40:42.362625 [ 140 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 1 entries to flush
2020.08.11 15:40:42.363821 [ 140 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:42.366882 [ 140 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_3_3_0 to 202008_4_4_0.
2020.08.11 15:40:42.367428 [ 140 ] {} <Trace> SystemLog (system.trace_log): Flushed system log
2020.08.11 15:40:42.493978 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:40:42.498185 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:42.501016 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_11_11_0 to 202008_12_12_0.
2020.08.11 15:40:42.501914 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:40:50.002519 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:40:50.007085 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:50.011877 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_12_12_0 to 202008_13_13_0.
2020.08.11 15:40:50.012993 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:40:50.086658 [ 115 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:40:50.087268 [ 115 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:40:57.514554 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:40:57.519203 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:40:57.522218 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_13_13_0 to 202008_14_14_0.
2020.08.11 15:40:57.523230 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:41:05.026397 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:41:05.037452 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:05.044932 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_14_14_0 to 202008_15_15_0.
2020.08.11 15:41:05.046698 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:41:05.088151 [ 143 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:41:05.089143 [ 143 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:41:12.550276 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:41:12.555134 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:12.559309 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_15_15_0 to 202008_16_16_0.
2020.08.11 15:41:12.560617 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:41:12.567598 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_11_2 to 202008_16_16_0
2020.08.11 15:41:12.568211 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:12.568489 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_11_2 to 202008_16_16_0 into Compact
2020.08.11 15:41:12.569004 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:41:12.570332 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_1_11_2, total 78 rows starting from the beginning of the part
2020.08.11 15:41:12.572555 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_12_12_0, total 7 rows starting from the beginning of the part
2020.08.11 15:41:12.576145 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_13_13_0, total 8 rows starting from the beginning of the part
2020.08.11 15:41:12.578958 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_14_14_0, total 7 rows starting from the beginning of the part
2020.08.11 15:41:12.581263 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_15_15_0, total 8 rows starting from the beginning of the part
2020.08.11 15:41:12.583195 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_16_16_0, total 7 rows starting from the beginning of the part
2020.08.11 15:41:12.603337 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 115 rows, containing 241 columns (241 merged, 0 gathered) in 0.034849386 sec., 3299.9146670762 rows/sec., 6.04 MiB/sec.
2020.08.11 15:41:12.606143 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_16_3 to 202008_1_16_3.
2020.08.11 15:41:12.607259 [ 132 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_11_2 to 202008_16_16_0
2020.08.11 15:41:12.607537 [ 132 ] {} <Debug> MemoryTracker: Peak memory usage: 4.06 MiB.
2020.08.11 15:41:19.872437 [ 141 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushing system log, 45 entries to flush
2020.08.11 15:41:19.873628 [ 141 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:19.877295 [ 141 ] {} <Trace> system.asynchronous_metric_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_2_2_0.
2020.08.11 15:41:19.878292 [ 141 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushed system log
2020.08.11 15:41:20.068437 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:41:20.081432 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:20.088281 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_16_16_0 to 202008_17_17_0.
2020.08.11 15:41:20.089744 [ 146 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:41:20.090768 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:41:20.090891 [ 146 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:41:27.591549 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:41:27.600382 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:27.606154 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_17_17_0 to 202008_18_18_0.
2020.08.11 15:41:27.609330 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:41:35.093499 [ 145 ] {} <Debug> DNSResolver: Updating DNS cache
2020.08.11 15:41:35.094199 [ 145 ] {} <Debug> DNSResolver: Updated DNS cache
2020.08.11 15:41:35.112959 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:41:35.117461 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:35.120653 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_18_18_0 to 202008_19_19_0.
2020.08.11 15:41:35.121695 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:41:42.626955 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush
2020.08.11 15:41:42.639140 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:42.644874 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_19_19_0 to 202008_20_20_0.
2020.08.11 15:41:42.646207 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:41:49.299953 [ 112 ] {} <Trace> BaseDaemon: Received signal 2
2020.08.11 15:41:49.300738 [ 112 ] {} <Information> Application: Received termination signal (Interrupt)
2020.08.11 15:41:49.301548 [ 1 ] {} <Debug> Application: Received termination signal.
2020.08.11 15:41:49.302556 [ 1 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 15:41:49.910432 [ 1 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 15:41:49.911417 [ 1 ] {} <Information> Application: Closed connections.
2020.08.11 15:41:49.919724 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 15:41:49.920732 [ 136 ] {} <Trace> SystemLog (system.query_log): Terminating
2020.08.11 15:41:49.922109 [ 134 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2020.08.11 15:41:49.923414 [ 140 ] {} <Trace> SystemLog (system.trace_log): Terminating
2020.08.11 15:41:49.988948 [ 112 ] {} <Trace> BaseDaemon: Received signal 2
2020.08.11 15:41:49.989261 [ 112 ] {} <Information> Application: Received termination signal (Interrupt)
2020.08.11 15:41:49.989524 [ 112 ] {} <Information> Application: Received second signal Interrupt. Immediately terminate.
2020.08.11 15:41:50.147227 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush
2020.08.11 15:41:50.151647 [ 137 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:50.154682 [ 137 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_20_20_0 to 202008_21_21_0.
2020.08.11 15:41:50.155586 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_16_3 to 202008_21_21_0
2020.08.11 15:41:50.155813 [ 137 ] {} <Trace> SystemLog (system.metric_log): Flushed system log
2020.08.11 15:41:50.155861 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:50.156612 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_16_3 to 202008_21_21_0 into Compact
2020.08.11 15:41:50.157298 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:41:50.158155 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_1_16_3, total 115 rows starting from the beginning of the part
2020.08.11 15:41:50.160076 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_17_17_0, total 8 rows starting from the beginning of the part
2020.08.11 15:41:50.162111 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_18_18_0, total 7 rows starting from the beginning of the part
2020.08.11 15:41:50.164334 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_19_19_0, total 8 rows starting from the beginning of the part
2020.08.11 15:41:50.166387 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_20_20_0, total 7 rows starting from the beginning of the part
2020.08.11 15:41:50.168272 [ 132 ] {} <Trace> MergeTreeSequentialSource: Reading 2 marks from part 202008_21_21_0, total 8 rows starting from the beginning of the part
2020.08.11 15:41:50.187174 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 153 rows, containing 241 columns (241 merged, 0 gathered) in 0.030561129 sec., 5006.359549086031 rows/sec., 9.16 MiB/sec.
2020.08.11 15:41:50.190122 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_21_4 to 202008_1_21_4.
2020.08.11 15:41:50.191077 [ 132 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_16_3 to 202008_21_21_0
2020.08.11 15:41:50.191354 [ 132 ] {} <Debug> MemoryTracker: Peak memory usage: 4.06 MiB.
2020.08.11 15:41:50.408540 [ 112 ] {} <Trace> BaseDaemon: Received signal 2
2020.08.11 15:41:50.409555 [ 112 ] {} <Information> Application: Received termination signal (Interrupt)
2020.08.11 15:41:50.410836 [ 112 ] {} <Information> Application: Received second signal Interrupt. Immediately terminate.
2020.08.11 15:41:50.841383 [ 137 ] {} <Trace> SystemLog (system.metric_log): Terminating
2020.08.11 15:41:50.842585 [ 141 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushing system log, 45 entries to flush
2020.08.11 15:41:50.844001 [ 141 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:41:50.848027 [ 141 ] {} <Trace> system.asynchronous_metric_log: Renaming temporary part tmp_insert_202008_3_3_0 to 202008_3_3_0.
2020.08.11 15:41:50.857685 [ 141 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushed system log
2020.08.11 15:41:50.858574 [ 141 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2020.08.11 15:41:50.860190 [ 1 ] {} <Trace> system.metric_log: Found 24 old parts to remove.
2020.08.11 15:41:50.860739 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_1_0
2020.08.11 15:41:50.862502 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_6_1
2020.08.11 15:41:50.864148 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_11_2
2020.08.11 15:41:50.865678 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_16_3
2020.08.11 15:41:50.866779 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_2_2_0
2020.08.11 15:41:50.867976 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_3_3_0
2020.08.11 15:41:50.868864 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_4_4_0
2020.08.11 15:41:50.869906 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_5_5_0
2020.08.11 15:41:50.870622 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_6_6_0
2020.08.11 15:41:50.871471 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_7_7_0
2020.08.11 15:41:50.872533 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_8_8_0
2020.08.11 15:41:50.873495 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_9_9_0
2020.08.11 15:41:50.875549 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_10_10_0
2020.08.11 15:41:50.876837 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_11_11_0
2020.08.11 15:41:50.877882 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_12_12_0
2020.08.11 15:41:50.878682 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_13_13_0
2020.08.11 15:41:50.879413 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_14_14_0
2020.08.11 15:41:50.880267 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_15_15_0
2020.08.11 15:41:50.880911 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_16_16_0
2020.08.11 15:41:50.881598 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_17_17_0
2020.08.11 15:41:50.882207 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_18_18_0
2020.08.11 15:41:50.882907 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_19_19_0
2020.08.11 15:41:50.883812 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_20_20_0
2020.08.11 15:41:50.884572 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_21_21_0
2020.08.11 15:41:50.890555 [ 1 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2020.08.11 15:41:50.891729 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 15:41:50.897274 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 15:41:50.897673 [ 1 ] {} <Information> Application: shutting down
2020.08.11 15:41:50.898345 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 15:41:50.898731 [ 112 ] {} <Trace> BaseDaemon: Received signal -2
2020.08.11 15:41:50.899141 [ 112 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 15:42:12.601068 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:42:12.606609 [ 39 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 15:42:12.607590 [ 39 ] {} <Information> Application: starting up
2020.08.11 15:42:12.616757 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 15:42:12.617247 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 15:42:12.617648 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 15:42:12.618035 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 15:42:12.619014 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '23c78c046e71' as replica host.
2020.08.11 15:42:12.623744 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 15:42:12.627147 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:42:12.627835 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:42:12.628185 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 15:42:12.631346 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 15:42:12.631740 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 15:42:12.632172 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 15:42:12.632568 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:42:12.633273 [ 39 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 15:42:12.637046 [ 39 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 15:42:12.637427 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 15:42:12.675908 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:12.677752 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:12.679218 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:12.680714 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:12.681716 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 15:42:12.682244 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 15:42:12.682828 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 15:42:12.684432 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 15:42:12.685530 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 15:42:12.685948 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 15:42:12.859521 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 15:42:12.862441 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 15:42:12.862833 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 15:42:13.577255 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41702, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 15:42:13.617916 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44156
2020.08.11 15:42:13.618347 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.3.0, revision: 54433, user: default.
2020.08.11 15:42:13.625813 [ 74 ] {ceebf29b-4974-4546-bc56-a8862af0b11a} <Debug> executeQuery: (from 127.0.0.1:44156) CREATE DICTIONARY IF NOT EXISTS default.dict (`key` String, `value` String) PRIMARY KEY key SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 15:42:13.626296 [ 74 ] {ceebf29b-4974-4546-bc56-a8862af0b11a} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 15:42:13.626559 [ 74 ] {ceebf29b-4974-4546-bc56-a8862af0b11a} <Trace> AccessRightsContext (default): Settings: readonly=0, allow_ddl=1, allow_introspection_functions=0
2020.08.11 15:42:13.626847 [ 74 ] {ceebf29b-4974-4546-bc56-a8862af0b11a} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 15:42:13.627189 [ 74 ] {ceebf29b-4974-4546-bc56-a8862af0b11a} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 15:42:13.630033 [ 74 ] {ceebf29b-4974-4546-bc56-a8862af0b11a} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 15:42:13.630403 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 15:42:13.630673 [ 74 ] {} <Information> TCPHandler: Processed in 0.005 sec.
2020.08.11 15:42:13.632728 [ 74 ] {92cade87-612b-47cb-ad65-eff2eb9cfafa} <Debug> executeQuery: (from 127.0.0.1:44156) CREATE TABLE IF NOT EXISTS default.table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGetString('default.dict', 'value', tuple(coalesce(stamp, '')))) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 15:42:13.633132 [ 74 ] {92cade87-612b-47cb-ad65-eff2eb9cfafa} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.table
2020.08.11 15:42:13.634585 [ 74 ] {92cade87-612b-47cb-ad65-eff2eb9cfafa} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 15:42:13.638897 [ 74 ] {92cade87-612b-47cb-ad65-eff2eb9cfafa} <Debug> default.table: Loading data parts
2020.08.11 15:42:13.639812 [ 74 ] {92cade87-612b-47cb-ad65-eff2eb9cfafa} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 15:42:13.642816 [ 74 ] {92cade87-612b-47cb-ad65-eff2eb9cfafa} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 15:42:13.643379 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 15:42:13.643754 [ 74 ] {} <Information> TCPHandler: Processed in 0.011 sec.
2020.08.11 15:42:13.644252 [ 74 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 15:42:13.650209 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 15:42:13.650737 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 15:42:13.651240 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 15:42:14.377929 [ 39 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 15:42:14.378363 [ 39 ] {} <Information> Application: Closed connections.
2020.08.11 15:42:14.380712 [ 39 ] {} <Information> Application: Shutting down storages.
2020.08.11 15:42:14.631855 [ 51 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:42:14.632515 [ 51 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 15:42:14.641617 [ 51 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 15:42:14.642271 [ 51 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 15:42:14.649438 [ 51 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:42:14.716327 [ 51 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 15:42:14.719614 [ 39 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 15:42:14.720076 [ 39 ] {} <Debug> Application: Shut down storages.
2020.08.11 15:42:14.721605 [ 39 ] {} <Debug> Application: Destroyed global context.
2020.08.11 15:42:14.722410 [ 39 ] {} <Information> Application: shutting down
2020.08.11 15:42:14.722638 [ 39 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 15:42:14.723181 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 15:42:14.772248 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:42:14.777285 [ 1 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 15:42:14.777766 [ 1 ] {} <Information> Application: starting up
2020.08.11 15:42:14.783493 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 15:42:14.783798 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 15:42:14.783889 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 15:42:14.784016 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 15:42:14.784971 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '23c78c046e71' as replica host.
2020.08.11 15:42:14.788194 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 15:42:14.791861 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:42:14.792477 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:42:14.792831 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 15:42:14.795844 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 1 tables and 0 dictionaries.
2020.08.11 15:42:14.799116 [ 113 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 15:42:14.801804 [ 113 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 15:42:14.810399 [ 113 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 15:42:14.811177 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 15:42:14.814682 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 15:42:14.815559 [ 135 ] {} <Debug> default.table: Loading data parts
2020.08.11 15:42:14.816071 [ 135 ] {} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 15:42:14.816442 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 15:42:14.817927 [ 1 ] {} <Debug> Application: Loaded metadata.
2020.08.11 15:42:14.818219 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:42:14.818575 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 15:42:14.821064 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 15:42:14.821384 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 15:42:14.823513 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:14.825088 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:14.826332 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:14.827508 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:42:14.828566 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 15:42:14.829004 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 15:42:14.829784 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 15:42:14.830898 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 15:42:14.831372 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 15:42:14.831625 [ 1 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 15:42:14.964999 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 15:42:14.966362 [ 1 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 15:42:14.966685 [ 1 ] {} <Information> Application: Ready for connections.
2020.08.11 15:42:16.977964 [ 163 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2020.08.11 15:42:22.313674 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:42:22.315749 [ 133 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 15:42:22.327897 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:42:22.382852 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 15:42:29.884324 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:42:29.889087 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:42:29.914214 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_3_3_0.
2020.08.11 15:42:37.424477 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:42:37.432107 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:42:37.466543 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_3_3_0 to 202008_4_4_0.
2020.08.11 15:42:44.970662 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:42:44.986631 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:42:45.019096 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_4_4_0 to 202008_5_5_0.
2020.08.11 15:42:52.520681 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:42:52.525715 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:42:52.555951 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_5_5_0 to 202008_6_6_0.
2020.08.11 15:42:52.557416 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_1_0 to 202008_6_6_0
2020.08.11 15:42:52.560966 [ 127 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:42:52.561307 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_1_0 to 202008_6_6_0 into tmp_merge_202008_1_6_1 with type Wide
2020.08.11 15:42:52.561900 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:42:52.562333 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_1_0, total 2 rows starting from the beginning of the part
2020.08.11 15:42:52.574688 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_2_2_0, total 8 rows starting from the beginning of the part
2020.08.11 15:42:52.585790 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_3_3_0, total 8 rows starting from the beginning of the part
2020.08.11 15:42:52.612128 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_4_4_0, total 7 rows starting from the beginning of the part
2020.08.11 15:42:52.625658 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_5_5_0, total 8 rows starting from the beginning of the part
2020.08.11 15:42:52.648149 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_6_6_0, total 7 rows starting from the beginning of the part
2020.08.11 15:42:52.726174 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 40 rows, containing 211 columns (211 merged, 0 gathered) in 0.16 sec., 242.60 rows/sec., 0.41 MB/sec.
2020.08.11 15:42:52.738028 [ 127 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_6_1 to 202008_1_6_1.
2020.08.11 15:42:52.741662 [ 127 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_1_0 to 202008_6_6_0
2020.08.11 15:43:00.066913 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:43:00.077322 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:43:00.115546 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_6_6_0 to 202008_7_7_0.
2020.08.11 15:43:07.617017 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:43:07.630035 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:43:07.662845 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_7_7_0 to 202008_8_8_0.
2020.08.11 15:43:15.171952 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:43:15.180012 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:43:15.207217 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_8_8_0 to 202008_9_9_0.
2020.08.11 15:43:22.711555 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:43:22.725550 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:43:22.766651 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_9_9_0 to 202008_10_10_0.
2020.08.11 15:43:30.271826 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:43:30.283006 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:43:30.316442 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_10_10_0 to 202008_11_11_0.
2020.08.11 15:43:30.317638 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_6_1 to 202008_11_11_0
2020.08.11 15:43:30.320305 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 15:43:30.320628 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_6_1 to 202008_11_11_0 into tmp_merge_202008_1_11_2 with type Wide
2020.08.11 15:43:30.322505 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:43:30.323066 [ 132 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_6_1, total 40 rows starting from the beginning of the part
2020.08.11 15:43:30.333534 [ 132 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_7_7_0, total 8 rows starting from the beginning of the part
2020.08.11 15:43:30.344426 [ 132 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_8_8_0, total 7 rows starting from the beginning of the part
2020.08.11 15:43:30.354437 [ 132 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_9_9_0, total 8 rows starting from the beginning of the part
2020.08.11 15:43:30.364872 [ 132 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_10_10_0, total 7 rows starting from the beginning of the part
2020.08.11 15:43:30.376580 [ 132 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_11_11_0, total 8 rows starting from the beginning of the part
2020.08.11 15:43:30.444771 [ 132 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 78 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 628.30 rows/sec., 1.05 MB/sec.
2020.08.11 15:43:30.457955 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_11_2 to 202008_1_11_2.
2020.08.11 15:43:30.461038 [ 132 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_6_1 to 202008_11_11_0
2020.08.11 15:43:37.818090 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:43:37.827708 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 15:43:37.873970 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_11_11_0 to 202008_12_12_0.
2020.08.11 15:43:45.375842 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:43:45.380803 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 15:43:45.405306 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_12_12_0 to 202008_13_13_0.
2020.08.11 15:43:52.910227 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:43:52.919320 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 15:43:52.954686 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_13_13_0 to 202008_14_14_0.
2020.08.11 15:44:00.460042 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:44:00.474988 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 15:44:00.507326 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_14_14_0 to 202008_15_15_0.
2020.08.11 15:44:08.009518 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:44:08.020121 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 15:44:08.052856 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_15_15_0 to 202008_16_16_0.
2020.08.11 15:44:08.053971 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_11_2 to 202008_16_16_0
2020.08.11 15:44:08.057509 [ 128 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 15:44:08.057850 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_11_2 to 202008_16_16_0 into tmp_merge_202008_1_16_3 with type Wide
2020.08.11 15:44:08.058377 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:44:08.058927 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_11_2, total 78 rows starting from the beginning of the part
2020.08.11 15:44:08.068969 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_12_12_0, total 8 rows starting from the beginning of the part
2020.08.11 15:44:08.079068 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_13_13_0, total 7 rows starting from the beginning of the part
2020.08.11 15:44:08.090441 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_14_14_0, total 8 rows starting from the beginning of the part
2020.08.11 15:44:08.101886 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_15_15_0, total 7 rows starting from the beginning of the part
2020.08.11 15:44:08.112548 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_16_16_0, total 8 rows starting from the beginning of the part
2020.08.11 15:44:08.189829 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 116 rows, containing 211 columns (211 merged, 0 gathered) in 0.13 sec., 878.92 rows/sec., 1.47 MB/sec.
2020.08.11 15:44:08.202945 [ 128 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_16_3 to 202008_1_16_3.
2020.08.11 15:44:08.206567 [ 128 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_11_2 to 202008_16_16_0
2020.08.11 15:44:15.559297 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:44:15.569586 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 15:44:15.601765 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_16_16_0 to 202008_17_17_0.
2020.08.11 15:44:23.103511 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:44:23.108961 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 15:44:23.133729 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_17_17_0 to 202008_18_18_0.
2020.08.11 15:44:30.639759 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:44:30.648372 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 15:44:30.682397 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_18_18_0 to 202008_19_19_0.
2020.08.11 15:44:38.185723 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:44:38.190828 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 15:44:38.217193 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_19_19_0 to 202008_20_20_0.
2020.08.11 15:44:45.719546 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:44:45.736983 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 15:44:45.766549 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_20_20_0 to 202008_21_21_0.
2020.08.11 15:44:45.767711 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_16_3 to 202008_21_21_0
2020.08.11 15:44:45.770739 [ 125 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 15:44:45.771147 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_16_3 to 202008_21_21_0 into tmp_merge_202008_1_21_4 with type Wide
2020.08.11 15:44:45.771690 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:44:45.772270 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_16_3, total 116 rows starting from the beginning of the part
2020.08.11 15:44:45.782462 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_17_17_0, total 7 rows starting from the beginning of the part
2020.08.11 15:44:45.792925 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_18_18_0, total 8 rows starting from the beginning of the part
2020.08.11 15:44:45.803232 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_19_19_0, total 7 rows starting from the beginning of the part
2020.08.11 15:44:45.813430 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_20_20_0, total 8 rows starting from the beginning of the part
2020.08.11 15:44:45.831544 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_21_21_0, total 7 rows starting from the beginning of the part
2020.08.11 15:44:45.902674 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 153 rows, containing 211 columns (211 merged, 0 gathered) in 0.13 sec., 1163.14 rows/sec., 1.95 MB/sec.
2020.08.11 15:44:45.924700 [ 125 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_21_4 to 202008_1_21_4.
2020.08.11 15:44:45.927566 [ 125 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_16_3 to 202008_21_21_0
2020.08.11 15:44:53.269942 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:44:53.274529 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 15:44:53.302186 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_21_21_0 to 202008_22_22_0.
2020.08.11 15:45:00.810880 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:45:00.818721 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 15:45:00.848991 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_22_22_0 to 202008_23_23_0.
2020.08.11 15:45:08.352404 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:45:08.358429 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 15:45:08.384947 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_23_23_0 to 202008_24_24_0.
2020.08.11 15:45:15.889379 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:45:15.899375 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 15:45:15.932767 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_24_24_0 to 202008_25_25_0.
2020.08.11 15:45:23.437441 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:45:23.444389 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 15:45:23.474096 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_25_25_0 to 202008_26_26_0.
2020.08.11 15:45:23.474925 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_21_4 to 202008_26_26_0
2020.08.11 15:45:23.478260 [ 128 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 15:45:23.478621 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_21_4 to 202008_26_26_0 into tmp_merge_202008_1_26_5 with type Wide
2020.08.11 15:45:23.479412 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:45:23.479900 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_21_4, total 153 rows starting from the beginning of the part
2020.08.11 15:45:23.490285 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_22_22_0, total 8 rows starting from the beginning of the part
2020.08.11 15:45:23.501215 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_23_23_0, total 7 rows starting from the beginning of the part
2020.08.11 15:45:23.511375 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_24_24_0, total 8 rows starting from the beginning of the part
2020.08.11 15:45:23.521912 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_25_25_0, total 8 rows starting from the beginning of the part
2020.08.11 15:45:23.531696 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_26_26_0, total 7 rows starting from the beginning of the part
2020.08.11 15:45:23.600736 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 191 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 1563.96 rows/sec., 2.62 MB/sec.
2020.08.11 15:45:23.612667 [ 128 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_26_5 to 202008_1_26_5.
2020.08.11 15:45:23.616094 [ 128 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_21_4 to 202008_26_26_0
2020.08.11 15:45:30.977673 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:45:30.982430 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 15:45:31.006081 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_26_26_0 to 202008_27_27_0.
2020.08.11 15:45:38.508606 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:45:38.519272 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 15:45:38.551520 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_27_27_0 to 202008_28_28_0.
2020.08.11 15:45:46.059081 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:45:46.073311 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 15:45:46.109348 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_28_28_0 to 202008_29_29_0.
2020.08.11 15:45:53.615493 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:45:53.623652 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 15:45:53.654826 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_29_29_0 to 202008_30_30_0.
2020.08.11 15:46:01.159736 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:46:01.165831 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 15:46:01.192448 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_30_30_0 to 202008_31_31_0.
2020.08.11 15:46:01.194823 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_26_5 to 202008_31_31_0
2020.08.11 15:46:01.197949 [ 125 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 15:46:01.198271 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_26_5 to 202008_31_31_0 into tmp_merge_202008_1_31_6 with type Wide
2020.08.11 15:46:01.198785 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:46:01.199269 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_26_5, total 191 rows starting from the beginning of the part
2020.08.11 15:46:01.208918 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_27_27_0, total 8 rows starting from the beginning of the part
2020.08.11 15:46:01.219822 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_28_28_0, total 7 rows starting from the beginning of the part
2020.08.11 15:46:01.229551 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_29_29_0, total 8 rows starting from the beginning of the part
2020.08.11 15:46:01.240348 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_30_30_0, total 7 rows starting from the beginning of the part
2020.08.11 15:46:01.250712 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_31_31_0, total 8 rows starting from the beginning of the part
2020.08.11 15:46:01.319472 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 229 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 1889.43 rows/sec., 3.17 MB/sec.
2020.08.11 15:46:01.331127 [ 125 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_31_6 to 202008_1_31_6.
2020.08.11 15:46:01.334226 [ 125 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_26_5 to 202008_31_31_0
2020.08.11 15:46:08.698185 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:46:08.701947 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 15:46:08.730331 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_31_31_0 to 202008_32_32_0.
2020.08.11 15:46:16.236126 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:46:16.249332 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 15:46:16.279122 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_32_32_0 to 202008_33_33_0.
2020.08.11 15:46:23.781697 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:46:23.786256 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 15:46:23.810668 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_33_33_0 to 202008_34_34_0.
2020.08.11 15:46:31.317061 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:46:31.329010 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 15:46:31.369165 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_34_34_0 to 202008_35_35_0.
2020.08.11 15:46:38.870783 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:46:38.888125 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 15:46:38.911870 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_35_35_0 to 202008_36_36_0.
2020.08.11 15:46:38.913264 [ 119 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_31_6 to 202008_36_36_0
2020.08.11 15:46:38.916234 [ 119 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 15:46:38.916618 [ 119 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_31_6 to 202008_36_36_0 into tmp_merge_202008_1_36_7 with type Wide
2020.08.11 15:46:38.917355 [ 119 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:46:38.917645 [ 119 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_31_6, total 229 rows starting from the beginning of the part
2020.08.11 15:46:38.927604 [ 119 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_32_32_0, total 7 rows starting from the beginning of the part
2020.08.11 15:46:38.937495 [ 119 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_33_33_0, total 8 rows starting from the beginning of the part
2020.08.11 15:46:38.949578 [ 119 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_34_34_0, total 7 rows starting from the beginning of the part
2020.08.11 15:46:38.959428 [ 119 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_35_35_0, total 8 rows starting from the beginning of the part
2020.08.11 15:46:38.969701 [ 119 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_36_36_0, total 8 rows starting from the beginning of the part
2020.08.11 15:46:39.038971 [ 119 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 267 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 2182.11 rows/sec., 3.66 MB/sec.
2020.08.11 15:46:39.051072 [ 119 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_36_7 to 202008_1_36_7.
2020.08.11 15:46:39.053839 [ 119 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_31_6 to 202008_36_36_0
2020.08.11 15:46:46.421243 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:46:46.428921 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 15:46:46.461550 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_36_36_0 to 202008_37_37_0.
2020.08.11 15:46:53.964197 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:46:53.975307 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 15:46:54.007041 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_37_37_0 to 202008_38_38_0.
2020.08.11 15:47:01.512000 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:47:01.522913 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 15:47:01.556483 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_38_38_0 to 202008_39_39_0.
2020.08.11 15:47:09.066467 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:47:09.072710 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 15:47:09.098363 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_39_39_0 to 202008_40_40_0.
2020.08.11 15:47:16.602008 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:47:16.618509 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 15:47:16.649389 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_40_40_0 to 202008_41_41_0.
2020.08.11 15:47:16.650398 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_36_7 to 202008_41_41_0
2020.08.11 15:47:16.653159 [ 127 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 15:47:16.653469 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_36_7 to 202008_41_41_0 into tmp_merge_202008_1_41_8 with type Wide
2020.08.11 15:47:16.653954 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:47:16.654295 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_36_7, total 267 rows starting from the beginning of the part
2020.08.11 15:47:16.664875 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_37_37_0, total 7 rows starting from the beginning of the part
2020.08.11 15:47:16.674410 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_38_38_0, total 8 rows starting from the beginning of the part
2020.08.11 15:47:16.685043 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_39_39_0, total 7 rows starting from the beginning of the part
2020.08.11 15:47:16.695449 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_40_40_0, total 8 rows starting from the beginning of the part
2020.08.11 15:47:16.705284 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_41_41_0, total 7 rows starting from the beginning of the part
2020.08.11 15:47:16.784209 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 304 rows, containing 211 columns (211 merged, 0 gathered) in 0.13 sec., 2325.32 rows/sec., 3.90 MB/sec.
2020.08.11 15:47:16.796418 [ 127 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_41_8 to 202008_1_41_8.
2020.08.11 15:47:16.799483 [ 127 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_36_7 to 202008_41_41_0
2020.08.11 15:47:24.153798 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:47:24.157634 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 15:47:24.181588 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_41_41_0 to 202008_42_42_0.
2020.08.11 15:47:31.687472 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:47:31.705972 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 15:47:31.742822 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_42_42_0 to 202008_43_43_0.
2020.08.11 15:47:39.249024 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:47:39.255852 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 15:47:39.287498 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_43_43_0 to 202008_44_44_0.
2020.08.11 15:47:46.792436 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:47:46.799053 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 15:47:46.827208 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_44_44_0 to 202008_45_45_0.
2020.08.11 15:47:54.333472 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:47:54.349006 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 15:47:54.380526 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_45_45_0 to 202008_46_46_0.
2020.08.11 15:47:54.381523 [ 129 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_41_8 to 202008_46_46_0
2020.08.11 15:47:54.384351 [ 129 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 15:47:54.384692 [ 129 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_41_8 to 202008_46_46_0 into tmp_merge_202008_1_46_9 with type Wide
2020.08.11 15:47:54.385249 [ 129 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:47:54.385604 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_41_8, total 304 rows starting from the beginning of the part
2020.08.11 15:47:54.395854 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_42_42_0, total 8 rows starting from the beginning of the part
2020.08.11 15:47:54.405712 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_43_43_0, total 7 rows starting from the beginning of the part
2020.08.11 15:47:54.416290 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_44_44_0, total 8 rows starting from the beginning of the part
2020.08.11 15:47:54.426543 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_45_45_0, total 7 rows starting from the beginning of the part
2020.08.11 15:47:54.436730 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_46_46_0, total 8 rows starting from the beginning of the part
2020.08.11 15:47:54.507533 [ 129 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 342 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 2784.25 rows/sec., 4.67 MB/sec.
2020.08.11 15:47:54.519743 [ 129 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_46_9 to 202008_1_46_9.
2020.08.11 15:47:54.523058 [ 129 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_41_8 to 202008_46_46_0
2020.08.11 15:48:01.885736 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:48:01.894322 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 15:48:01.925473 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_46_46_0 to 202008_47_47_0.
2020.08.11 15:48:09.428338 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:48:09.444143 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 15:48:09.475442 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_47_47_0 to 202008_48_48_0.
2020.08.11 15:48:16.977478 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:48:16.989088 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 15:48:17.021384 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_48_48_0 to 202008_49_49_0.
2020.08.11 15:48:24.527985 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:48:24.535827 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 15:48:24.571678 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_49_49_0 to 202008_50_50_0.
2020.08.11 15:48:32.073539 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:48:32.078576 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 15:48:32.102484 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_50_50_0 to 202008_51_51_0.
2020.08.11 15:48:32.103739 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_46_9 to 202008_51_51_0
2020.08.11 15:48:32.107175 [ 121 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 15:48:32.107691 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_46_9 to 202008_51_51_0 into tmp_merge_202008_1_51_10 with type Wide
2020.08.11 15:48:32.108471 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:48:32.108815 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_46_9, total 342 rows starting from the beginning of the part
2020.08.11 15:48:32.120575 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_47_47_0, total 8 rows starting from the beginning of the part
2020.08.11 15:48:32.131543 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_48_48_0, total 7 rows starting from the beginning of the part
2020.08.11 15:48:32.142529 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_49_49_0, total 8 rows starting from the beginning of the part
2020.08.11 15:48:32.152281 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_50_50_0, total 7 rows starting from the beginning of the part
2020.08.11 15:48:32.162723 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_51_51_0, total 8 rows starting from the beginning of the part
2020.08.11 15:48:32.232010 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 380 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 3056.32 rows/sec., 5.13 MB/sec.
2020.08.11 15:48:32.244260 [ 121 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_51_10 to 202008_1_51_10.
2020.08.11 15:48:32.247230 [ 121 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_46_9 to 202008_51_51_0
2020.08.11 15:48:39.611421 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:48:39.621751 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 15:48:39.650595 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_51_51_0 to 202008_52_52_0.
2020.08.11 15:48:47.153128 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:48:47.163833 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 15:48:47.198637 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_52_52_0 to 202008_53_53_0.
2020.08.11 15:48:54.701441 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:48:54.717804 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 15:48:54.752083 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_53_53_0 to 202008_54_54_0.
2020.08.11 15:49:02.253595 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:49:02.258877 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 15:49:02.282353 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_54_54_0 to 202008_55_55_0.
2020.08.11 15:49:09.789061 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:49:09.795049 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 15:49:09.820205 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_55_55_0 to 202008_56_56_0.
2020.08.11 15:49:09.821312 [ 114 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_51_10 to 202008_56_56_0
2020.08.11 15:49:09.824284 [ 114 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 15:49:09.824602 [ 114 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_51_10 to 202008_56_56_0 into tmp_merge_202008_1_56_11 with type Wide
2020.08.11 15:49:09.825542 [ 114 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:49:09.826015 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_51_10, total 380 rows starting from the beginning of the part
2020.08.11 15:49:09.840827 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_52_52_0, total 7 rows starting from the beginning of the part
2020.08.11 15:49:09.850619 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_53_53_0, total 8 rows starting from the beginning of the part
2020.08.11 15:49:09.861854 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_54_54_0, total 7 rows starting from the beginning of the part
2020.08.11 15:49:09.887080 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_55_55_0, total 8 rows starting from the beginning of the part
2020.08.11 15:49:09.897688 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_56_56_0, total 7 rows starting from the beginning of the part
2020.08.11 15:49:09.978106 [ 114 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 417 rows, containing 211 columns (211 merged, 0 gathered) in 0.15 sec., 2716.51 rows/sec., 4.56 MB/sec.
2020.08.11 15:49:09.990664 [ 114 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_56_11 to 202008_1_56_11.
2020.08.11 15:49:09.993789 [ 114 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_51_10 to 202008_56_56_0
2020.08.11 15:49:17.322037 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:49:17.333936 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 15:49:17.361285 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_56_56_0 to 202008_57_57_0.
2020.08.11 15:49:24.864826 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:49:24.870618 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 15:49:24.895754 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_57_57_0 to 202008_58_58_0.
2020.08.11 15:49:32.397523 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:49:32.407676 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 15:49:32.442884 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_58_58_0 to 202008_59_59_0.
2020.08.11 15:49:39.945262 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:49:39.955831 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 15:49:39.986808 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_59_59_0 to 202008_60_60_0.
2020.08.11 15:49:47.490650 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:49:47.501028 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:49:47.531562 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_60_60_0 to 202008_61_61_0.
2020.08.11 15:49:47.532840 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_56_11 to 202008_61_61_0
2020.08.11 15:49:47.535987 [ 121 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:49:47.536342 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_56_11 to 202008_61_61_0 into tmp_merge_202008_1_61_12 with type Wide
2020.08.11 15:49:47.536801 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:49:47.537258 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_56_11, total 417 rows starting from the beginning of the part
2020.08.11 15:49:47.546943 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_57_57_0, total 8 rows starting from the beginning of the part
2020.08.11 15:49:47.557751 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_58_58_0, total 8 rows starting from the beginning of the part
2020.08.11 15:49:47.567569 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_59_59_0, total 7 rows starting from the beginning of the part
2020.08.11 15:49:47.577608 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_60_60_0, total 8 rows starting from the beginning of the part
2020.08.11 15:49:47.588564 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_61_61_0, total 7 rows starting from the beginning of the part
2020.08.11 15:49:47.662388 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 455 rows, containing 211 columns (211 merged, 0 gathered) in 0.13 sec., 3609.78 rows/sec., 6.06 MB/sec.
2020.08.11 15:49:47.674376 [ 121 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_61_12 to 202008_1_61_12.
2020.08.11 15:49:47.677439 [ 121 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_56_11 to 202008_61_61_0
2020.08.11 15:49:55.034450 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:49:55.038885 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:49:55.064047 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_61_61_0 to 202008_62_62_0.
2020.08.11 15:50:02.566654 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:50:02.573007 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:50:02.602120 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_62_62_0 to 202008_63_63_0.
2020.08.11 15:50:10.107758 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:50:10.121345 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:50:10.151621 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_63_63_0 to 202008_64_64_0.
2020.08.11 15:50:17.655518 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:50:17.662838 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:50:17.691823 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_64_64_0 to 202008_65_65_0.
2020.08.11 15:50:25.195854 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:50:25.199740 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:50:25.223478 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_65_65_0 to 202008_66_66_0.
2020.08.11 15:50:25.224460 [ 129 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_61_12 to 202008_66_66_0
2020.08.11 15:50:25.227723 [ 129 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:50:25.228075 [ 129 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_61_12 to 202008_66_66_0 into tmp_merge_202008_1_66_13 with type Wide
2020.08.11 15:50:25.228594 [ 129 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:50:25.228957 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_61_12, total 455 rows starting from the beginning of the part
2020.08.11 15:50:25.239251 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_62_62_0, total 8 rows starting from the beginning of the part
2020.08.11 15:50:25.249354 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_63_63_0, total 7 rows starting from the beginning of the part
2020.08.11 15:50:25.259815 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_64_64_0, total 8 rows starting from the beginning of the part
2020.08.11 15:50:25.270191 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_65_65_0, total 7 rows starting from the beginning of the part
2020.08.11 15:50:25.279808 [ 129 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_66_66_0, total 8 rows starting from the beginning of the part
2020.08.11 15:50:25.350930 [ 129 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 493 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 4013.01 rows/sec., 6.73 MB/sec.
2020.08.11 15:50:25.366901 [ 129 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_66_13 to 202008_1_66_13.
2020.08.11 15:50:25.369864 [ 129 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_61_12 to 202008_66_66_0
2020.08.11 15:50:32.725437 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:50:32.734135 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:50:32.769385 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_66_66_0 to 202008_67_67_0.
2020.08.11 15:50:40.271422 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:50:40.277688 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:50:40.307403 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_67_67_0 to 202008_68_68_0.
2020.08.11 15:50:47.810820 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:50:47.818096 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:50:47.844725 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_68_68_0 to 202008_69_69_0.
2020.08.11 15:50:54.855793 [ 118 ] {} <Trace> system.metric_log: Found 6 old parts to remove.
2020.08.11 15:50:54.856297 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_1_0
2020.08.11 15:50:54.888812 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_2_2_0
2020.08.11 15:50:54.903414 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_3_3_0
2020.08.11 15:50:54.916297 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_4_4_0
2020.08.11 15:50:54.927846 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_5_5_0
2020.08.11 15:50:54.939792 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_6_6_0
2020.08.11 15:50:55.346665 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:50:55.357478 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:50:55.401312 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_69_69_0 to 202008_70_70_0.
2020.08.11 15:51:02.906050 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:51:02.916779 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:02.949238 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_70_70_0 to 202008_71_71_0.
2020.08.11 15:51:02.950194 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_66_13 to 202008_71_71_0
2020.08.11 15:51:02.952880 [ 126 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:02.953213 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_66_13 to 202008_71_71_0 into tmp_merge_202008_1_71_14 with type Wide
2020.08.11 15:51:02.953764 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:51:02.954241 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_66_13, total 493 rows starting from the beginning of the part
2020.08.11 15:51:02.967726 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_67_67_0, total 7 rows starting from the beginning of the part
2020.08.11 15:51:02.978503 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_68_68_0, total 8 rows starting from the beginning of the part
2020.08.11 15:51:02.995487 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_69_69_0, total 7 rows starting from the beginning of the part
2020.08.11 15:51:03.008836 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_70_70_0, total 8 rows starting from the beginning of the part
2020.08.11 15:51:03.021658 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_71_71_0, total 8 rows starting from the beginning of the part
2020.08.11 15:51:03.108836 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 531 rows, containing 211 columns (211 merged, 0 gathered) in 0.16 sec., 3412.11 rows/sec., 5.73 MB/sec.
2020.08.11 15:51:03.121862 [ 126 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_71_14 to 202008_1_71_14.
2020.08.11 15:51:03.124979 [ 126 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_66_13 to 202008_71_71_0
2020.08.11 15:51:10.450719 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:51:10.455289 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:10.479790 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_71_71_0 to 202008_72_72_0.
2020.08.11 15:51:17.984799 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:51:17.994644 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:18.026593 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_72_72_0 to 202008_73_73_0.
2020.08.11 15:51:25.536856 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:51:25.548809 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:25.582076 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_73_73_0 to 202008_74_74_0.
2020.08.11 15:51:32.606186 [ 126 ] {} <Trace> system.metric_log: Found 6 old parts to remove.
2020.08.11 15:51:32.606700 [ 126 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_6_1
2020.08.11 15:51:32.630163 [ 126 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_7_7_0
2020.08.11 15:51:32.644479 [ 126 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_8_8_0
2020.08.11 15:51:32.665031 [ 126 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_9_9_0
2020.08.11 15:51:32.677188 [ 126 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_10_10_0
2020.08.11 15:51:32.688806 [ 126 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_11_11_0
2020.08.11 15:51:33.084514 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:51:33.097360 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:51:33.128623 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_74_74_0 to 202008_75_75_0.
2020.08.11 15:51:40.634507 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:51:40.644204 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:40.674587 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_75_75_0 to 202008_76_76_0.
2020.08.11 15:51:40.675979 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_71_14 to 202008_76_76_0
2020.08.11 15:51:40.680838 [ 125 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:40.681208 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_71_14 to 202008_76_76_0 into tmp_merge_202008_1_76_15 with type Wide
2020.08.11 15:51:40.681700 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:51:40.682186 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_71_14, total 531 rows starting from the beginning of the part
2020.08.11 15:51:40.691952 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_72_72_0, total 7 rows starting from the beginning of the part
2020.08.11 15:51:40.712956 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_73_73_0, total 8 rows starting from the beginning of the part
2020.08.11 15:51:40.723042 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_74_74_0, total 7 rows starting from the beginning of the part
2020.08.11 15:51:40.733440 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_75_75_0, total 8 rows starting from the beginning of the part
2020.08.11 15:51:40.744244 [ 125 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_76_76_0, total 7 rows starting from the beginning of the part
2020.08.11 15:51:40.815772 [ 125 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 568 rows, containing 211 columns (211 merged, 0 gathered) in 0.13 sec., 4221.09 rows/sec., 7.08 MB/sec.
2020.08.11 15:51:40.827438 [ 125 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_76_15 to 202008_1_76_15.
2020.08.11 15:51:40.830871 [ 125 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_71_14 to 202008_76_76_0
2020.08.11 15:51:48.177307 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:51:48.182803 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:48.207905 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_76_76_0 to 202008_77_77_0.
2020.08.11 15:51:55.712617 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:51:55.717633 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:51:55.745181 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_77_77_0 to 202008_78_78_0.
2020.08.11 15:52:03.254192 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:52:03.275556 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:52:03.308685 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_78_78_0 to 202008_79_79_0.
2020.08.11 15:52:10.034383 [ 115 ] {} <Trace> system.metric_log: Found 6 old parts to remove.
2020.08.11 15:52:10.035371 [ 115 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_11_2
2020.08.11 15:52:10.061761 [ 115 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_12_12_0
2020.08.11 15:52:10.075063 [ 115 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_13_13_0
2020.08.11 15:52:10.087626 [ 115 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_14_14_0
2020.08.11 15:52:10.101073 [ 115 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_15_15_0
2020.08.11 15:52:10.115171 [ 115 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_16_16_0
2020.08.11 15:52:10.811534 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:52:10.817006 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:52:10.845793 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_79_79_0 to 202008_80_80_0.
2020.08.11 15:52:18.348405 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:52:18.355311 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:52:18.381230 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_80_80_0 to 202008_81_81_0.
2020.08.11 15:52:18.382445 [ 118 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_76_15 to 202008_81_81_0
2020.08.11 15:52:18.385211 [ 118 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:52:18.385549 [ 118 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_76_15 to 202008_81_81_0 into tmp_merge_202008_1_81_16 with type Wide
2020.08.11 15:52:18.386072 [ 118 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:52:18.386479 [ 118 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_76_15, total 568 rows starting from the beginning of the part
2020.08.11 15:52:18.397455 [ 118 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_77_77_0, total 8 rows starting from the beginning of the part
2020.08.11 15:52:18.407424 [ 118 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_78_78_0, total 7 rows starting from the beginning of the part
2020.08.11 15:52:18.418669 [ 118 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_79_79_0, total 8 rows starting from the beginning of the part
2020.08.11 15:52:18.430557 [ 118 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_80_80_0, total 7 rows starting from the beginning of the part
2020.08.11 15:52:18.440747 [ 118 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_81_81_0, total 8 rows starting from the beginning of the part
2020.08.11 15:52:18.516403 [ 118 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 606 rows, containing 211 columns (211 merged, 0 gathered) in 0.13 sec., 4631.25 rows/sec., 7.77 MB/sec.
2020.08.11 15:52:18.529635 [ 118 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_81_16 to 202008_1_81_16.
2020.08.11 15:52:18.532460 [ 118 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_76_15 to 202008_81_81_0
2020.08.11 15:52:25.885969 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:52:25.890513 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:52:25.914465 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_81_81_0 to 202008_82_82_0.
2020.08.11 15:52:33.416515 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:52:33.421660 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:52:33.448111 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_82_82_0 to 202008_83_83_0.
2020.08.11 15:52:40.955709 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:52:40.966833 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:52:41.003776 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_83_83_0 to 202008_84_84_0.
2020.08.11 15:52:48.188989 [ 118 ] {} <Trace> system.metric_log: Found 6 old parts to remove.
2020.08.11 15:52:48.189375 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_16_3
2020.08.11 15:52:48.212119 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_17_17_0
2020.08.11 15:52:48.226582 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_18_18_0
2020.08.11 15:52:48.240076 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_19_19_0
2020.08.11 15:52:48.253236 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_20_20_0
2020.08.11 15:52:48.268778 [ 118 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_21_21_0
2020.08.11 15:52:48.507041 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:52:48.511419 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 15:52:48.535101 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_84_84_0 to 202008_85_85_0.
2020.08.11 15:52:56.038470 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:52:56.045527 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:52:56.095444 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_85_85_0 to 202008_86_86_0.
2020.08.11 15:52:56.097329 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_81_16 to 202008_86_86_0
2020.08.11 15:52:56.103380 [ 126 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:52:56.103862 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_81_16 to 202008_86_86_0 into tmp_merge_202008_1_86_17 with type Wide
2020.08.11 15:52:56.104892 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:52:56.105733 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_81_16, total 606 rows starting from the beginning of the part
2020.08.11 15:52:56.124186 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_82_82_0, total 8 rows starting from the beginning of the part
2020.08.11 15:52:56.145962 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_83_83_0, total 7 rows starting from the beginning of the part
2020.08.11 15:52:56.167767 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_84_84_0, total 8 rows starting from the beginning of the part
2020.08.11 15:52:56.187376 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_85_85_0, total 7 rows starting from the beginning of the part
2020.08.11 15:52:56.211186 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_86_86_0, total 8 rows starting from the beginning of the part
2020.08.11 15:52:56.341806 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 644 rows, containing 211 columns (211 merged, 0 gathered) in 0.24 sec., 2706.67 rows/sec., 4.54 MB/sec.
2020.08.11 15:52:56.364593 [ 126 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_86_17 to 202008_1_86_17.
2020.08.11 15:52:56.376600 [ 126 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_81_16 to 202008_86_86_0
2020.08.11 15:53:03.599653 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:53:03.613367 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:53:03.647113 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_86_86_0 to 202008_87_87_0.
2020.08.11 15:53:11.149203 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:53:11.154647 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:53:11.180776 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_87_87_0 to 202008_88_88_0.
2020.08.11 15:53:17.299759 [ 112 ] {} <Information> Application: Received termination signal (Interrupt)
2020.08.11 15:53:17.302220 [ 1 ] {} <Debug> Application: Received termination signal.
2020.08.11 15:53:17.302643 [ 1 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 15:53:18.363228 [ 1 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 15:53:18.364457 [ 1 ] {} <Information> Application: Closed connections.
2020.08.11 15:53:18.369236 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 15:53:18.728613 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:53:18.748309 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.24 GiB.
2020.08.11 15:53:18.893848 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_88_88_0 to 202008_89_89_0.
2020.08.11 15:53:18.900182 [ 1 ] {} <Trace> system.metric_log: Found 78 old parts to remove.
2020.08.11 15:53:18.900792 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_21_4
2020.08.11 15:53:18.968016 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_26_5
2020.08.11 15:53:19.061204 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_31_6
2020.08.11 15:53:19.141337 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_36_7
2020.08.11 15:53:19.205864 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_41_8
2020.08.11 15:53:19.269630 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_46_9
2020.08.11 15:53:19.336573 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_51_10
2020.08.11 15:53:19.393183 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_56_11
2020.08.11 15:53:19.439436 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_61_12
2020.08.11 15:53:19.486658 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_66_13
2020.08.11 15:53:19.534567 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_71_14
2020.08.11 15:53:19.580795 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_76_15
2020.08.11 15:53:19.622298 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_81_16
2020.08.11 15:53:19.661937 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_22_22_0
2020.08.11 15:53:19.702294 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_23_23_0
2020.08.11 15:53:19.734474 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_24_24_0
2020.08.11 15:53:19.784493 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_25_25_0
2020.08.11 15:53:19.825463 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_26_26_0
2020.08.11 15:53:19.868928 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_27_27_0
2020.08.11 15:53:19.907904 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_28_28_0
2020.08.11 15:53:19.940235 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_29_29_0
2020.08.11 15:53:19.969661 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_30_30_0
2020.08.11 15:53:20.003493 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_31_31_0
2020.08.11 15:53:20.034915 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_32_32_0
2020.08.11 15:53:20.074791 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_33_33_0
2020.08.11 15:53:20.110721 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_34_34_0
2020.08.11 15:53:20.144051 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_35_35_0
2020.08.11 15:53:20.182638 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_36_36_0
2020.08.11 15:53:20.223260 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_37_37_0
2020.08.11 15:53:20.261365 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_38_38_0
2020.08.11 15:53:20.291135 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_39_39_0
2020.08.11 15:53:20.319869 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_40_40_0
2020.08.11 15:53:20.349351 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_41_41_0
2020.08.11 15:53:20.378095 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_42_42_0
2020.08.11 15:53:20.405359 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_43_43_0
2020.08.11 15:53:20.433964 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_44_44_0
2020.08.11 15:53:20.463537 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_45_45_0
2020.08.11 15:53:20.491353 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_46_46_0
2020.08.11 15:53:20.520365 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_47_47_0
2020.08.11 15:53:20.549482 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_48_48_0
2020.08.11 15:53:20.579818 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_49_49_0
2020.08.11 15:53:20.614025 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_50_50_0
2020.08.11 15:53:20.641451 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_51_51_0
2020.08.11 15:53:20.675840 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_52_52_0
2020.08.11 15:53:20.710093 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_53_53_0
2020.08.11 15:53:20.742131 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_54_54_0
2020.08.11 15:53:20.777320 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_55_55_0
2020.08.11 15:53:20.803896 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_56_56_0
2020.08.11 15:53:20.832863 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_57_57_0
2020.08.11 15:53:20.865515 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_58_58_0
2020.08.11 15:53:20.898690 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_59_59_0
2020.08.11 15:53:20.925284 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_60_60_0
2020.08.11 15:53:20.951958 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_61_61_0
2020.08.11 15:53:20.978571 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_62_62_0
2020.08.11 15:53:21.004904 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_63_63_0
2020.08.11 15:53:21.032100 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_64_64_0
2020.08.11 15:53:21.059475 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_65_65_0
2020.08.11 15:53:21.085831 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_66_66_0
2020.08.11 15:53:21.113050 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_67_67_0
2020.08.11 15:53:21.140978 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_68_68_0
2020.08.11 15:53:21.170170 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_69_69_0
2020.08.11 15:53:21.197755 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_70_70_0
2020.08.11 15:53:21.232878 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_71_71_0
2020.08.11 15:53:21.271544 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_72_72_0
2020.08.11 15:53:21.297409 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_73_73_0
2020.08.11 15:53:21.321660 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_74_74_0
2020.08.11 15:53:21.347717 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_75_75_0
2020.08.11 15:53:21.372605 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_76_76_0
2020.08.11 15:53:21.398479 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_77_77_0
2020.08.11 15:53:21.423193 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_78_78_0
2020.08.11 15:53:21.450538 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_79_79_0
2020.08.11 15:53:21.477532 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_80_80_0
2020.08.11 15:53:21.501485 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_81_81_0
2020.08.11 15:53:21.527942 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_82_82_0
2020.08.11 15:53:21.552085 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_83_83_0
2020.08.11 15:53:21.577895 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_84_84_0
2020.08.11 15:53:21.603245 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_85_85_0
2020.08.11 15:53:21.629195 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_86_86_0
2020.08.11 15:53:21.688329 [ 1 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 15:53:21.689444 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 15:53:21.693371 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 15:53:21.693994 [ 1 ] {} <Information> Application: shutting down
2020.08.11 15:53:21.694333 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 15:53:21.695005 [ 112 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 15:53:31.558492 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:53:31.562311 [ 38 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 15:53:31.562609 [ 38 ] {} <Information> Application: starting up
2020.08.11 15:53:31.569644 [ 38 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 15:53:31.570011 [ 38 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 15:53:31.570229 [ 38 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 15:53:31.570639 [ 38 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 15:53:31.572222 [ 38 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '2762d3f02f00' as replica host.
2020.08.11 15:53:31.575833 [ 38 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 15:53:31.578114 [ 38 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:53:31.578838 [ 38 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:53:31.579189 [ 38 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 15:53:31.582528 [ 38 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 15:53:31.582789 [ 38 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 15:53:31.583383 [ 38 ] {} <Debug> Application: Loaded metadata.
2020.08.11 15:53:31.583727 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:53:31.585477 [ 38 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 15:53:31.589717 [ 38 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 15:53:31.589947 [ 38 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 15:53:31.591639 [ 38 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:31.592750 [ 38 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:31.593603 [ 38 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:31.594468 [ 38 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:31.595171 [ 38 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 15:53:31.595452 [ 38 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 15:53:31.595816 [ 38 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 15:53:31.596937 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 15:53:31.597487 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 15:53:31.597674 [ 38 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 15:53:31.680831 [ 38 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 15:53:31.682634 [ 38 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 15:53:31.682880 [ 38 ] {} <Information> Application: Ready for connections.
2020.08.11 15:53:32.543969 [ 72 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41708, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 15:53:32.586177 [ 73 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44162
2020.08.11 15:53:32.588614 [ 73 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.3.0, revision: 54433, user: default.
2020.08.11 15:53:32.594813 [ 73 ] {57fbb7bb-fbf4-49ca-9bf8-d056351a6625} <Debug> executeQuery: (from 127.0.0.1:44162) CREATE DICTIONARY IF NOT EXISTS default.dict (`key` String, `value` String) PRIMARY KEY key SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 15:53:32.595271 [ 73 ] {57fbb7bb-fbf4-49ca-9bf8-d056351a6625} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 15:53:32.595546 [ 73 ] {57fbb7bb-fbf4-49ca-9bf8-d056351a6625} <Trace> AccessRightsContext (default): Settings: readonly=0, allow_ddl=1, allow_introspection_functions=0
2020.08.11 15:53:32.595812 [ 73 ] {57fbb7bb-fbf4-49ca-9bf8-d056351a6625} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 15:53:32.596195 [ 73 ] {57fbb7bb-fbf4-49ca-9bf8-d056351a6625} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 15:53:32.598834 [ 73 ] {57fbb7bb-fbf4-49ca-9bf8-d056351a6625} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 15:53:32.599200 [ 73 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 15:53:32.599539 [ 73 ] {} <Information> TCPHandler: Processed in 0.005 sec.
2020.08.11 15:53:32.601377 [ 73 ] {16ad3524-ba21-4fd3-951c-91b0b61d0a98} <Debug> executeQuery: (from 127.0.0.1:44162) CREATE TABLE IF NOT EXISTS default.table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGetString('default.dict', 'value', tuple(dictGetString('default.dict', 'value', tuple(coalesce(stamp, '')))))) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 15:53:32.601675 [ 73 ] {16ad3524-ba21-4fd3-951c-91b0b61d0a98} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.table
2020.08.11 15:53:32.602907 [ 73 ] {16ad3524-ba21-4fd3-951c-91b0b61d0a98} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 15:53:32.606098 [ 73 ] {16ad3524-ba21-4fd3-951c-91b0b61d0a98} <Debug> default.table: Loading data parts
2020.08.11 15:53:32.606703 [ 73 ] {16ad3524-ba21-4fd3-951c-91b0b61d0a98} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 15:53:32.608672 [ 73 ] {16ad3524-ba21-4fd3-951c-91b0b61d0a98} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 15:53:32.609058 [ 73 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 15:53:32.609329 [ 73 ] {} <Information> TCPHandler: Processed in 0.008 sec.
2020.08.11 15:53:32.609765 [ 73 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 15:53:32.614704 [ 47 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 15:53:32.615223 [ 38 ] {} <Debug> Application: Received termination signal.
2020.08.11 15:53:32.615513 [ 38 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 15:53:32.949953 [ 38 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 15:53:32.950795 [ 38 ] {} <Information> Application: Closed connections.
2020.08.11 15:53:32.956749 [ 38 ] {} <Information> Application: Shutting down storages.
2020.08.11 15:53:33.582773 [ 48 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:53:33.583538 [ 48 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 15:53:33.598121 [ 48 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 15:53:33.600042 [ 48 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 15:53:33.611805 [ 48 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:53:33.674787 [ 48 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 15:53:33.677864 [ 38 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 15:53:33.678448 [ 38 ] {} <Debug> Application: Shut down storages.
2020.08.11 15:53:33.679664 [ 38 ] {} <Debug> Application: Destroyed global context.
2020.08.11 15:53:33.680275 [ 38 ] {} <Information> Application: shutting down
2020.08.11 15:53:33.680513 [ 38 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 15:53:33.681020 [ 47 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 15:53:33.736830 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:53:33.741133 [ 1 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 15:53:33.741438 [ 1 ] {} <Information> Application: starting up
2020.08.11 15:53:33.747276 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 15:53:33.747731 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 15:53:33.747986 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 15:53:33.748187 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 15:53:33.748794 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '2762d3f02f00' as replica host.
2020.08.11 15:53:33.752021 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 15:53:33.754534 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:53:33.755792 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 15:53:33.756075 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 15:53:33.759904 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 1 tables and 0 dictionaries.
2020.08.11 15:53:33.765160 [ 112 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 15:53:33.767954 [ 112 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 15:53:33.775826 [ 112 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 15:53:33.776592 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 15:53:33.779587 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 15:53:33.780492 [ 135 ] {} <Debug> default.table: Loading data parts
2020.08.11 15:53:33.781212 [ 135 ] {} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 15:53:33.781834 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 15:53:33.782482 [ 1 ] {} <Debug> Application: Loaded metadata.
2020.08.11 15:53:33.782739 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 15:53:33.782996 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 15:53:33.785402 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 15:53:33.785661 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 15:53:33.787804 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:33.789192 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:33.790170 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:33.790935 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 15:53:33.791613 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 15:53:33.791861 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 15:53:33.792153 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 15:53:33.793058 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 15:53:33.793513 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 15:53:33.793747 [ 1 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 15:53:33.919277 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 15:53:33.920910 [ 1 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 15:53:33.921265 [ 1 ] {} <Information> Application: Ready for connections.
2020.08.11 15:53:35.924901 [ 162 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2020.08.11 15:53:41.280067 [ 130 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:53:41.281915 [ 130 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 15:53:41.287458 [ 130 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.37 GiB.
2020.08.11 15:53:41.336414 [ 130 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 15:53:48.841742 [ 130 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:53:48.845557 [ 130 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:53:48.871989 [ 130 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_3_3_0.
2020.08.11 15:53:56.374185 [ 130 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:53:56.379676 [ 130 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:53:56.416606 [ 130 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_3_3_0 to 202008_4_4_0.
2020.08.11 15:54:03.920484 [ 130 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:54:03.932120 [ 130 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:54:03.962563 [ 130 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_4_4_0 to 202008_5_5_0.
2020.08.11 15:54:11.466830 [ 130 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:54:11.475131 [ 130 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:54:11.502863 [ 130 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_5_5_0 to 202008_6_6_0.
2020.08.11 15:54:11.503896 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_1_0 to 202008_6_6_0
2020.08.11 15:54:11.506640 [ 126 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:54:11.507023 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_1_0 to 202008_6_6_0 into tmp_merge_202008_1_6_1 with type Wide
2020.08.11 15:54:11.507623 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 15:54:11.508020 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_1_0, total 2 rows starting from the beginning of the part
2020.08.11 15:54:11.519284 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_2_2_0, total 8 rows starting from the beginning of the part
2020.08.11 15:54:11.529556 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_3_3_0, total 8 rows starting from the beginning of the part
2020.08.11 15:54:11.555344 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_4_4_0, total 7 rows starting from the beginning of the part
2020.08.11 15:54:11.565345 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_5_5_0, total 8 rows starting from the beginning of the part
2020.08.11 15:54:11.588909 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_6_6_0, total 7 rows starting from the beginning of the part
2020.08.11 15:54:11.656710 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 40 rows, containing 211 columns (211 merged, 0 gathered) in 0.15 sec., 267.21 rows/sec., 0.45 MB/sec.
2020.08.11 15:54:11.669035 [ 126 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_6_1 to 202008_1_6_1.
2020.08.11 15:54:11.672585 [ 126 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_1_0 to 202008_6_6_0
2020.08.11 15:54:15.624269 [ 111 ] {} <Information> Application: Received termination signal (Interrupt)
2020.08.11 15:54:15.625162 [ 1 ] {} <Debug> Application: Received termination signal.
2020.08.11 15:54:15.625864 [ 1 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 15:54:16.225557 [ 1 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 15:54:16.226147 [ 1 ] {} <Information> Application: Closed connections.
2020.08.11 15:54:16.228493 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 15:54:16.786524 [ 130 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 15:54:16.792553 [ 130 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 15:54:16.820182 [ 130 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_6_6_0 to 202008_7_7_0.
2020.08.11 15:54:16.821799 [ 1 ] {} <Trace> system.metric_log: Found 6 old parts to remove.
2020.08.11 15:54:16.822114 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_1_0
2020.08.11 15:54:16.835981 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_2_2_0
2020.08.11 15:54:16.848484 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_3_3_0
2020.08.11 15:54:16.862927 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_4_4_0
2020.08.11 15:54:16.877277 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_5_5_0
2020.08.11 15:54:16.895039 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_6_6_0
2020.08.11 15:54:16.911817 [ 1 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 15:54:16.912444 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 15:54:16.914904 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 15:54:16.915294 [ 1 ] {} <Information> Application: shutting down
2020.08.11 15:54:16.915505 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 15:54:16.916000 [ 111 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 16:00:59.830537 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:00:59.834342 [ 39 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:00:59.834577 [ 39 ] {} <Information> Application: starting up
2020.08.11 16:00:59.840107 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:00:59.840416 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:00:59.840622 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:00:59.840882 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:00:59.841445 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '06d0290c5fb7' as replica host.
2020.08.11 16:00:59.844305 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:00:59.846211 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:00:59.846657 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:00:59.846941 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:00:59.849361 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 16:00:59.849671 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:00:59.850012 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:00:59.850285 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:00:59.850578 [ 39 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:00:59.852540 [ 39 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:00:59.852736 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:00:59.854945 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:00:59.855710 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:00:59.856446 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:00:59.857088 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:00:59.857570 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:00:59.857816 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:00:59.858060 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:00:59.858792 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:00:59.859124 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:00:59.859320 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:00:59.972728 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:00:59.974310 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:00:59.974493 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 16:01:00.810157 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41714, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 16:01:00.831195 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44168
2020.08.11 16:01:00.831644 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.3.0, revision: 54433, user: default.
2020.08.11 16:01:00.837960 [ 74 ] {1448512e-fa3e-4093-864d-ed951e10622a} <Debug> executeQuery: (from 127.0.0.1:44168) CREATE DICTIONARY IF NOT EXISTS default.dict (`key` String, `value` String) PRIMARY KEY key SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 16:01:00.838340 [ 74 ] {1448512e-fa3e-4093-864d-ed951e10622a} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:01:00.838565 [ 74 ] {1448512e-fa3e-4093-864d-ed951e10622a} <Trace> AccessRightsContext (default): Settings: readonly=0, allow_ddl=1, allow_introspection_functions=0
2020.08.11 16:01:00.838790 [ 74 ] {1448512e-fa3e-4093-864d-ed951e10622a} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:01:00.839093 [ 74 ] {1448512e-fa3e-4093-864d-ed951e10622a} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 16:01:00.842262 [ 74 ] {1448512e-fa3e-4093-864d-ed951e10622a} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:01:00.842606 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:01:00.842834 [ 74 ] {} <Information> TCPHandler: Processed in 0.005 sec.
2020.08.11 16:01:00.844302 [ 74 ] {1ef84f67-83c9-4e22-bbd6-c80efdbcbae8} <Debug> executeQuery: (from 127.0.0.1:44168) CREATE TABLE IF NOT EXISTS default.table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT dictGetString('default.dict', 'value', tuple(dictGetString('default.dict', 'value', tuple(coalesce(stamp, '')))))) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 16:01:00.844559 [ 74 ] {1ef84f67-83c9-4e22-bbd6-c80efdbcbae8} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.table
2020.08.11 16:01:00.845565 [ 74 ] {1ef84f67-83c9-4e22-bbd6-c80efdbcbae8} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 16:01:00.847720 [ 74 ] {1ef84f67-83c9-4e22-bbd6-c80efdbcbae8} <Debug> default.table: Loading data parts
2020.08.11 16:01:00.848316 [ 74 ] {1ef84f67-83c9-4e22-bbd6-c80efdbcbae8} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 16:01:00.850494 [ 74 ] {1ef84f67-83c9-4e22-bbd6-c80efdbcbae8} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:01:00.850771 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:01:00.850969 [ 74 ] {} <Information> TCPHandler: Processed in 0.007 sec.
2020.08.11 16:01:00.923095 [ 74 ] {0dd99465-edc3-4e6a-aab4-33568873127c} <Debug> executeQuery: (from 127.0.0.1:44168) CREATE TABLE IF NOT EXISTS default.raw_data (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `event_type` Enum8('transaction' = 0, 'session' = 1) DEFAULT CAST('transaction', 'Enum8(\'transaction\' = 0, \'session\' = 1)'), `import_date` DateTime DEFAULT toDateTime(now()), `uid` String DEFAULT '', `session2_id` UInt64, `date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `datefin` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `wid` Int64 DEFAULT CAST(0, 'Int64'), `famillewap` Int32 DEFAULT CAST(0, 'Int32'), `ratio` Float64 DEFAULT CAST(0, 'Float64'), `Status` UInt8 DEFAULT 0, `periode` UInt16 DEFAULT CAST(0, 'UInt16'), `uacore` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `langue` String DEFAULT '', `ml` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `video` LowCardinality(Nullable(String)), `xhtml` LowCardinality(String) DEFAULT CAST('non', 'LowCardinality(String)'), `telechargement` UInt8 DEFAULT 0, `uaextension` LowCardinality(Nullable(String)), `multiobject` UInt8 DEFAULT 0, `mms` UInt8 DEFAULT 0, `best_ml` LowCardinality(String) DEFAULT CAST('wml', 'LowCardinality(String)'), `3g` FixedString(1) DEFAULT CAST('0', 'FixedString(1)'), `age` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `login` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `famille` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `created` Date DEFAULT toDate('0000-00-00'), `modified` Date DEFAULT toDate('0000-00-00'), `familledld` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `chatvalidation` UInt8 DEFAULT 0, `https` UInt8 DEFAULT 0, `motclef` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `bgcolor` Enum8('0' = 0, '1' = 1) DEFAULT CAST('0', 'Enum8(\'0\' = 0, \'1\' = 1)'), `http_x_nokia_bearer` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stream` UInt8 DEFAULT 0, `ip` LowCardinality(Nullable(String)), `tactile` Enum8('0' = 0, '1' = 1) DEFAULT CAST('0', 'Enum8(\'0\' = 0, \'1\' = 1)') COMMENT '1 Le terminal est tactile, 0 sinon', `familledldvideo` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)') COMMENT 'Format vidéo pour le terminal', `transaction_id` UInt32, `trxidpartenaire` Nullable(String), `date_achat` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `idoffre` Nullable(UInt32), `offre` LowCardinality(Nullable(String)), `price_point_code` LowCardinality(String) DEFAULT CAST('MISC', 'LowCardinality(String)'), `price_point` LowCardinality(String) DEFAULT CAST('Various', 'LowCardinality(String)'), `typeachat` FixedString(1) DEFAULT CAST('', 'FixedString(1)'), `type` LowCardinality(Nullable(String)), `offer_type` LowCardinality(Nullable(String)), `groupe` LowCardinality(Nullable(String)), `distributeur` LowCardinality(Nullable(String)), `affilie` LowCardinality(Nullable(String)), `ope_factu` LowCardinality(Nullable(String)) DEFAULT CAST('NULL', 'LowCardinality(Nullable(String))'), `booster` UInt8 DEFAULT 0, `abo_id` Nullable(UInt32), `prix` Nullable(Float64), `cawister` Nullable(Float64), `castats` Nullable(Float64), `ca` Nullable(Float64), `sessionid` String DEFAULT '', `device_family` LowCardinality(String) DEFAULT CAST('Other', 'LowCardinality(String)'), `code_service` LowCardinality(String), `nom_service` LowCardinality(Nullable(String)), `opco` LowCardinality(String) DEFAULT CAST('', 
'LowCardinality(String)'), `nom_operateur` LowCardinality(Nullable(String)), `origine` LowCardinality(String), `pays` LowCardinality(String) DEFAULT CAST('France', 'LowCardinality(String)') COMMENT 'pays de provenance de la session', `pays_code` LowCardinality(String) DEFAULT CAST('FRA', 'LowCardinality(String)'), `ope_telecom` LowCardinality(String) DEFAULT CAST('FRA_WISTER', 'LowCardinality(String)') COMMENT 'operateur telecom de la session', `ope_mobile` LowCardinality(String) DEFAULT CAST('INC', 'LowCardinality(String)'), `crm` Nullable(UInt32), `stamp` LowCardinality(Nullable(String)), `stat_mktg_tracker` LowCardinality(Nullable(String)), `stat_crm_tracker` LowCardinality(Nullable(String)), `optin` LowCardinality(String) DEFAULT CAST('INC', 'LowCardinality(String)'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `id_site` Nullable(UInt32), `partco` LowCardinality(String), `is_bot` UInt8, `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_ad_id` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_id'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'adid')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'adid'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_app_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_app_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_name'))]), 
'+', ' '), ''), 'LowCardinality(String)'), `md_banner_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_banner_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_bid` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_bid_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_blp_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_blp_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_browser` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'browser')) 
> 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'browser'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_campaign_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'campaign_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'campaign_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_carrier` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'carrier')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'carrier'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_category` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_category_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_click_id` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'click_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'click_id'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'clickId')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'clickId'))]), '+', ' '), ''), 
'LowCardinality(String)'), `md_custom1` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom1')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom1'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom2` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom2')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom2'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom_deux` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_deux')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_deux'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom_un` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_un')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_un'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_country` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'country')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'country'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_device` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'device')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'device'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_lp` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 
'MabForcedSolutionId')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'MabForcedSolutionId'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_lp_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'lp_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'lp_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_os` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'os')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'os'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_pricing_model` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_model')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_model'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_mod')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_mod'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_pub_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pub_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pub_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_publisher_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'publisher_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'publisher_id'))]), '+', ' 
'), ''), 'LowCardinality(String)'), `md_site_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_site_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_stat_tracker` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'stat_tracker')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'stat_tracker'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_target_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'target_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'target_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_timestamp` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'timestamp')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'timestamp'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_zone` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_zone_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', 
tuple(code_affilie))), 'zone_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_ad_type` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_type')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_type'))]), '+', ' '), ''), 'LowCardinality(String)'), `stat_tracker` LowCardinality(String) DEFAULT CAST(if(coalesce(stamp, '') LIKE '%_MB:%', substr(stamp, 1, position(stamp, '_MB:') - 1), coalesce(stamp, '')), 'LowCardinality(String)')) ENGINE = MergeTree() PARTITION BY toYYYYMM(event_date) ORDER BY (toDate(event_date), event_type, code_affilie, device_family, code_service) SETTINGS index_granularity = 8192
2020.08.11 16:01:00.925092 [ 74 ] {0dd99465-edc3-4e6a-aab4-33568873127c} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.raw_data
2020.08.11 16:01:00.935281 [ 74 ] {0dd99465-edc3-4e6a-aab4-33568873127c} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:01:00.972641 [ 74 ] {0dd99465-edc3-4e6a-aab4-33568873127c} <Error> executeQuery: Code: 36, e.displayText() = DB::Exception: external dictionary 'wister.dict_prod_partner_affiliate_links' not found: default expression and column type are incompatible. (version 20.3.16.165 (official build)) (from 127.0.0.1:44168) (in query: CREATE TABLE IF NOT EXISTS default.raw_data (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `event_type` Enum8('transaction' = 0, 'session' = 1) DEFAULT CAST('transaction', 'Enum8(\'transaction\' = 0, \'session\' = 1)'), `import_date` DateTime DEFAULT toDateTime(now()), `uid` String DEFAULT '', `session2_id` UInt64, `date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `datefin` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `wid` Int64 DEFAULT CAST(0, 'Int64'), `famillewap` Int32 DEFAULT CAST(0, 'Int32'), `ratio` Float64 DEFAULT CAST(0, 'Float64'), `Status` UInt8 DEFAULT 0, `periode` UInt16 DEFAULT CAST(0, 'UInt16'), `uacore` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `langue` String DEFAULT '', `ml` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `video` LowCardinality(Nullable(String)), `xhtml` LowCardinality(String) DEFAULT CAST('non', 'LowCardinality(String)'), `telechargement` UInt8 DEFAULT 0, `uaextension` LowCardinality(Nullable(String)), `multiobject` UInt8 DEFAULT 0, `mms` UInt8 DEFAULT 0, `best_ml` LowCardinality(String) DEFAULT CAST('wml', 'LowCardinality(String)'), `3g` FixedString(1) DEFAULT CAST('0', 'FixedString(1)'), `age` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `login` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `famille` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `created` Date DEFAULT toDate('0000-00-00'), `modified` Date DEFAULT toDate('0000-00-00'), `familledld` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `chatvalidation` UInt8 DEFAULT 0, `https` UInt8 DEFAULT 0, `motclef` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `bgcolor` Enum8('0' = 0, '1' = 1) DEFAULT CAST('0', 'Enum8(\'0\' = 0, \'1\' = 1)'), `http_x_nokia_bearer` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stream` UInt8 DEFAULT 0, `ip` LowCardinality(Nullable(String)), `tactile` Enum8('0' = 0, '1' = 1) DEFAULT CAST('0', 'Enum8(\'0\' = 0, \'1\' = 1)') COMMENT '1 Le terminal est tactile, 0 sinon', `familledldvideo` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)') COMMENT 'Format vidéo pour le terminal', `transaction_id` UInt32, `trxidpartenaire` Nullable(String), `date_achat` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `idoffre` Nullable(UInt32), `offre` LowCardinality(Nullable(String)), `price_point_code` LowCardinality(String) DEFAULT CAST('MISC', 'LowCardinality(String)'), `price_point` LowCardinality(String) DEFAULT CAST('Various', 'LowCardinality(String)'), `typeachat` FixedString(1) DEFAULT CAST('', 'FixedString(1)'), `type` LowCardinality(Nullable(String)), `offer_type` LowCardinality(Nullable(String)), `groupe` LowCardinality(Nullable(String)), `distributeur` LowCardinality(Nullable(String)), `affilie` LowCardinality(Nullable(String)), `ope_factu` LowCardinality(Nullable(String)) DEFAULT CAST('NULL', 'LowCardinality(Nullable(String))'), `booster` UInt8 DEFAULT 0, `abo_id` Nullable(UInt32), `prix` Nullable(Float64), `cawister` Nullable(Float64), `castats` Nullable(Float64), `ca` Nullable(Float64), `sessionid` String DEFAULT '', 
`device_family` LowCardinality(String) DEFAULT CAST('Other', 'LowCardinality(String)'), `code_service` LowCardinality(String), `nom_service` LowCardinality(Nullable(String)), `opco` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `nom_operateur` LowCardinality(Nullable(String)), `origine` LowCardinality(String), `pays` LowCardinality(String) DEFAULT CAST('France', 'LowCardinality(String)') COMMENT 'pays de provenance de la session', `pays_code` LowCardinality(String) DEFAULT CAST('FRA', 'LowCardinality(String)'), `ope_telecom` LowCardinality(String) DEFAULT CAST('FRA_WISTER', 'LowCardinality(String)') COMMENT 'operateur telecom de la session', `ope_mobile` LowCardinality(String) DEFAULT CAST('INC', 'LowCardinality(String)'), `crm` Nullable(UInt32), `stamp` LowCardinality(Nullable(String)), `stat_mktg_tracker` LowCardinality(Nullable(String)), `stat_crm_tracker` LowCardinality(Nullable(String)), `optin` LowCardinality(String) DEFAULT CAST('INC', 'LowCardinality(String)'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `id_site` Nullable(UInt32), `partco` LowCardinality(String), `is_bot` UInt8, `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_ad_id` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_id'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'adid')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'adid'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_app_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_app_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_name')) > 0, 
replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'app_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_banner_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_banner_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'banner_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_bid` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_bid_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'bid_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_blp_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_blp_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'blp_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 
'blp_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_browser` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'browser')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'browser'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_campaign_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'campaign_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'campaign_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_carrier` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'carrier')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'carrier'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_category` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_category_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'category_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_click_id` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'click_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'click_id'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'clickId')) > 0, 
replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'clickId'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom1` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom1')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom1'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom2` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom2')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom2'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom_deux` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_deux')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_deux'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_custom_un` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_un')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'custom_un'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_country` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'country')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'country'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_device` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'device')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', 
tuple(code_affilie))), 'device'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_lp` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'MabForcedSolutionId')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'MabForcedSolutionId'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_lp_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'lp_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'lp_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_os` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'os')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'os'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_pricing_model` LowCardinality(String) DEFAULT CAST(multiIf(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_model')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_model'))]), '+', ' '), dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_mod')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pricing_mod'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_pub_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pub_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'pub_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_publisher_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'publisher_id')) > 0, 
replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'publisher_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_site_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_site_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'site_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_stat_tracker` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'stat_tracker')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'stat_tracker'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_target_name` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'target_name')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'target_name'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_timestamp` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'timestamp')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'timestamp'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_zone` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 
'partner_id', tuple(code_affilie))), 'zone'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_zone_id` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone_id')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'zone_id'))]), '+', ' '), ''), 'LowCardinality(String)'), `md_ad_type` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_type')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('wister.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('wister.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_type'))]), '+', ' '), ''), 'LowCardinality(String)'), `stat_tracker` LowCardinality(String) DEFAULT CAST(if(coalesce(stamp, '') LIKE '%_MB:%', substr(stamp, 1, position(stamp, '_MB:') - 1), coalesce(stamp, '')), 'LowCardinality(String)')) ENGINE = MergeTree() PARTITION BY toYYYYMM(event_date) ORDER BY (toDate(event_date), event_type, code_affilie, device_family, code_service) SETTINGS index_granularity = 8192), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0xd076011 in /usr/bin/clickhouse
3. std::__1::shared_ptr<DB::IExternalLoadable const> DB::ExternalLoader::load<std::__1::shared_ptr<DB::IExternalLoadable const>, void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd081333 in /usr/bin/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x93f2561 in /usr/bin/clickhouse
5. DB::FunctionDictGet<DB::DataTypeNumber<int>, DB::NameDictGetInt32>::executeImpl(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0x9423bcf in /usr/bin/clickhouse
6. DB::ExecutableFunctionAdaptor::defaultImplementationForConstantArguments(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x91ff218 in /usr/bin/clickhouse
7. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x91ff492 in /usr/bin/clickhouse
8. DB::ExecutableFunctionAdaptor::execute(DB::Block&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x9200201 in /usr/bin/clickhouse
9. DB::ExpressionAction::prepare(DB::Block&, DB::Settings const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd139cb5 in /usr/bin/clickhouse
10. DB::ExpressionActions::addImpl(DB::ExpressionAction, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13a833 in /usr/bin/clickhouse
11. DB::ExpressionActions::add(DB::ExpressionAction const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&) @ 0xd13acad in /usr/bin/clickhouse
12. DB::ScopeStack::addAction(DB::ExpressionAction const&) @ 0xd3623ed in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd369b1a in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
15. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
16. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
17. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
18. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
19. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
20. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0xd368a47 in /usr/bin/clickhouse
21. ? @ 0xd34bd1e in /usr/bin/clickhouse
22. DB::ExpressionAnalyzer::getActions(bool, bool) @ 0xd34e85f in /usr/bin/clickhouse
23. DB::validateColumnsDefaultsAndGetSampleBlock(std::__1::shared_ptr<DB::IAST>, DB::NamesAndTypesList const&, DB::Context const&) @ 0xd6a8c4a in /usr/bin/clickhouse
24. DB::InterpreterCreateQuery::getColumnsDescription(DB::ASTExpressionList const&, DB::Context const&) @ 0xd092ba3 in /usr/bin/clickhouse
25. DB::InterpreterCreateQuery::setProperties(DB::ASTCreateQuery&) const @ 0xd094e02 in /usr/bin/clickhouse
26. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0xd09640d in /usr/bin/clickhouse
27. DB::InterpreterCreateQuery::execute() @ 0xd098721 in /usr/bin/clickhouse
28. ? @ 0xd5aa698 in /usr/bin/clickhouse
29. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd5ad2d1 in /usr/bin/clickhouse
30. DB::TCPHandler::runImpl() @ 0x90794f9 in /usr/bin/clickhouse
31. DB::TCPHandler::run() @ 0x907a4e0 in /usr/bin/clickhouse
2020.08.11 16:01:00.976578 [ 74 ] {0dd99465-edc3-4e6a-aab4-33568873127c} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:01:00.976680 [ 74 ] {} <Information> TCPHandler: Processed in 0.089 sec.
2020.08.11 16:01:00.977936 [ 74 ] {} <Information> TCPHandler: Done processing connection.
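
The error record above reduces to one pattern: every `md_*` column has a `DEFAULT` expression calling `dictGetUInt8('wister.dict_prod_mb2_params', ...)` / `dictGetInt32('wister.dict_prod_partner_affiliate_links', ...)`, and the defaults are evaluated through `validateColumnsDefaultsAndGetSampleBlock` → `FunctionDictHelper::getDictionary` → `ExternalLoader::load` (frames 2–23 of the trace), so the statement fails when the dictionary cannot be resolved at that moment. Below is a minimal sketch of the same shape, with hypothetical database/dictionary/table names and trimmed to a single column; it is only an illustration of the pattern, not the exact production DDL.

```sql
-- Minimal sketch (hypothetical names: example.events, example.dict_params).
-- The point is only that the column DEFAULT calls dictGet* on an external
-- dictionary, so the defaults are validated via ExternalLoader at
-- CREATE/ATTACH time and fail if the dictionary is not resolvable.
CREATE TABLE example.events
(
    `stamp` LowCardinality(Nullable(String)),
    `md_param` LowCardinality(String) DEFAULT CAST(
        if(dictGetUInt8('example.dict_params', 'order', tuple(coalesce(stamp, ''))) > 0,
           'found',
           ''),
        'LowCardinality(String)')
)
ENGINE = MergeTree()
ORDER BY tuple();
```
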
2020.08.11 16:01:00.982756 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 16:01:00.983133 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:01:00.983353 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:01:01.485607 [ 39 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 16:01:01.486297 [ 39 ] {} <Information> Application: Closed connections.
2020.08.11 16:01:01.491368 [ 39 ] {} <Information> Application: Shutting down storages.
2020.08.11 16:01:01.493226 [ 52 ] {} <Trace> SystemLog (system.trace_log): Flushing system log
2020.08.11 16:01:01.493681 [ 52 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
2020.08.11 16:01:01.496921 [ 52 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 16:01:01.497728 [ 52 ] {} <Debug> system.trace_log: Loaded data parts (0 items)
2020.08.11 16:01:01.503260 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:01:01.505816 [ 52 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 16:01:01.852296 [ 51 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:01:01.852546 [ 51 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 16:01:01.857630 [ 51 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 16:01:01.858208 [ 51 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 16:01:01.864159 [ 51 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:01:01.928957 [ 51 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 16:01:01.931730 [ 39 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 16:01:01.932379 [ 39 ] {} <Debug> Application: Shut down storages.
2020.08.11 16:01:01.933593 [ 39 ] {} <Debug> Application: Destroyed global context.
2020.08.11 16:01:01.934116 [ 39 ] {} <Information> Application: shutting down
2020.08.11 16:01:01.934347 [ 39 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 16:01:01.934698 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 16:01:01.982100 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:01:01.985716 [ 1 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:01:01.985924 [ 1 ] {} <Information> Application: starting up
2020.08.11 16:01:01.991134 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:01:01.991422 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:01:01.991634 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:01:01.991835 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:01:01.992312 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '06d0290c5fb7' as replica host.
2020.08.11 16:01:01.995148 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:01:01.997091 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:01:01.997719 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:01:01.998001 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:01:02.001003 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries.
2020.08.11 16:01:02.002092 [ 113 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 16:01:02.004583 [ 113 ] {} <Debug> system.trace_log: Loading data parts
2020.08.11 16:01:02.007000 [ 114 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 16:01:02.008604 [ 113 ] {} <Debug> system.trace_log: Loaded data parts (1 items)
2020.08.11 16:01:02.013550 [ 114 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 16:01:02.014214 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 16:01:02.016848 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 16:01:02.017561 [ 114 ] {} <Debug> default.table: Loading data parts
2020.08.11 16:01:02.018082 [ 114 ] {} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 16:01:02.018582 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:01:02.019473 [ 1 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:01:02.019708 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:01:02.019934 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:01:02.021969 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:01:02.022243 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:01:02.023906 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:01:02.024673 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:01:02.025282 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:01:02.025927 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:01:02.026382 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:01:02.026639 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:01:02.026893 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:01:02.027637 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:01:02.028011 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:01:02.028225 [ 1 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:01:02.121306 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:01:02.122831 [ 1 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:01:02.123132 [ 1 ] {} <Information> Application: Ready for connections.
2020.08.11 16:01:04.140717 [ 162 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2020.08.11 16:01:09.516737 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:01:09.517609 [ 131 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 16:01:09.520981 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:01:09.568003 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 16:01:17.072540 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:01:17.085514 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:01:17.115041 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_3_3_0.
2020.08.11 16:01:24.618298 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:01:24.624084 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:01:24.650149 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_3_3_0 to 202008_4_4_0.
2020.08.11 16:01:32.155972 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:01:32.164948 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:01:32.194326 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_4_4_0 to 202008_5_5_0.
2020.08.11 16:01:39.698420 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:01:39.708445 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:01:39.742241 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_5_5_0 to 202008_6_6_0.
2020.08.11 16:01:39.743205 [ 122 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_1_0 to 202008_6_6_0
2020.08.11 16:01:39.745930 [ 122 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:01:39.746288 [ 122 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_1_0 to 202008_6_6_0 into tmp_merge_202008_1_6_1 with type Wide
2020.08.11 16:01:39.746775 [ 122 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:01:39.747200 [ 122 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_1_0, total 2 rows starting from the beginning of the part
2020.08.11 16:01:39.756980 [ 122 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_2_2_0, total 8 rows starting from the beginning of the part
2020.08.11 16:01:39.766873 [ 122 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_3_3_0, total 8 rows starting from the beginning of the part
2020.08.11 16:01:39.794827 [ 122 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_4_4_0, total 7 rows starting from the beginning of the part
2020.08.11 16:01:39.804765 [ 122 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_5_5_0, total 8 rows starting from the beginning of the part
2020.08.11 16:01:39.822244 [ 122 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_6_6_0, total 7 rows starting from the beginning of the part
2020.08.11 16:01:39.889084 [ 122 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 40 rows, containing 211 columns (211 merged, 0 gathered) in 0.14 sec., 280.10 rows/sec., 0.47 MB/sec.
2020.08.11 16:01:39.900901 [ 122 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_6_1 to 202008_1_6_1.
2020.08.11 16:01:39.903512 [ 122 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_1_0 to 202008_6_6_0
2020.08.11 16:01:47.244521 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:01:47.252611 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:01:47.278823 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_6_6_0 to 202008_7_7_0.
2020.08.11 16:01:54.780659 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:01:54.793879 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:01:54.821673 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_7_7_0 to 202008_8_8_0.
2020.08.11 16:02:02.324632 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:02:02.328287 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:02:02.353263 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_8_8_0 to 202008_9_9_0.
2020.08.11 16:02:09.855763 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:02:09.860144 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:02:09.885715 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_9_9_0 to 202008_10_10_0.
2020.08.11 16:02:17.392092 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:02:17.405739 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:02:17.436964 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_10_10_0 to 202008_11_11_0.
2020.08.11 16:02:17.438161 [ 120 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_6_1 to 202008_11_11_0
2020.08.11 16:02:17.441012 [ 120 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:02:17.441438 [ 120 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_6_1 to 202008_11_11_0 into tmp_merge_202008_1_11_2 with type Wide
2020.08.11 16:02:17.441872 [ 120 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:02:17.442231 [ 120 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_6_1, total 40 rows starting from the beginning of the part
2020.08.11 16:02:17.452223 [ 120 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_7_7_0, total 8 rows starting from the beginning of the part
2020.08.11 16:02:17.462356 [ 120 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_8_8_0, total 7 rows starting from the beginning of the part
2020.08.11 16:02:17.472303 [ 120 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_9_9_0, total 8 rows starting from the beginning of the part
2020.08.11 16:02:17.482661 [ 120 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_10_10_0, total 7 rows starting from the beginning of the part
2020.08.11 16:02:17.492704 [ 120 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_11_11_0, total 8 rows starting from the beginning of the part
2020.08.11 16:02:17.559576 [ 120 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 78 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 660.22 rows/sec., 1.11 MB/sec.
2020.08.11 16:02:17.571251 [ 120 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_11_2 to 202008_1_11_2.
2020.08.11 16:02:17.574066 [ 120 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_6_1 to 202008_11_11_0
2020.08.11 16:02:24.942872 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:02:24.947878 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:02:24.979473 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_11_11_0 to 202008_12_12_0.
2020.08.11 16:02:32.482987 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:02:32.486843 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:02:32.510272 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_12_12_0 to 202008_13_13_0.
2020.08.11 16:02:40.011906 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:02:40.027904 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:02:40.064729 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_13_13_0 to 202008_14_14_0.
2020.08.11 16:02:47.566809 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:02:47.571486 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:02:47.597819 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_14_14_0 to 202008_15_15_0.
2020.08.11 16:02:55.106442 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:02:55.118078 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:02:55.143739 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_15_15_0 to 202008_16_16_0.
2020.08.11 16:02:55.144714 [ 116 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_11_2 to 202008_16_16_0
2020.08.11 16:02:55.147269 [ 116 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:02:55.147554 [ 116 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_11_2 to 202008_16_16_0 into tmp_merge_202008_1_16_3 with type Wide
2020.08.11 16:02:55.148006 [ 116 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:02:55.148382 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_11_2, total 78 rows starting from the beginning of the part
2020.08.11 16:02:55.158231 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_12_12_0, total 7 rows starting from the beginning of the part
2020.08.11 16:02:55.170136 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_13_13_0, total 8 rows starting from the beginning of the part
2020.08.11 16:02:55.181352 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_14_14_0, total 7 rows starting from the beginning of the part
2020.08.11 16:02:55.192050 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_15_15_0, total 8 rows starting from the beginning of the part
2020.08.11 16:02:55.201730 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_16_16_0, total 8 rows starting from the beginning of the part
2020.08.11 16:02:55.270418 [ 116 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 116 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 944.17 rows/sec., 1.58 MB/sec.
2020.08.11 16:02:55.281791 [ 116 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_16_3 to 202008_1_16_3.
2020.08.11 16:02:55.284647 [ 116 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_11_2 to 202008_16_16_0
2020.08.11 16:03:02.645498 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:03:02.663718 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:03:02.695445 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_16_16_0 to 202008_17_17_0.
2020.08.11 16:03:10.197389 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:03:10.201212 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:03:10.223800 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_17_17_0 to 202008_18_18_0.
2020.08.11 16:03:17.727173 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:03:17.730859 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:03:17.755491 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_18_18_0 to 202008_19_19_0.
2020.08.11 16:03:25.257965 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:03:25.261950 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:03:25.284832 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_19_19_0 to 202008_20_20_0.
2020.08.11 16:03:32.787877 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:03:32.791564 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:03:32.814378 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_20_20_0 to 202008_21_21_0.
2020.08.11 16:03:32.815386 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_16_3 to 202008_21_21_0
2020.08.11 16:03:32.817975 [ 128 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:03:32.818199 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_16_3 to 202008_21_21_0 into tmp_merge_202008_1_21_4 with type Wide
2020.08.11 16:03:32.818702 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:03:32.819049 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_16_3, total 116 rows starting from the beginning of the part
2020.08.11 16:03:32.828573 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_17_17_0, total 7 rows starting from the beginning of the part
2020.08.11 16:03:32.838673 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_18_18_0, total 8 rows starting from the beginning of the part
2020.08.11 16:03:32.849076 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_19_19_0, total 7 rows starting from the beginning of the part
2020.08.11 16:03:32.858850 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_20_20_0, total 8 rows starting from the beginning of the part
2020.08.11 16:03:32.868647 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_21_21_0, total 7 rows starting from the beginning of the part
2020.08.11 16:03:32.944057 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 153 rows, containing 211 columns (211 merged, 0 gathered) in 0.13 sec., 1215.69 rows/sec., 2.04 MB/sec.
2020.08.11 16:03:32.956570 [ 128 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_21_4 to 202008_1_21_4.
2020.08.11 16:03:32.959429 [ 128 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_16_3 to 202008_21_21_0
2020.08.11 16:03:40.319162 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:03:40.322824 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:03:40.345330 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_21_21_0 to 202008_22_22_0.
2020.08.11 16:03:46.422337 [ 112 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 16:03:46.422734 [ 1 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:03:46.423252 [ 1 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:03:47.280094 [ 1 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 16:03:47.280881 [ 1 ] {} <Information> Application: Closed connections.
2020.08.11 16:03:47.283975 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 16:03:47.854008 [ 131 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:03:47.863501 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:03:47.901867 [ 131 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_22_22_0 to 202008_23_23_0.
2020.08.11 16:03:48.018127 [ 1 ] {} <Trace> system.metric_log: Found 24 old parts to remove.
2020.08.11 16:03:48.018808 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_1_0
2020.08.11 16:03:48.052212 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_6_1
2020.08.11 16:03:48.066174 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_11_2
2020.08.11 16:03:48.078299 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_16_3
2020.08.11 16:03:48.090771 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_2_2_0
2020.08.11 16:03:48.103177 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_3_3_0
2020.08.11 16:03:48.115813 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_4_4_0
2020.08.11 16:03:48.127831 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_5_5_0
2020.08.11 16:03:48.140415 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_6_6_0
2020.08.11 16:03:48.152959 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_7_7_0
2020.08.11 16:03:48.165611 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_8_8_0
2020.08.11 16:03:48.180385 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_9_9_0
2020.08.11 16:03:48.192394 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_10_10_0
2020.08.11 16:03:48.204650 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_11_11_0
2020.08.11 16:03:48.218373 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_12_12_0
2020.08.11 16:03:48.230422 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_13_13_0
2020.08.11 16:03:48.242507 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_14_14_0
2020.08.11 16:03:48.254664 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_15_15_0
2020.08.11 16:03:48.267237 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_16_16_0
2020.08.11 16:03:48.279243 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_17_17_0
2020.08.11 16:03:48.291770 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_18_18_0
2020.08.11 16:03:48.304385 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_19_19_0
2020.08.11 16:03:48.316624 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_20_20_0
2020.08.11 16:03:48.328872 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_21_21_0
2020.08.11 16:03:48.344674 [ 1 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 16:03:48.345175 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 16:03:48.347291 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 16:03:48.347602 [ 1 ] {} <Information> Application: shutting down
2020.08.11 16:03:48.347781 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 16:03:48.347978 [ 112 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 16:03:53.263311 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:03:53.267000 [ 39 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:03:53.267303 [ 39 ] {} <Information> Application: starting up
2020.08.11 16:03:53.273005 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:03:53.273294 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:03:53.273520 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:03:53.273712 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:03:53.274619 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'c24074755b35' as replica host.
2020.08.11 16:03:53.277625 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:03:53.279969 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:03:53.281192 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:03:53.281561 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:03:53.284949 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 16:03:53.285281 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:03:53.285634 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:03:53.285988 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:03:53.286400 [ 39 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:03:53.288356 [ 39 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:03:53.288648 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:03:53.291597 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:53.292529 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:53.293442 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:53.294601 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:53.295675 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:03:53.296019 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:03:53.296573 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:03:53.298520 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:03:53.299090 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:03:53.299370 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:03:53.447351 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:03:53.449735 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:03:53.450050 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 16:03:54.245355 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41720, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 16:03:54.271544 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44174
2020.08.11 16:03:54.272604 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.3.0, revision: 54433, user: default.
2020.08.11 16:03:54.278734 [ 74 ] {24bc60ca-526f-4cf0-98cf-f58b58a25aaa} <Debug> executeQuery: (from 127.0.0.1:44174) CREATE DICTIONARY IF NOT EXISTS default.dict (`key` String, `value` String) PRIMARY KEY key SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 16:03:54.279282 [ 74 ] {24bc60ca-526f-4cf0-98cf-f58b58a25aaa} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:03:54.279865 [ 74 ] {24bc60ca-526f-4cf0-98cf-f58b58a25aaa} <Trace> AccessRightsContext (default): Settings: readonly=0, allow_ddl=1, allow_introspection_functions=0
2020.08.11 16:03:54.280439 [ 74 ] {24bc60ca-526f-4cf0-98cf-f58b58a25aaa} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:03:54.280869 [ 74 ] {24bc60ca-526f-4cf0-98cf-f58b58a25aaa} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 16:03:54.283970 [ 74 ] {24bc60ca-526f-4cf0-98cf-f58b58a25aaa} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:03:54.284321 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:03:54.284639 [ 74 ] {} <Information> TCPHandler: Processed in 0.006 sec.
2020.08.11 16:03:54.286348 [ 74 ] {f9963f8a-2654-4966-b1b3-d5f61b3d9959} <Debug> executeQuery: (from 127.0.0.1:44174) CREATE TABLE IF NOT EXISTS default.table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT if(dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))) != '', dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))), 'empty')) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 16:03:54.286707 [ 74 ] {f9963f8a-2654-4966-b1b3-d5f61b3d9959} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.table
2020.08.11 16:03:54.288211 [ 74 ] {f9963f8a-2654-4966-b1b3-d5f61b3d9959} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 16:03:54.290412 [ 74 ] {f9963f8a-2654-4966-b1b3-d5f61b3d9959} <Debug> default.table: Loading data parts
2020.08.11 16:03:54.291608 [ 74 ] {f9963f8a-2654-4966-b1b3-d5f61b3d9959} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 16:03:54.294672 [ 74 ] {f9963f8a-2654-4966-b1b3-d5f61b3d9959} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:03:54.295063 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:03:54.295299 [ 74 ] {} <Information> TCPHandler: Processed in 0.009 sec.
2020.08.11 16:03:54.295617 [ 74 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 16:03:54.302062 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 16:03:54.302647 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:03:54.302959 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:03:54.960984 [ 39 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 16:03:54.962891 [ 39 ] {} <Information> Application: Closed connections.
2020.08.11 16:03:54.969316 [ 39 ] {} <Information> Application: Shutting down storages.
2020.08.11 16:03:55.292245 [ 50 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:03:55.292829 [ 50 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 16:03:55.302346 [ 50 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 16:03:55.303648 [ 50 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 16:03:55.314747 [ 50 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:03:55.377200 [ 50 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 16:03:55.381198 [ 39 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 16:03:55.381807 [ 39 ] {} <Debug> Application: Shut down storages.
2020.08.11 16:03:55.383083 [ 39 ] {} <Debug> Application: Destroyed global context.
2020.08.11 16:03:55.383820 [ 39 ] {} <Information> Application: shutting down
2020.08.11 16:03:55.383962 [ 39 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 16:03:55.384196 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 16:03:55.471616 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:03:55.475578 [ 1 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:03:55.475976 [ 1 ] {} <Information> Application: starting up
2020.08.11 16:03:55.483499 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:03:55.483776 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:03:55.484003 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:03:55.484345 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:03:55.485164 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'c24074755b35' as replica host.
2020.08.11 16:03:55.488339 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:03:55.490575 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:03:55.490996 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:03:55.491333 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:03:55.494333 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 1 tables and 0 dictionaries.
2020.08.11 16:03:55.498897 [ 113 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 16:03:55.501830 [ 113 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 16:03:55.508566 [ 113 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 16:03:55.509322 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 16:03:55.514018 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 16:03:55.515221 [ 135 ] {} <Debug> default.table: Loading data parts
2020.08.11 16:03:55.515737 [ 135 ] {} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 16:03:55.516297 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:03:55.517872 [ 1 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:03:55.518231 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:03:55.518760 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:03:55.522258 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:03:55.522508 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:03:55.524742 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:55.525617 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:55.526625 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:55.527494 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:03:55.528026 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:03:55.528271 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:03:55.529156 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:03:55.530955 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:03:55.531469 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:03:55.531722 [ 1 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:03:55.682447 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:03:55.683506 [ 1 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:03:55.683792 [ 1 ] {} <Information> Application: Ready for connections.
2020.08.11 16:03:57.686894 [ 162 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2020.08.11 16:04:03.012130 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:04:03.014799 [ 133 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 16:04:03.024395 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:04:03.089342 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 16:04:04.877864 [ 112 ] {} <Information> Application: Received termination signal (Interrupt)
2020.08.11 16:04:04.878354 [ 1 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:04:04.878684 [ 1 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:04:05.564554 [ 1 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 16:04:05.565331 [ 1 ] {} <Information> Application: Closed connections.
2020.08.11 16:04:05.568982 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 16:04:06.512542 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:04:06.519681 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.36 GiB.
2020.08.11 16:04:06.542436 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_3_3_0.
2020.08.11 16:04:06.546333 [ 1 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 16:04:06.547466 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 16:04:06.549776 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 16:04:06.550537 [ 1 ] {} <Information> Application: shutting down
2020.08.11 16:04:06.550832 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 16:04:06.551243 [ 112 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 16:04:09.958414 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:04:09.963415 [ 39 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:04:09.963698 [ 39 ] {} <Information> Application: starting up
2020.08.11 16:04:09.969691 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:04:09.969991 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:04:09.970289 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:04:09.970538 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:04:09.971308 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '7d603a5b819e' as replica host.
2020.08.11 16:04:09.974213 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:04:09.976382 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:04:09.976916 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:04:09.977222 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:04:09.981598 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 16:04:09.981929 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:04:09.982293 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:04:09.982597 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:04:09.982995 [ 39 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:04:09.993889 [ 39 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:04:09.994174 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:04:09.996789 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:09.997622 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:09.998392 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:09.999122 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:09.999810 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:04:10.000279 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:04:10.000622 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:04:10.001746 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:04:10.002209 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:04:10.002548 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:04:10.154196 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:04:10.155794 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:04:10.156067 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 16:04:10.937389 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41726, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 16:04:10.966584 [ 48 ] {} <Information> Application: Received termination signal (Interrupt)
2020.08.11 16:04:10.967041 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:04:10.967409 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:04:10.970846 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 16:04:11.131075 [ 48 ] {} <Information> Application: Received termination signal (Interrupt)
2020.08.11 16:04:11.131495 [ 48 ] {} <Information> Application: Received second signal Interrupt. Immediately terminate.
2020.08.11 16:04:20.718509 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:04:20.722230 [ 39 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:04:20.722456 [ 39 ] {} <Information> Application: starting up
2020.08.11 16:04:20.728299 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:04:20.728618 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:04:20.728856 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:04:20.729117 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:04:20.729710 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '4e7a62a4a8c6' as replica host.
2020.08.11 16:04:20.744359 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:04:20.746283 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:04:20.746752 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:04:20.746974 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:04:20.749543 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 16:04:20.749866 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:04:20.750177 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:04:20.750476 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:04:20.750813 [ 39 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:04:20.753214 [ 39 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:04:20.753433 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:04:20.755383 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:20.756147 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:20.756853 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:20.757485 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:20.757991 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:04:20.758242 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:04:20.758506 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:04:20.759281 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:04:20.759639 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:04:20.759894 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:04:20.899884 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:04:20.901642 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:04:20.901905 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 16:04:21.701482 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41730, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 16:04:21.726070 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44184
2020.08.11 16:04:21.726584 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.3.0, revision: 54433, user: default.
2020.08.11 16:04:21.732899 [ 74 ] {82e9a863-b6f5-4aab-854d-1596a8ecc09d} <Debug> executeQuery: (from 127.0.0.1:44184) CREATE DICTIONARY IF NOT EXISTS default.dict (`key` String, `value` String) PRIMARY KEY key SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 16:04:21.733390 [ 74 ] {82e9a863-b6f5-4aab-854d-1596a8ecc09d} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:04:21.733694 [ 74 ] {82e9a863-b6f5-4aab-854d-1596a8ecc09d} <Trace> AccessRightsContext (default): Settings: readonly=0, allow_ddl=1, allow_introspection_functions=0
2020.08.11 16:04:21.733961 [ 74 ] {82e9a863-b6f5-4aab-854d-1596a8ecc09d} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:04:21.734173 [ 74 ] {82e9a863-b6f5-4aab-854d-1596a8ecc09d} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 16:04:21.737002 [ 74 ] {82e9a863-b6f5-4aab-854d-1596a8ecc09d} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:04:21.737343 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:04:21.737618 [ 74 ] {} <Information> TCPHandler: Processed in 0.005 sec.
2020.08.11 16:04:21.739097 [ 74 ] {843aa4c3-5082-4040-8d74-e8d69b411305} <Debug> executeQuery: (from 127.0.0.1:44184) CREATE TABLE IF NOT EXISTS default.table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` String DEFAULT if(dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))) != '', dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))), 'empty')) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 16:04:21.739359 [ 74 ] {843aa4c3-5082-4040-8d74-e8d69b411305} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.table
2020.08.11 16:04:21.740512 [ 74 ] {843aa4c3-5082-4040-8d74-e8d69b411305} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 16:04:21.742422 [ 74 ] {843aa4c3-5082-4040-8d74-e8d69b411305} <Debug> default.table: Loading data parts
2020.08.11 16:04:21.743146 [ 74 ] {843aa4c3-5082-4040-8d74-e8d69b411305} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 16:04:21.745662 [ 74 ] {843aa4c3-5082-4040-8d74-e8d69b411305} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:04:21.746089 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:04:21.746398 [ 74 ] {} <Information> TCPHandler: Processed in 0.008 sec.
2020.08.11 16:04:21.746682 [ 74 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 16:04:21.751091 [ 48 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 16:04:21.751421 [ 39 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:04:21.751650 [ 39 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:04:22.423491 [ 39 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 16:04:22.424023 [ 39 ] {} <Information> Application: Closed connections.
2020.08.11 16:04:22.426416 [ 39 ] {} <Information> Application: Shutting down storages.
2020.08.11 16:04:22.756532 [ 50 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:04:22.757186 [ 50 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 16:04:22.770008 [ 50 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 16:04:22.770671 [ 50 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 16:04:22.781844 [ 50 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:04:22.838629 [ 50 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 16:04:22.841034 [ 39 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 16:04:22.841491 [ 39 ] {} <Debug> Application: Shut down storages.
2020.08.11 16:04:22.842682 [ 39 ] {} <Debug> Application: Destroyed global context.
2020.08.11 16:04:22.843483 [ 39 ] {} <Information> Application: shutting down
2020.08.11 16:04:22.843645 [ 39 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 16:04:22.844156 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 16:04:22.887669 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:04:22.891170 [ 1 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:04:22.891443 [ 1 ] {} <Information> Application: starting up
2020.08.11 16:04:22.896800 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:04:22.897070 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:04:22.897325 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:04:22.897581 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:04:22.898055 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '4e7a62a4a8c6' as replica host.
2020.08.11 16:04:22.901650 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:04:22.903730 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:04:22.904176 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:04:22.904417 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:04:22.906987 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 1 tables and 0 dictionaries.
2020.08.11 16:04:22.910079 [ 113 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 16:04:22.912324 [ 113 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 16:04:22.918669 [ 113 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 16:04:22.919318 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 16:04:22.922542 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 16:04:22.923426 [ 135 ] {} <Debug> default.table: Loading data parts
2020.08.11 16:04:22.923885 [ 135 ] {} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 16:04:22.924231 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:04:22.924862 [ 1 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:04:22.925146 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:04:22.925438 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:04:22.927824 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:04:22.928042 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:04:22.929528 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:22.930225 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:22.930785 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:22.931365 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:04:22.931818 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:04:22.932060 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:04:22.932334 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:04:22.933070 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:04:22.933464 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:04:22.933676 [ 1 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:04:23.063561 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:04:23.065453 [ 1 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:04:23.065664 [ 1 ] {} <Information> Application: Ready for connections.
2020.08.11 16:04:25.075434 [ 162 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2020.08.11 16:04:30.427257 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:04:30.430142 [ 133 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 16:04:30.438157 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:04:30.497688 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 16:04:38.003609 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:04:38.015251 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:04:38.047198 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_3_3_0.
2020.08.11 16:04:45.548705 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:04:45.559213 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:04:45.593761 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_3_3_0 to 202008_4_4_0.
2020.08.11 16:04:53.097637 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:04:53.101057 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.35 GiB.
2020.08.11 16:04:53.128722 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_4_4_0 to 202008_5_5_0.
2020.08.11 16:05:00.633245 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:00.651217 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:00.687786 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_5_5_0 to 202008_6_6_0.
2020.08.11 16:05:00.688784 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_1_0 to 202008_6_6_0
2020.08.11 16:05:00.691618 [ 126 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:00.692106 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_1_0 to 202008_6_6_0 into tmp_merge_202008_1_6_1 with type Wide
2020.08.11 16:05:00.692605 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:05:00.693032 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_1_0, total 2 rows starting from the beginning of the part
2020.08.11 16:05:00.703358 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_2_2_0, total 8 rows starting from the beginning of the part
2020.08.11 16:05:00.713595 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_3_3_0, total 8 rows starting from the beginning of the part
2020.08.11 16:05:00.732908 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_4_4_0, total 7 rows starting from the beginning of the part
2020.08.11 16:05:00.742870 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_5_5_0, total 8 rows starting from the beginning of the part
2020.08.11 16:05:00.763443 [ 126 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_6_6_0, total 7 rows starting from the beginning of the part
2020.08.11 16:05:00.849575 [ 126 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 40 rows, containing 211 columns (211 merged, 0 gathered) in 0.16 sec., 253.94 rows/sec., 0.43 MB/sec.
2020.08.11 16:05:00.861785 [ 126 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_6_1 to 202008_1_6_1.
2020.08.11 16:05:00.864515 [ 126 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_1_0 to 202008_6_6_0
2020.08.11 16:05:08.190436 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:08.195889 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:08.225997 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_6_6_0 to 202008_7_7_0.
2020.08.11 16:05:15.727559 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:15.732753 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:15.759318 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_7_7_0 to 202008_8_8_0.
2020.08.11 16:05:23.261311 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:23.266565 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:23.292094 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_8_8_0 to 202008_9_9_0.
2020.08.11 16:05:28.608430 [ 112 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 16:05:28.608884 [ 1 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:05:28.609257 [ 1 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:05:29.325197 [ 1 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 16:05:29.325836 [ 1 ] {} <Information> Application: Closed connections.
2020.08.11 16:05:29.329402 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 16:05:29.921937 [ 133 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:29.935589 [ 133 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:29.969218 [ 133 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_9_9_0 to 202008_10_10_0.
2020.08.11 16:05:29.970788 [ 1 ] {} <Trace> system.metric_log: Found 6 old parts to remove.
2020.08.11 16:05:29.971366 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_1_0
2020.08.11 16:05:29.984587 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_2_2_0
2020.08.11 16:05:29.997493 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_3_3_0
2020.08.11 16:05:30.009891 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_4_4_0
2020.08.11 16:05:30.023115 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_5_5_0
2020.08.11 16:05:30.035065 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_6_6_0
2020.08.11 16:05:30.055761 [ 1 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 16:05:30.056329 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 16:05:30.058010 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 16:05:30.058374 [ 1 ] {} <Information> Application: shutting down
2020.08.11 16:05:30.058647 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 16:05:30.059054 [ 112 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 16:05:34.985872 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:05:34.989524 [ 38 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:05:34.989815 [ 38 ] {} <Information> Application: starting up
2020.08.11 16:05:34.995214 [ 38 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:05:34.995551 [ 38 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:05:34.995801 [ 38 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:05:34.996017 [ 38 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:05:34.996541 [ 38 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '460edfecc17e' as replica host.
2020.08.11 16:05:34.999359 [ 38 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:05:35.001402 [ 38 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:05:35.001787 [ 38 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:05:35.002017 [ 38 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:05:35.004831 [ 38 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 16:05:35.005180 [ 38 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:05:35.005548 [ 38 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:05:35.005773 [ 38 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:05:35.006092 [ 38 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:05:35.008706 [ 38 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:05:35.008941 [ 38 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:05:35.011595 [ 38 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:35.012378 [ 38 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:35.013009 [ 38 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:35.013695 [ 38 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:35.014173 [ 38 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:05:35.014425 [ 38 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:05:35.014683 [ 38 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:05:35.015418 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:05:35.015808 [ 38 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:05:35.016028 [ 38 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:05:35.112558 [ 38 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:05:35.114071 [ 38 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:05:35.114344 [ 38 ] {} <Information> Application: Ready for connections.
2020.08.11 16:05:35.964297 [ 72 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41736, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 16:05:36.004334 [ 73 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44190
2020.08.11 16:05:36.004862 [ 73 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.3.0, revision: 54433, user: default.
2020.08.11 16:05:36.018010 [ 73 ] {737430a5-8183-4663-b716-7001747e5d25} <Debug> executeQuery: (from 127.0.0.1:44190) CREATE DICTIONARY IF NOT EXISTS default.dict (`key` String, `value` String) PRIMARY KEY key SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 16:05:36.018355 [ 73 ] {737430a5-8183-4663-b716-7001747e5d25} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:05:36.018567 [ 73 ] {737430a5-8183-4663-b716-7001747e5d25} <Trace> AccessRightsContext (default): Settings: readonly=0, allow_ddl=1, allow_introspection_functions=0
2020.08.11 16:05:36.018783 [ 73 ] {737430a5-8183-4663-b716-7001747e5d25} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:05:36.018998 [ 73 ] {737430a5-8183-4663-b716-7001747e5d25} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict
2020.08.11 16:05:36.021693 [ 73 ] {737430a5-8183-4663-b716-7001747e5d25} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:05:36.022046 [ 73 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:05:36.022248 [ 73 ] {} <Information> TCPHandler: Processed in 0.004 sec.
2020.08.11 16:05:36.023719 [ 73 ] {f0a791d4-35eb-4e14-bd24-c9d26bbfe562} <Debug> executeQuery: (from 127.0.0.1:44190) CREATE TABLE IF NOT EXISTS default.table (`site_id` UInt32, `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT if(dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))) != '', dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))), 'empty')) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 16:05:36.023990 [ 73 ] {f0a791d4-35eb-4e14-bd24-c9d26bbfe562} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.table
2020.08.11 16:05:36.025064 [ 73 ] {f0a791d4-35eb-4e14-bd24-c9d26bbfe562} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 16:05:36.027240 [ 73 ] {f0a791d4-35eb-4e14-bd24-c9d26bbfe562} <Debug> default.table: Loading data parts
2020.08.11 16:05:36.027694 [ 73 ] {f0a791d4-35eb-4e14-bd24-c9d26bbfe562} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 16:05:36.029561 [ 73 ] {f0a791d4-35eb-4e14-bd24-c9d26bbfe562} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:05:36.029850 [ 73 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:05:36.030109 [ 73 ] {} <Information> TCPHandler: Processed in 0.007 sec.
2020.08.11 16:05:36.030368 [ 73 ] {} <Information> TCPHandler: Done processing connection.
2020.08.11 16:05:36.034721 [ 47 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 16:05:36.035030 [ 38 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:05:36.035256 [ 38 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:05:36.633013 [ 38 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 16:05:36.633794 [ 38 ] {} <Information> Application: Closed connections.
2020.08.11 16:05:36.638643 [ 38 ] {} <Information> Application: Shutting down storages.
2020.08.11 16:05:37.004860 [ 52 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:37.005684 [ 52 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2020.08.11 16:05:37.018644 [ 52 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 16:05:37.019655 [ 52 ] {} <Debug> system.metric_log: Loaded data parts (0 items)
2020.08.11 16:05:37.029467 [ 52 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:37.091938 [ 52 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_1_1_0.
2020.08.11 16:05:37.094313 [ 38 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 16:05:37.094849 [ 38 ] {} <Debug> Application: Shut down storages.
2020.08.11 16:05:37.095939 [ 38 ] {} <Debug> Application: Destroyed global context.
2020.08.11 16:05:37.096626 [ 38 ] {} <Information> Application: shutting down
2020.08.11 16:05:37.096858 [ 38 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 16:05:37.097223 [ 47 ] {} <Information> BaseDaemon: Stop SignalListener thread
2020.08.11 16:05:37.150682 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:05:37.154488 [ 1 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:05:37.154822 [ 1 ] {} <Information> Application: starting up
2020.08.11 16:05:37.160559 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:05:37.160838 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:05:37.161074 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:05:37.161319 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:05:37.161928 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '460edfecc17e' as replica host.
2020.08.11 16:05:37.168040 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:05:37.170201 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:05:37.170661 [ 1 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:05:37.170901 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:05:37.174136 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 1 tables and 0 dictionaries.
2020.08.11 16:05:37.178050 [ 112 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2020.08.11 16:05:37.181004 [ 112 ] {} <Debug> system.metric_log: Loading data parts
2020.08.11 16:05:37.190277 [ 112 ] {} <Debug> system.metric_log: Loaded data parts (1 items)
2020.08.11 16:05:37.190935 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2020.08.11 16:05:37.194929 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 1 tables and 1 dictionaries.
2020.08.11 16:05:37.195956 [ 135 ] {} <Debug> default.table: Loading data parts
2020.08.11 16:05:37.196463 [ 135 ] {} <Debug> default.table: Loaded data parts (0 items)
2020.08.11 16:05:37.197070 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:05:37.197990 [ 1 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:05:37.198268 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:05:37.198553 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:05:37.200775 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:05:37.200989 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:05:37.202709 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:37.203605 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:37.204446 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:37.205312 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:05:37.205867 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:05:37.206203 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:05:37.206550 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:05:37.207325 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:05:37.207763 [ 1 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:05:37.207947 [ 1 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:05:37.312039 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:05:37.312987 [ 1 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:05:37.313210 [ 1 ] {} <Information> Application: Ready for connections.
2020.08.11 16:05:39.325706 [ 161 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2020.08.11 16:05:44.695316 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:44.697533 [ 132 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2020.08.11 16:05:44.707614 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:44.757237 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_1_1_0 to 202008_2_2_0.
2020.08.11 16:05:52.261122 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:52.265273 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:52.288868 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_2_2_0 to 202008_3_3_0.
2020.08.11 16:05:59.794564 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:05:59.800737 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:05:59.832697 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_3_3_0 to 202008_4_4_0.
2020.08.11 16:06:07.336167 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:06:07.339485 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:06:07.362900 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_4_4_0 to 202008_5_5_0.
2020.08.11 16:06:14.864835 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:06:14.870729 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.34 GiB.
2020.08.11 16:06:14.899856 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_5_5_0 to 202008_6_6_0.
2020.08.11 16:06:14.900864 [ 115 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_1_0 to 202008_6_6_0
2020.08.11 16:06:14.903244 [ 115 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:06:14.903526 [ 115 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_1_0 to 202008_6_6_0 into tmp_merge_202008_1_6_1 with type Wide
2020.08.11 16:06:14.904039 [ 115 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:06:14.904424 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_1_0, total 2 rows starting from the beginning of the part
2020.08.11 16:06:14.914070 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_2_2_0, total 8 rows starting from the beginning of the part
2020.08.11 16:06:14.924309 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_3_3_0, total 8 rows starting from the beginning of the part
2020.08.11 16:06:14.945742 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_4_4_0, total 7 rows starting from the beginning of the part
2020.08.11 16:06:14.955627 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_5_5_0, total 8 rows starting from the beginning of the part
2020.08.11 16:06:14.980534 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_6_6_0, total 7 rows starting from the beginning of the part
2020.08.11 16:06:15.046901 [ 115 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 40 rows, containing 211 columns (211 merged, 0 gathered) in 0.14 sec., 278.98 rows/sec., 0.47 MB/sec.
2020.08.11 16:06:15.058617 [ 115 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_6_1 to 202008_1_6_1.
2020.08.11 16:06:15.061687 [ 115 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_1_0 to 202008_6_6_0
2020.08.11 16:06:22.402277 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:06:22.410667 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:06:22.446895 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_6_6_0 to 202008_7_7_0.
2020.08.11 16:06:29.955522 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:06:29.969662 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:06:30.005261 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_7_7_0 to 202008_8_8_0.
2020.08.11 16:06:37.510524 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:06:37.516136 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:06:37.549350 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_8_8_0 to 202008_9_9_0.
2020.08.11 16:06:45.272404 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:06:45.281339 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:06:45.341079 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_9_9_0 to 202008_10_10_0.
2020.08.11 16:06:52.844059 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:06:52.850262 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.33 GiB.
2020.08.11 16:06:52.878985 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_10_10_0 to 202008_11_11_0.
2020.08.11 16:06:52.880368 [ 114 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_6_1 to 202008_11_11_0
2020.08.11 16:06:52.882942 [ 114 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:06:52.883196 [ 114 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_6_1 to 202008_11_11_0 into tmp_merge_202008_1_11_2 with type Wide
2020.08.11 16:06:52.883575 [ 114 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:06:52.883892 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_6_1, total 40 rows starting from the beginning of the part
2020.08.11 16:06:52.893785 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_7_7_0, total 8 rows starting from the beginning of the part
2020.08.11 16:06:52.903924 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_8_8_0, total 7 rows starting from the beginning of the part
2020.08.11 16:06:52.913864 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_9_9_0, total 8 rows starting from the beginning of the part
2020.08.11 16:06:52.923716 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_10_10_0, total 8 rows starting from the beginning of the part
2020.08.11 16:06:52.933941 [ 114 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_11_11_0, total 7 rows starting from the beginning of the part
2020.08.11 16:06:53.002654 [ 114 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 78 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 652.95 rows/sec., 1.10 MB/sec.
2020.08.11 16:06:53.014573 [ 114 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_11_2 to 202008_1_11_2.
2020.08.11 16:06:53.017368 [ 114 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_6_1 to 202008_11_11_0
2020.08.11 16:07:00.385589 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:07:00.391763 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:07:00.420666 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_11_11_0 to 202008_12_12_0.
2020.08.11 16:07:07.922557 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:07:07.926106 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:07:07.956692 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_12_12_0 to 202008_13_13_0.
2020.08.11 16:07:15.459150 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:07:15.463151 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:07:15.486211 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_13_13_0 to 202008_14_14_0.
2020.08.11 16:07:22.988930 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:07:22.998527 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:07:23.031978 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_14_14_0 to 202008_15_15_0.
2020.08.11 16:07:30.533768 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:07:30.541041 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.32 GiB.
2020.08.11 16:07:30.571495 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_15_15_0 to 202008_16_16_0.
2020.08.11 16:07:30.572383 [ 117 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_11_2 to 202008_16_16_0
2020.08.11 16:07:30.574894 [ 117 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 16:07:30.575223 [ 117 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_11_2 to 202008_16_16_0 into tmp_merge_202008_1_16_3 with type Wide
2020.08.11 16:07:30.575561 [ 117 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:07:30.575889 [ 117 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_11_2, total 78 rows starting from the beginning of the part
2020.08.11 16:07:30.585794 [ 117 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_12_12_0, total 8 rows starting from the beginning of the part
2020.08.11 16:07:30.595866 [ 117 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_13_13_0, total 7 rows starting from the beginning of the part
2020.08.11 16:07:30.605799 [ 117 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_14_14_0, total 8 rows starting from the beginning of the part
2020.08.11 16:07:30.616000 [ 117 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_15_15_0, total 7 rows starting from the beginning of the part
2020.08.11 16:07:30.626045 [ 117 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_16_16_0, total 8 rows starting from the beginning of the part
2020.08.11 16:07:30.695286 [ 117 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 116 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 966.19 rows/sec., 1.62 MB/sec.
2020.08.11 16:07:30.707742 [ 117 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_16_3 to 202008_1_16_3.
2020.08.11 16:07:30.710739 [ 117 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_11_2 to 202008_16_16_0
2020.08.11 16:07:38.075330 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:07:38.085399 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 16:07:38.121073 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_16_16_0 to 202008_17_17_0.
2020.08.11 16:07:45.623561 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:07:45.642522 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 16:07:45.671508 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_17_17_0 to 202008_18_18_0.
2020.08.11 16:07:53.173092 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:07:53.182332 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 16:07:53.212672 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_18_18_0 to 202008_19_19_0.
2020.08.11 16:08:00.715760 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:08:00.721523 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 16:08:00.746277 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_19_19_0 to 202008_20_20_0.
2020.08.11 16:08:08.251986 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:08:08.267694 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.31 GiB.
2020.08.11 16:08:08.303669 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_20_20_0 to 202008_21_21_0.
2020.08.11 16:08:08.305592 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_16_3 to 202008_21_21_0
2020.08.11 16:08:08.309117 [ 121 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 16:08:08.309437 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_16_3 to 202008_21_21_0 into tmp_merge_202008_1_21_4 with type Wide
2020.08.11 16:08:08.309910 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:08:08.310301 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_16_3, total 116 rows starting from the beginning of the part
2020.08.11 16:08:08.320216 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_17_17_0, total 7 rows starting from the beginning of the part
2020.08.11 16:08:08.330192 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_18_18_0, total 8 rows starting from the beginning of the part
2020.08.11 16:08:08.340033 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_19_19_0, total 7 rows starting from the beginning of the part
2020.08.11 16:08:08.349964 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_20_20_0, total 8 rows starting from the beginning of the part
2020.08.11 16:08:08.360018 [ 121 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_21_21_0, total 8 rows starting from the beginning of the part
2020.08.11 16:08:08.429841 [ 121 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 154 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 1279.05 rows/sec., 2.15 MB/sec.
2020.08.11 16:08:08.442410 [ 121 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_21_4 to 202008_1_21_4.
2020.08.11 16:08:08.445245 [ 121 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_16_3 to 202008_21_21_0
2020.08.11 16:08:15.809388 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:08:15.814775 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 16:08:15.838516 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_21_21_0 to 202008_22_22_0.
2020.08.11 16:08:23.345708 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:08:23.349935 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 16:08:23.373503 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_22_22_0 to 202008_23_23_0.
2020.08.11 16:08:30.878325 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:08:30.895637 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 16:08:30.928432 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_23_23_0 to 202008_24_24_0.
2020.08.11 16:08:38.431721 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:08:38.444735 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 16:08:38.480559 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_24_24_0 to 202008_25_25_0.
2020.08.11 16:08:45.982625 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:08:45.988718 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.30 GiB.
2020.08.11 16:08:46.010578 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_25_25_0 to 202008_26_26_0.
2020.08.11 16:08:46.011622 [ 115 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_21_4 to 202008_26_26_0
2020.08.11 16:08:46.014796 [ 115 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 16:08:46.015106 [ 115 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_21_4 to 202008_26_26_0 into tmp_merge_202008_1_26_5 with type Wide
2020.08.11 16:08:46.015589 [ 115 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:08:46.015931 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_21_4, total 154 rows starting from the beginning of the part
2020.08.11 16:08:46.025686 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_22_22_0, total 7 rows starting from the beginning of the part
2020.08.11 16:08:46.035410 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_23_23_0, total 8 rows starting from the beginning of the part
2020.08.11 16:08:46.045033 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_24_24_0, total 7 rows starting from the beginning of the part
2020.08.11 16:08:46.055562 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_25_25_0, total 8 rows starting from the beginning of the part
2020.08.11 16:08:46.064948 [ 115 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_26_26_0, total 7 rows starting from the beginning of the part
2020.08.11 16:08:46.130661 [ 115 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 191 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 1653.44 rows/sec., 2.77 MB/sec.
2020.08.11 16:08:46.142776 [ 115 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_26_5 to 202008_1_26_5.
2020.08.11 16:08:46.145596 [ 115 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_21_4 to 202008_26_26_0
2020.08.11 16:08:53.514757 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:08:53.525916 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 16:08:53.558334 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_26_26_0 to 202008_27_27_0.
2020.08.11 16:09:01.061568 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:09:01.075042 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 16:09:01.107748 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_27_27_0 to 202008_28_28_0.
2020.08.11 16:09:08.610559 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:09:08.620313 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 16:09:08.649857 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_28_28_0 to 202008_29_29_0.
2020.08.11 16:09:16.154872 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:09:16.160502 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 16:09:16.185218 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_29_29_0 to 202008_30_30_0.
2020.08.11 16:09:23.691618 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:09:23.696850 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.29 GiB.
2020.08.11 16:09:23.721947 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_30_30_0 to 202008_31_31_0.
2020.08.11 16:09:23.722927 [ 124 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_26_5 to 202008_31_31_0
2020.08.11 16:09:23.725245 [ 124 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 16:09:23.725398 [ 124 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_26_5 to 202008_31_31_0 into tmp_merge_202008_1_31_6 with type Wide
2020.08.11 16:09:23.725711 [ 124 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:09:23.725930 [ 124 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_26_5, total 191 rows starting from the beginning of the part
2020.08.11 16:09:23.739817 [ 124 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_27_27_0, total 8 rows starting from the beginning of the part
2020.08.11 16:09:23.751141 [ 124 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_28_28_0, total 7 rows starting from the beginning of the part
2020.08.11 16:09:23.762212 [ 124 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_29_29_0, total 8 rows starting from the beginning of the part
2020.08.11 16:09:23.772041 [ 124 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_30_30_0, total 7 rows starting from the beginning of the part
2020.08.11 16:09:23.782157 [ 124 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_31_31_0, total 8 rows starting from the beginning of the part
2020.08.11 16:09:23.854510 [ 124 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 229 rows, containing 211 columns (211 merged, 0 gathered) in 0.13 sec., 1773.66 rows/sec., 2.98 MB/sec.
2020.08.11 16:09:23.866486 [ 124 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_31_6 to 202008_1_31_6.
2020.08.11 16:09:23.869152 [ 124 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_26_5 to 202008_31_31_0
2020.08.11 16:09:31.223646 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:09:31.228675 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 16:09:31.252812 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_31_31_0 to 202008_32_32_0.
2020.08.11 16:09:38.754856 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:09:38.767153 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 16:09:38.801192 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_32_32_0 to 202008_33_33_0.
2020.08.11 16:09:46.304533 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:09:46.309534 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 16:09:46.333051 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_33_33_0 to 202008_34_34_0.
2020.08.11 16:09:53.834379 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:09:53.839033 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 16:09:53.862436 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_34_34_0 to 202008_35_35_0.
2020.08.11 16:10:01.364197 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:10:01.378363 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.28 GiB.
2020.08.11 16:10:01.409160 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_35_35_0 to 202008_36_36_0.
2020.08.11 16:10:01.410148 [ 116 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_31_6 to 202008_36_36_0
2020.08.11 16:10:01.412685 [ 116 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 16:10:01.412976 [ 116 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_31_6 to 202008_36_36_0 into tmp_merge_202008_1_36_7 with type Wide
2020.08.11 16:10:01.413496 [ 116 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:10:01.413862 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_31_6, total 229 rows starting from the beginning of the part
2020.08.11 16:10:01.423451 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_32_32_0, total 8 rows starting from the beginning of the part
2020.08.11 16:10:01.433431 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_33_33_0, total 7 rows starting from the beginning of the part
2020.08.11 16:10:01.443410 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_34_34_0, total 8 rows starting from the beginning of the part
2020.08.11 16:10:01.453074 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_35_35_0, total 7 rows starting from the beginning of the part
2020.08.11 16:10:01.462934 [ 116 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_36_36_0, total 8 rows starting from the beginning of the part
2020.08.11 16:10:01.531641 [ 116 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 267 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 2250.15 rows/sec., 3.78 MB/sec.
2020.08.11 16:10:01.544416 [ 116 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_36_7 to 202008_1_36_7.
2020.08.11 16:10:01.547181 [ 116 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_31_6 to 202008_36_36_0
2020.08.11 16:10:08.916914 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:10:08.921808 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 16:10:08.944966 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_36_36_0 to 202008_37_37_0.
2020.08.11 16:10:16.449247 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:10:16.460250 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 16:10:16.487478 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_37_37_0 to 202008_38_38_0.
2020.08.11 16:10:23.990627 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:10:23.999808 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 16:10:24.036018 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_38_38_0 to 202008_39_39_0.
2020.08.11 16:10:31.616928 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:10:31.652413 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 16:10:31.715134 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_39_39_0 to 202008_40_40_0.
2020.08.11 16:10:39.219412 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:10:39.223129 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.27 GiB.
2020.08.11 16:10:39.246108 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_40_40_0 to 202008_41_41_0.
2020.08.11 16:10:39.246997 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_36_7 to 202008_41_41_0
2020.08.11 16:10:39.249923 [ 128 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 16:10:39.250191 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_36_7 to 202008_41_41_0 into tmp_merge_202008_1_41_8 with type Wide
2020.08.11 16:10:39.250671 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:10:39.251035 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_36_7, total 267 rows starting from the beginning of the part
2020.08.11 16:10:39.260773 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_37_37_0, total 7 rows starting from the beginning of the part
2020.08.11 16:10:39.270543 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_38_38_0, total 8 rows starting from the beginning of the part
2020.08.11 16:10:39.280360 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_39_39_0, total 7 rows starting from the beginning of the part
2020.08.11 16:10:39.290088 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_40_40_0, total 8 rows starting from the beginning of the part
2020.08.11 16:10:39.299682 [ 128 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_41_41_0, total 8 rows starting from the beginning of the part
2020.08.11 16:10:39.366742 [ 128 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 305 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 2617.18 rows/sec., 4.39 MB/sec.
2020.08.11 16:10:39.378793 [ 128 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_41_8 to 202008_1_41_8.
2020.08.11 16:10:39.381647 [ 128 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_36_7 to 202008_41_41_0
2020.08.11 16:10:46.749424 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:10:46.755970 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 16:10:46.780553 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_41_41_0 to 202008_42_42_0.
2020.08.11 16:10:54.282707 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:10:54.290997 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 16:10:54.325650 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_42_42_0 to 202008_43_43_0.
2020.08.11 16:11:01.827366 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:11:01.830765 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 16:11:01.853448 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_43_43_0 to 202008_44_44_0.
2020.08.11 16:11:09.358154 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:11:09.375503 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 16:11:09.406110 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_44_44_0 to 202008_45_45_0.
2020.08.11 16:11:16.909308 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:11:16.914088 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.26 GiB.
2020.08.11 16:11:16.951761 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_45_45_0 to 202008_46_46_0.
2020.08.11 16:11:16.952801 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Selected 6 parts from 202008_1_41_8 to 202008_46_46_0
2020.08.11 16:11:16.955302 [ 127 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 16:11:16.955614 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Merging 6 parts: from 202008_1_41_8 to 202008_46_46_0 into tmp_merge_202008_1_46_9 with type Wide
2020.08.11 16:11:16.956113 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Selected MergeAlgorithm: Horizontal
2020.08.11 16:11:16.956470 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_1_41_8, total 305 rows starting from the beginning of the part
2020.08.11 16:11:16.969225 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_42_42_0, total 7 rows starting from the beginning of the part
2020.08.11 16:11:16.978991 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_43_43_0, total 8 rows starting from the beginning of the part
2020.08.11 16:11:16.989101 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_44_44_0, total 7 rows starting from the beginning of the part
2020.08.11 16:11:16.999826 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_45_45_0, total 8 rows starting from the beginning of the part
2020.08.11 16:11:17.011561 [ 127 ] {} <Trace> MergeTreeSequentialBlockInputStream: Reading 2 marks from part 202008_46_46_0, total 7 rows starting from the beginning of the part
2020.08.11 16:11:17.080507 [ 127 ] {} <Debug> system.metric_log (MergerMutator): Merge sorted 342 rows, containing 211 columns (211 merged, 0 gathered) in 0.12 sec., 2738.49 rows/sec., 4.60 MB/sec.
2020.08.11 16:11:17.092449 [ 127 ] {} <Trace> system.metric_log: Renaming temporary part tmp_merge_202008_1_46_9 to 202008_1_46_9.
2020.08.11 16:11:17.095324 [ 127 ] {} <Trace> system.metric_log (MergerMutator): Merged 6 parts: from 202008_1_41_8 to 202008_46_46_0
2020.08.11 16:11:24.457552 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:11:24.462363 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 16:11:24.485473 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_46_46_0 to 202008_47_47_0.
2020.08.11 16:11:28.351509 [ 111 ] {} <Information> Application: Received termination signal (Terminated)
2020.08.11 16:11:28.352041 [ 1 ] {} <Debug> Application: Received termination signal.
2020.08.11 16:11:28.352279 [ 1 ] {} <Debug> Application: Waiting for current connections to close.
2020.08.11 16:11:29.134886 [ 1 ] {} <Information> Application: Closed all listening sockets.
2020.08.11 16:11:29.135409 [ 1 ] {} <Information> Application: Closed connections.
2020.08.11 16:11:29.142323 [ 1 ] {} <Information> Application: Shutting down storages.
2020.08.11 16:11:29.201212 [ 132 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2020.08.11 16:11:29.209316 [ 132 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 15.25 GiB.
2020.08.11 16:11:29.235218 [ 132 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202008_47_47_0 to 202008_48_48_0.
2020.08.11 16:11:29.236638 [ 1 ] {} <Trace> system.metric_log: Found 54 old parts to remove.
2020.08.11 16:11:29.236872 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_1_0
2020.08.11 16:11:29.249757 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_6_1
2020.08.11 16:11:29.261949 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_11_2
2020.08.11 16:11:29.274029 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_16_3
2020.08.11 16:11:29.287248 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_21_4
2020.08.11 16:11:29.299379 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_26_5
2020.08.11 16:11:29.311692 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_31_6
2020.08.11 16:11:29.323946 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_36_7
2020.08.11 16:11:29.336470 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_1_41_8
2020.08.11 16:11:29.348813 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_2_2_0
2020.08.11 16:11:29.361458 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_3_3_0
2020.08.11 16:11:29.374100 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_4_4_0
2020.08.11 16:11:29.386557 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_5_5_0
2020.08.11 16:11:29.398979 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_6_6_0
2020.08.11 16:11:29.411293 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_7_7_0
2020.08.11 16:11:29.423695 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_8_8_0
2020.08.11 16:11:29.436468 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_9_9_0
2020.08.11 16:11:29.449226 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_10_10_0
2020.08.11 16:11:29.461835 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_11_11_0
2020.08.11 16:11:29.474772 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_12_12_0
2020.08.11 16:11:29.487387 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_13_13_0
2020.08.11 16:11:29.500090 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_14_14_0
2020.08.11 16:11:29.512598 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_15_15_0
2020.08.11 16:11:29.525150 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_16_16_0
2020.08.11 16:11:29.539143 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_17_17_0
2020.08.11 16:11:29.556154 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_18_18_0
2020.08.11 16:11:29.571608 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_19_19_0
2020.08.11 16:11:29.585055 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_20_20_0
2020.08.11 16:11:29.598413 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_21_21_0
2020.08.11 16:11:29.610697 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_22_22_0
2020.08.11 16:11:29.622788 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_23_23_0
2020.08.11 16:11:29.635470 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_24_24_0
2020.08.11 16:11:29.648204 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_25_25_0
2020.08.11 16:11:29.660283 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_26_26_0
2020.08.11 16:11:29.672650 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_27_27_0
2020.08.11 16:11:29.684945 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_28_28_0
2020.08.11 16:11:29.698086 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_29_29_0
2020.08.11 16:11:29.710058 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_30_30_0
2020.08.11 16:11:29.722514 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_31_31_0
2020.08.11 16:11:29.734879 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_32_32_0
2020.08.11 16:11:29.746971 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_33_33_0
2020.08.11 16:11:29.759257 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_34_34_0
2020.08.11 16:11:29.771741 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_35_35_0
2020.08.11 16:11:29.785151 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_36_36_0
2020.08.11 16:11:29.799204 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_37_37_0
2020.08.11 16:11:29.811790 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_38_38_0
2020.08.11 16:11:29.824186 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_39_39_0
2020.08.11 16:11:29.836440 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_40_40_0
2020.08.11 16:11:29.850291 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_41_41_0
2020.08.11 16:11:29.862771 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_42_42_0
2020.08.11 16:11:29.875123 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_43_43_0
2020.08.11 16:11:29.887848 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_44_44_0
2020.08.11 16:11:29.900621 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_45_45_0
2020.08.11 16:11:29.913194 [ 1 ] {} <Debug> system.metric_log: Removing part from filesystem 202008_46_46_0
2020.08.11 16:11:29.938727 [ 1 ] {} <Trace> BackgroundSchedulePool: Waiting for threads to finish.
2020.08.11 16:11:29.939189 [ 1 ] {} <Debug> Application: Shut down storages.
2020.08.11 16:11:29.940805 [ 1 ] {} <Debug> Application: Destroyed global context.
2020.08.11 16:11:29.941110 [ 1 ] {} <Information> Application: shutting down
2020.08.11 16:11:29.941316 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2020.08.11 16:11:29.941621 [ 111 ] {} <Information> BaseDaemon: Stop SignalListener thread
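
For reference, the DDL that the runs above executed to set up the dictionary and the dependent table — taken verbatim from the executeQuery entries in the log, only reformatted onto multiple lines for readability:

CREATE DICTIONARY IF NOT EXISTS default.dict
(
    `key` String,
    `value` String
)
PRIMARY KEY key
SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict.txt' FORMAT 'TabSeparated'))
LIFETIME(MIN 300 MAX 600)
LAYOUT(COMPLEX_KEY_HASHED());

CREATE TABLE IF NOT EXISTS default.table
(
    `site_id` UInt32,
    `stamp` LowCardinality(Nullable(String)),
    -- the DEFAULT expression calls dictGetString on default.dict, which is what
    -- ties the table's metadata to the dictionary at attach time
    `md_ad_format` LowCardinality(String) DEFAULT if(
        dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))) != '',
        dictGetString('default.dict', 'value', tuple(coalesce(stamp, ''))),
        'empty')
)
ENGINE = MergeTree()
ORDER BY tuple();
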
2020.08.11 16:28:34.423866 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:28:34.429314 [ 39 ] {} <Information> : Starting ClickHouse 20.3.16.165 with revision 54433
2020.08.11 16:28:34.429771 [ 39 ] {} <Information> Application: starting up
2020.08.11 16:28:34.436963 [ 39 ] {} <Debug> Application: rlimit on number of file descriptors is 1048576
2020.08.11 16:28:34.437424 [ 39 ] {} <Debug> Application: Initializing DateLUT.
2020.08.11 16:28:34.437942 [ 39 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2020.08.11 16:28:34.438511 [ 39 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2020.08.11 16:28:34.439560 [ 39 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'cd0b35b0ae1c' as replica host.
2020.08.11 16:28:34.443797 [ 39 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2020.08.11 16:28:34.447186 [ 39 ] {} <Information> Application: Uncompressed cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:28:34.447815 [ 39 ] {} <Information> Application: Mark cache size was lowered to 996.09 MiB because the system has low amount of memory
2020.08.11 16:28:34.448348 [ 39 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2020.08.11 16:28:34.452034 [ 39 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2020.08.11 16:28:34.452387 [ 39 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2020.08.11 16:28:34.452818 [ 39 ] {} <Debug> Application: Loaded metadata.
2020.08.11 16:28:34.453318 [ 39 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2020.08.11 16:28:34.453859 [ 39 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2020.08.11 16:28:34.459059 [ 39 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2020.08.11 16:28:34.459579 [ 39 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2020.08.11 16:28:34.463918 [ 39 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:34.464829 [ 39 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:34.465624 [ 39 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:34.466521 [ 39 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.16.165 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2020.08.11 16:28:34.467186 [ 39 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2020.08.11 16:28:34.467493 [ 39 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2020.08.11 16:28:34.467819 [ 39 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2020.08.11 16:28:34.468993 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library::No such file or directory (version 20.3.16.165 (official build))
2020.08.11 16:28:34.469619 [ 39 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 76, e.displayText() = DB::Exception: Cannot open certificate file: /etc/clickhouse-server/server.crt. (version 20.3.16.165 (official build))
2020.08.11 16:28:34.469926 [ 39 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2020.08.11 16:28:34.560026 [ 39 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2020.08.11 16:28:34.562623 [ 39 ] {} <Information> Application: Available RAM: 1.95 GiB; physical cores: 8; logical cores: 8.
2020.08.11 16:28:34.563024 [ 39 ] {} <Information> Application: Ready for connections.
2020.08.11 16:28:35.404438 [ 73 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: HEAD, Address: 127.0.0.1:41742, User-Agent: Wget/1.19.4 (linux-gnu), Content Type: , Transfer Encoding: identity
2020.08.11 16:28:35.446718 [ 74 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:44196
2020.08.11 16:28:35.447255 [ 74 ] {} <Debug> TCPHandler: Connected ClickHouse client version 20.3.0, revision: 54433, user: default.
2020.08.11 16:28:35.453395 [ 74 ] {86267bed-d778-448a-bc47-2e0a7aabe892} <Debug> executeQuery: (from 127.0.0.1:44196) CREATE DICTIONARY IF NOT EXISTS default.dict_prod_mb2_params (`partner_id` UInt32, `order` UInt8, `name` String, `display_name` String, `is_forwarded` UInt8) PRIMARY KEY partner_id, display_name SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict_prod_mb2_params.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 16:28:35.453722 [ 74 ] {86267bed-d778-448a-bc47-2e0a7aabe892} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:28:35.453989 [ 74 ] {86267bed-d778-448a-bc47-2e0a7aabe892} <Trace> AccessRightsContext (default): Settings: readonly=0, allow_ddl=1, allow_introspection_functions=0
2020.08.11 16:28:35.454271 [ 74 ] {86267bed-d778-448a-bc47-2e0a7aabe892} <Trace> AccessRightsContext (default): List of all grants: GRANT SHOW, EXISTS, SELECT, INSERT, ALTER, CREATE, CREATE TEMPORARY TABLE, DROP, TRUNCATE, OPTIMIZE, KILL, CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, CREATE POLICY, ALTER POLICY, DROP POLICY, CREATE QUOTA, ALTER QUOTA, DROP QUOTA, ROLE ADMIN, SYSTEM, dictGet(), TABLE FUNCTIONS ON *.*, SELECT ON system.*
2020.08.11 16:28:35.454484 [ 74 ] {86267bed-d778-448a-bc47-2e0a7aabe892} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict_prod_mb2_params
2020.08.11 16:28:35.457639 [ 74 ] {86267bed-d778-448a-bc47-2e0a7aabe892} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:28:35.458057 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:28:35.458225 [ 74 ] {} <Information> TCPHandler: Processed in 0.005 sec.
2020.08.11 16:28:35.459106 [ 74 ] {5a1de998-6310-49b1-b74d-6ba1767f330e} <Debug> executeQuery: (from 127.0.0.1:44196) CREATE DICTIONARY IF NOT EXISTS default.dict_prod_partner_affiliate_links (`id` Int32, `partner_id` Int32, `network` String, `affiliate_id` Int32, `code_affilie` String) PRIMARY KEY code_affilie SOURCE(FILE(PATH '/var/lib/clickhouse/user_files/dict_prod_partner_affiliate_links.txt' FORMAT 'TabSeparated')) LIFETIME(MIN 300 MAX 600) LAYOUT(COMPLEX_KEY_HASHED())
2020.08.11 16:28:35.459410 [ 74 ] {5a1de998-6310-49b1-b74d-6ba1767f330e} <Trace> AccessRightsContext (default): Access granted: CREATE DICTIONARY ON default.dict_prod_partner_affiliate_links
2020.08.11 16:28:35.462022 [ 74 ] {5a1de998-6310-49b1-b74d-6ba1767f330e} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2020.08.11 16:28:35.462371 [ 74 ] {} <Debug> MemoryTracker: Peak memory usage (total): 0.00 B.
2020.08.11 16:28:35.462580 [ 74 ] {} <Information> TCPHandler: Processed in 0.004 sec.
2020.08.11 16:28:35.466037 [ 74 ] {333e8f74-e580-4ce1-b33f-aa896f0d2b1f} <Debug> executeQuery: (from 127.0.0.1:44196) CREATE TABLE IF NOT EXISTS default.table (`event_date` DateTime DEFAULT toDateTime('0000-00-00 00:00:00'), `code_affilie` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `stamp` LowCardinality(Nullable(String)), `md_ad_format` LowCardinality(String) DEFAULT CAST(if(dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format')) > 0, replaceAll(decodeURLComponent(extractAll(extract(coalesce(stamp, ''), '.*_MB:(.*)'), '\\[([^\\[\\]]*)\\]')[dictGetUInt8('default.dict_prod_mb2_params', 'order', (toUInt32(dictGetInt32('default.dict_prod_partner_affiliate_links', 'partner_id', tuple(code_affilie))), 'ad_format'))]), '+', ' '), ''), 'LowCardinality(String)')) ENGINE = MergeTree() ORDER BY tuple()
2020.08.11 16:28:35.466358 [ 74 ] {333e8f74-e580-4ce1-b33f-aa896f0d2b1f} <Trace> AccessRightsContext (default): Access granted: CREATE TABLE ON default.table
2020.08.11 16:28:35.509639 [ 88 ] {} <Error> void DB::ParallelParsingBlockInputStream::onBackgroundException(): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected \t before: network\t1\tcode_affilie: (at row 1)
Row 1:
Column 0, name: code_affilie, type: String, parsed text: "1"
Column 1, name: id, type: Int32, parsed text: "1"
Column 2, name: partner_id, type: Int32, ERROR: text "network<TAB>1<TAB>" is not like Int32
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105a6770 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f9772d in /usr/bin/clickhouse
2. ? @ 0x8fcf643 in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdde2217 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0xdc7cec1 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xdbf7dcb in /usr/bin/clickhouse
6. DB::InputStreamFromInputFormat::readImpl() @ 0xdba3aad in /usr/bin/clickhouse
7. DB::IBlockInputStream::read() @ 0xce9b47f in /usr/bin/clickhouse
8. DB::ParallelParsingBlockInputStream::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xdba7f82 in /usr/bin/clickhouse
9. ? @ 0xdbaab34 in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8fbbe77 in /usr/bin/clickhouse
11. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8fbc4f8 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fbb387 in /usr/bin/cli
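Note on the parse error above (a hedged reading of the log, not a confirmed diagnosis): the error trace reads the dictionary source with the primary-key column first (code_affilie, id, partner_id, ...), while the parsed values ("1", "1", then the literal text "network") suggest that dict_prod_partner_affiliate_links.txt stores its columns in declaration order (id, partner_id, network, affiliate_id, code_affilie). Assuming that column order is the cause, a TabSeparated row matching the order expected by the error trace would look roughly like the placeholder below (hypothetical values, tabs shown with the same <TAB> markers the log uses):

SOME_CODE<TAB>1<TAB>1<TAB>some_network<TAB>1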