
system.tables doesn't work in `clickhouse-server:25.3.6.10034.altinitystable` with DataLake type=glue tables

Open Slach opened this issue 5 months ago • 3 comments

**Describe the bug**
`system.tables` doesn't work in `clickhouse-server:25.3.6.10034.altinitystable`

**To Reproduce**
Steps to reproduce the behavior:

  1. clickhouse-client -q "SELECT database, name, engine , data_paths , uuid , create_table_query , coalesce(total_bytes, 0) AS total_bytes FROM system.tables WHERE is_temporary = 0 ORDER BY total_bytes DESC SETTINGS show_table_uuid_in_table_create_query_if_not_nil=1"
  2. See error
exception:                             Code: 36. DB::Exception: Unknown Iceberg type: decimal(36. (BAD_ARGUMENTS) (version 25.3.6.10034.altinitystable (altinity build))
stack_trace:                           0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000d5af728
1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x00000000092e5e1c
2. DB::Exception::Exception<String const&>(int, FormatStringHelperImpl<std::type_identity<String const&>::type>, String const&) @ 0x00000000097e7338
3. DB::IcebergSchemaProcessor::getSimpleType(String const&) @ 0x000000000fbfd0dc
4. (anonymous namespace)::getType(String const&, bool, String const&) @ 0x00000000106e36c4
5. (anonymous namespace)::getType(String const&, bool, String const&) @ 0x00000000106e3dc0
6. DataLake::GlueCatalog::getTableMetadata(String const&, String const&, DataLake::TableMetadata&) const @ 0x00000000106e273c
7. DataLake::GlueCatalog::tryGetTableMetadata(String const&, String const&, DataLake::TableMetadata&) const @ 0x00000000106e1d2c
8. DB::DatabaseDataLake::tryGetTableImpl(String const&, std::shared_ptr<DB::Context const>, bool) const @ 0x00000000106b51fc
9. DB::DatabaseDataLake::getLightweightTablesIterator(std::shared_ptr<DB::Context const>, std::function<bool (String const&)> const&, bool) const @ 0x00000000106b8b04
10. DB::detail::getFilteredTables(DB::ActionsDAG::Node const*, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, std::shared_ptr<DB::Context const>, bool) @ 0x000000000f22a8e0
11. DB::ReadFromSystemTables::applyFilters(DB::ActionDAGNodes) @ 0x000000000f2329e0
12. DB::QueryPlanOptimizations::optimizePrimaryKeyConditionAndLimit(std::vector<DB::QueryPlanOptimizations::Frame, std::allocator<DB::QueryPlanOptimizations::Frame>> const&) @ 0x00000000128fac44
13. DB::QueryPlanOptimizations::optimizeTreeSecondPass(DB::QueryPlanOptimizationSettings const&, DB::QueryPlan::Node&, std::list<DB::QueryPlan::Node, std::allocator<DB::QueryPlan::Node>>&) @ 0x00000000128f9a6c
14. DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x0000000012867844
15. DB::InterpreterSelectQueryAnalyzer::buildQueryPipeline() @ 0x0000000010fe5fac
16. DB::InterpreterSelectQueryAnalyzer::execute() @ 0x0000000010fe5a90
17. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr<DB::IAST>&) @ 0x00000000112fdb4c
18. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x00000000112fa134
19. DB::TCPHandler::runImpl() @ 0x00000000123e6e7c
20. DB::TCPHandler::run() @ 0x0000000012401e68
21. Poco::Net::TCPServerConnection::start() @ 0x000000001543bf58
22. Poco::Net::TCPServerDispatcher::run() @ 0x000000001543c474
23. Poco::PooledThread::run() @ 0x00000000154079bc
24. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015405d90
25. ? @ 0x000000000007d5b8
26. ? @ 0x00000000000e5edc
  3. Even a simple query like `SELECT database, table, engine_full FROM system.tables` returns the same error.

**Expected behavior**
Just show the table schemas.

**Key information**
I don't see Iceberg tables:

grep -i iceberg -r /var/lib/clickhouse/metadata/

returns nothing

grep -C 100 -i lake -r /var/lib/clickhouse/metadata/

/var/lib/clickhouse/metadata/dev_clean.sql-ATTACH DATABASE dev_clean
/var/lib/clickhouse/metadata/dev_clean.sql:ENGINE = DataLakeCatalog
/var/lib/clickhouse/metadata/dev_clean.sql-SETTINGS region = 'eu-central-1', aws_access_key_id = 'XXX', aws_secret_access_key = 'XXX', catalog_type = 'glue'
--
/var/lib/clickhouse/metadata/clean.sql-ATTACH DATABASE clean
/var/lib/clickhouse/metadata/clean.sql:ENGINE = DataLakeCatalog
/var/lib/clickhouse/metadata/clean.sql-SETTINGS region = 'eu-central-1', aws_access_key_id = 'XXX', aws_secret_access_key = 'XXXX', catalog_type = 'glue'
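
For reference, a database with that metadata would have been created with something along these lines (a sketch reconstructed from the ATTACH statements above; the credentials are placeholders):

```sql
-- Glue-backed DataLakeCatalog database, as recorded in /var/lib/clickhouse/metadata/clean.sql
CREATE DATABASE clean
ENGINE = DataLakeCatalog
SETTINGS
    catalog_type = 'glue',
    region = 'eu-central-1',
    aws_access_key_id = 'XXX',
    aws_secret_access_key = 'XXX'
```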

Slach commented on Aug 03 '25 at 03:08

@Slach I couldn't reproduce it; I need more details. Can you ask them for the full table schema? How do they create the Iceberg table?

The plain decimal type is fully supported by ClickHouse, so I suspect the decimal was used inside a nested structure. There is a known issue: https://github.com/clickhouse/clickhouse/issues/81301
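
Until that is fixed, excluding the Glue-backed databases in the query itself might avoid the failing catalog call. This is only a sketch, assuming the predicate on `database` is pushed down before the tables of each database are enumerated (the database names are the ones from the metadata above):

```sql
-- Workaround sketch: skip the DataLakeCatalog (Glue) databases so that
-- system.tables never asks the Glue catalog for their table metadata.
SELECT database, name, engine, coalesce(total_bytes, 0) AS total_bytes
FROM system.tables
WHERE is_temporary = 0
  AND database NOT IN ('clean', 'dev_clean')
ORDER BY total_bytes DESC
```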

alsugiliazova commented on Aug 04 '25 at 11:08

`DB::Exception: Unknown Iceberg type: decimal(36. (BAD_ARGUMENTS)` - it looks like ClickHouse received the type `decimal(36` without the closing `)`. Type names are extracted from the JSON returned by the Iceberg catalog. Is there any guarantee that the catalog sends a correct table schema, and that this is a ClickHouse issue? @Slach, is it possible to check on the customer side?
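
A minimal illustration of that hypothesis (just a sketch: it assumes the Glue type string gets tokenised on commas somewhere in the schema handling, which is not confirmed here):

```sql
-- If a struct field list such as 'amount:decimal(36, 2),currency:string'
-- were split naively on ',', the decimal type would lose its closing ')':
SELECT splitByChar(',', 'amount:decimal(36, 2),currency:string') AS naive_split
-- ['amount:decimal(36',' 2)','currency:string']
```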

ianton-ru commented on Aug 04 '25 at 22:08

@ianton-ru the customer updated to 25.6 upstream, and it looks like the issue has gone away.

Slach commented on Aug 22 '25 at 16:08