self-hosted
Upgrade from 21.6.3 to latest: migrations failed
Self-Hosted Version
21.6.3
CPU Architecture
x86_64
Docker Version
19.03.11
Docker Compose Version
1.28.0
Steps to Reproduce
Upgrade from 21.6.3 to latest:
1. Pull the latest changes from upstream.
2. Run install.sh:
REPORT_SELF_HOSTED_ISSUES=0 SENTRY_IMAGE=getsentry/sentry:latest SNUBA_IMAGE=getsentry/snuba:latest RELAY_IMAGE=getsentry/relay:latest CLICKHOUSE_IMAGE=yandex/clickhouse-server:latest ./install.sh
Expected Result
Successful upgrade and migration
Actual Result
The "Bootstrapping and migrating Snuba" step failed:
Bootstrapping and migrating Snuba ...
Creating sentry_onpremise_redis_1 ...
Creating sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_zookeeper_1 ...
Creating sentry_onpremise_redis_1 ... done
Creating sentry_onpremise_zookeeper_1 ... done
Creating sentry_onpremise_clickhouse_1 ... done
Creating sentry_onpremise_kafka_1 ...
Creating sentry_onpremise_kafka_1 ... done
Creating sentry_onpremise_snuba-api_run ...
Creating sentry_onpremise_snuba-api_run ... done
2022-09-21 21:42:49,585 Attempting to connect to Kafka (attempt 0)...
2022-09-21 21:42:49,615 Connected to Kafka on attempt 0
2022-09-21 21:42:49,616 Creating Kafka topics...
2022-09-21 21:42:49,976 Topic scheduled-subscriptions-generic-metrics-sets created
2022-09-21 21:42:49,977 Topic scheduled-subscriptions-generic-metrics-distributions created
2022-09-21 21:42:49,977 Topic generic-metrics-sets-subscription-results created
2022-09-21 21:42:49,977 Topic generic-metrics-distributions-subscription-results created
2022-09-21 21:42:49,977 Topic processed-profiles created
2022-09-21 21:42:49,977 Topic profiles-call-tree created
2022-09-21 21:42:49,977 Topic ingest-replay-events created
2022-09-21 21:42:49,977 Topic snuba-generic-metrics created
2022-09-21 21:42:49,977 Topic snuba-generic-metrics-sets-commit-log created
2022-09-21 21:42:49,978 Topic snuba-generic-metrics-distributions-commit-log created
2022-09-21 21:42:49,978 Topic snuba-dead-letter-inserts created
2022-09-21 21:42:49,978 Topic snuba-attribution created
2022-09-21 21:42:49,978 Topic snuba-dead-letter-metrics created
2022-09-21 21:42:49,978 Topic snuba-dead-letter-sessions created
2022-09-21 21:42:49,978 Topic snuba-dead-letter-generic-metrics created
2022-09-21 21:42:49,978 Topic snuba-dead-letter-replays created
Creating sentry_onpremise_snuba-api_run ...
Creating sentry_onpremise_snuba-api_run ... done
2022-09-21 21:42:57,732 Running migration: 0001_migrations
2022-09-21 21:42:57,741 Finished: 0001_migrations
2022-09-21 21:42:57,761 Running migration: 0001_events_initial
2022-09-21 21:42:57,777 Finished: 0001_events_initial
2022-09-21 21:42:57,789 Running migration: 0002_events_onpremise_compatibility
2022-09-21 21:42:58,004 Finished: 0002_events_onpremise_compatibility
2022-09-21 21:42:58,014 Running migration: 0003_errors
2022-09-21 21:42:58,024 Finished: 0003_errors
2022-09-21 21:42:58,031 Running migration: 0004_errors_onpremise_compatibility
2022-09-21 21:42:58,047 Finished: 0004_errors_onpremise_compatibility
2022-09-21 21:42:58,054 Running migration: 0005_events_tags_hash_map
2022-09-21 21:42:58,084 Finished: 0005_events_tags_hash_map
2022-09-21 21:42:58,091 Running migration: 0006_errors_tags_hash_map
2022-09-21 21:42:58,107 Finished: 0006_errors_tags_hash_map
2022-09-21 21:42:58,114 Running migration: 0007_groupedmessages
2022-09-21 21:42:58,120 Finished: 0007_groupedmessages
2022-09-21 21:42:58,126 Running migration: 0008_groupassignees
2022-09-21 21:42:58,135 Finished: 0008_groupassignees
2022-09-21 21:42:58,142 Running migration: 0009_errors_add_http_fields
2022-09-21 21:42:58,170 Finished: 0009_errors_add_http_fields
2022-09-21 21:42:58,179 Running migration: 0010_groupedmessages_onpremise_compatibility
2022-09-21 21:42:58,188 Finished: 0010_groupedmessages_onpremise_compatibility
2022-09-21 21:42:58,198 Running migration: 0011_rebuild_errors
2022-09-21 21:42:58,236 Finished: 0011_rebuild_errors
2022-09-21 21:42:58,245 Running migration: 0012_errors_make_level_nullable
2022-09-21 21:42:58,266 Finished: 0012_errors_make_level_nullable
2022-09-21 21:42:58,273 Running migration: 0013_errors_add_hierarchical_hashes
2022-09-21 21:42:58,311 Finished: 0013_errors_add_hierarchical_hashes
2022-09-21 21:42:58,320 Running migration: 0014_backfill_errors
2022-09-21 21:42:58,365 Starting migration from 2022-09-19
2022-09-21 21:42:58,417 Migrated 2022-09-19. (1 of 13 partitions done)
2022-09-21 21:42:58,443 Migrated 2022-09-12. (2 of 13 partitions done)
2022-09-21 21:43:04,959 Migrated 2022-09-05. (3 of 13 partitions done)
2022-09-21 21:43:33,495 Migrated 2022-08-29. (4 of 13 partitions done)
2022-09-21 21:43:43,454 Migrated 2022-08-22. (5 of 13 partitions done)
2022-09-21 21:43:50,112 Migrated 2022-08-15. (6 of 13 partitions done)
2022-09-21 21:44:02,152 Migrated 2022-08-08. (7 of 13 partitions done)
2022-09-21 21:44:16,305 Migrated 2022-08-01. (8 of 13 partitions done)
2022-09-21 21:44:29,586 Migrated 2022-07-25. (9 of 13 partitions done)
2022-09-21 21:44:43,254 Migrated 2022-07-18. (10 of 13 partitions done)
2022-09-21 21:44:56,866 Migrated 2022-07-11. (11 of 13 partitions done)
2022-09-21 21:45:15,379 Migrated 2022-07-04. (12 of 13 partitions done)
2022-09-21 21:45:33,756 Migrated 2022-06-27. (13 of 13 partitions done)
2022-09-21 21:45:33,756 Done. Optimizing.
2022-09-21 21:49:31,759 Finished: 0014_backfill_errors
2022-09-21 21:49:31,768 Running migration: 0015_truncate_events
2022-09-21 21:49:32,495 Finished: 0015_truncate_events
2022-09-21 21:49:32,505 Running migration: 0016_drop_legacy_events
2022-09-21 21:49:32,522 Finished: 0016_drop_legacy_events
2022-09-21 21:49:32,531 Running migration: 0001_transactions
2022-09-21 21:49:32,538 Finished: 0001_transactions
2022-09-21 21:49:32,545 Running migration: 0002_transactions_onpremise_fix_orderby_and_partitionby
Traceback (most recent call last):
File "/usr/src/snuba/snuba/clickhouse/native.py", line 192, in execute
result_data = query_execute()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 304, in execute
rv = self.process_ordinary_query(
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 491, in process_ordinary_query
return self.receive_result(with_column_types=with_column_types,
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 151, in receive_result
return result.get_result()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/result.py", line 50, in get_result
for packet in self.packet_generator:
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 167, in packet_generator
packet = self.receive_packet()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 184, in receive_packet
raise packet.exception
clickhouse_driver.errors.ServerException: Code: 241.
DB::Exception: Memory limit (for query) exceeded: would use 9.31 GiB (attempt to allocate chunk of 8388704 bytes), maximum: 9.31 GiB: (avg_value_size_hint = 40.21224589420831, avg_chars_size = 38.65469507304997, limit = 113078): (while reading column tags.value): (while reading from part /var/lib/clickhouse/data/default/transactions_local/90-20220613_13233969_13575698_2070/ from mark 0 with max_rows_to_read = 8077). Stack trace:
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
2. ? @ 0x8f40ed7 in /usr/bin/clickhouse
3. MemoryTracker::alloc(long) @ 0x8f3eec3 in /usr/bin/clickhouse
4. DB::DataTypeString::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const @ 0xcf49bf0 in /usr/bin/clickhouse
5. DB::DataTypeArray::deserializeBinaryBulkWithMultipleStreams(DB::IColumn&, unsigned long, DB::IDataType::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::IDataType::DeserializeBinaryBulkState>&) const @ 0xce9adb5 in /usr/bin/clickhouse
6. DB::MergeTreeReaderWide::readData(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::IDataType const&, DB::IColumn&, unsigned long, bool, unsigned long, bool) @ 0xda42c56 in /usr/bin/clickhouse
7. DB::MergeTreeReaderWide::readRows(unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda430fc in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda61436 in /usr/bin/clickhouse
9. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda620f4 in /usr/bin/clickhouse
10. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda6414e in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0xda5babd in /usr/bin/clickhouse
12. DB::MergeTreeBaseSelectProcessor::generate() @ 0xda5c5f7 in /usr/bin/clickhouse
13. DB::ISource::work() @ 0xdb9ac1b in /usr/bin/clickhouse
14. DB::SourceWithProgress::work() @ 0xdeef717 in /usr/bin/clickhouse
15. DB::TreeExecutorBlockInputStream::execute(bool, bool) @ 0xdbe264e in /usr/bin/clickhouse
16. DB::TreeExecutorBlockInputStream::readImpl() @ 0xdbe3e27 in /usr/bin/clickhouse
17. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
18. DB::ExpressionBlockInputStream::readImpl() @ 0xd27448a in /usr/bin/clickhouse
19. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
20. DB::PartialSortingBlockInputStream::readImpl() @ 0xd292f1f in /usr/bin/clickhouse
21. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
22. DB::MergeSortingBlockInputStream::readImpl() @ 0xd2b07dc in /usr/bin/clickhouse
23. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
24. DB::AsynchronousBlockInputStream::calculate() @ 0xce3d518 in /usr/bin/clickhouse
25. ? @ 0xce3ecf8 in /usr/bin/clickhouse
26. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8f6692b in /usr/bin/clickhouse
27. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8f67608 in /usr/bin/clickhouse
28. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8f657eb in /usr/bin/clickhouse
29. ? @ 0x8f63c33 in /usr/bin/clickhouse
30. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
31. __clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 33, in <module>
sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/migrations.py", line 64, in migrate
runner.run_all(force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 158, in run_all
self._run_migration_impl(migration_key, force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 218, in _run_migration_impl
migration.forwards(context, dry_run)
File "/usr/src/snuba/snuba/migrations/migration.py", line 74, in forwards
op.execute(logger)
File "/usr/src/snuba/snuba/migrations/operations.py", line 320, in execute
self.__func(logger)
File "/usr/src/snuba/snuba/snuba_migrations/transactions/0002_transactions_onpremise_fix_orderby_and_partitionby.py", line 104, in forwards
clickhouse.execute(
File "/usr/src/snuba/snuba/clickhouse/native.py", line 268, in execute
raise ClickhouseError(e.message, code=e.code) from e
snuba.clickhouse.errors.ClickhouseError: DB::Exception: Memory limit (for query) exceeded: would use 9.31 GiB (attempt to allocate chunk of 8388704 bytes), maximum: 9.31 GiB: (avg_value_size_hint = 40.21224589420831, avg_chars_size = 38.65469507304997, limit = 113078): (while reading column tags.value): (while reading from part /var/lib/clickhouse/data/default/transactions_local/90-20220613_13233969_13575698_2070/ from mark 0 with max_rows_to_read = 8077). Stack trace:
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
2. ? @ 0x8f40ed7 in /usr/bin/clickhouse
3. MemoryTracker::alloc(long) @ 0x8f3eec3 in /usr/bin/clickhouse
4. DB::DataTypeString::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const @ 0xcf49bf0 in /usr/bin/clickhouse
5. DB::DataTypeArray::deserializeBinaryBulkWithMultipleStreams(DB::IColumn&, unsigned long, DB::IDataType::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::IDataType::DeserializeBinaryBulkState>&) const @ 0xce9adb5 in /usr/bin/clickhouse
6. DB::MergeTreeReaderWide::readData(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::IDataType const&, DB::IColumn&, unsigned long, bool, unsigned long, bool) @ 0xda42c56 in /usr/bin/clickhouse
7. DB::MergeTreeReaderWide::readRows(unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda430fc in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda61436 in /usr/bin/clickhouse
9. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda620f4 in /usr/bin/clickhouse
10. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda6414e in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0xda5babd in /usr/bin/clickhouse
12. DB::MergeTreeBaseSelectProcessor::generate() @ 0xda5c5f7 in /usr/bin/clickhouse
13. DB::ISource::work() @ 0xdb9ac1b in /usr/bin/clickhouse
14. DB::SourceWithProgress::work() @ 0xdeef717 in /usr/bin/clickhouse
15. DB::TreeExecutorBlockInputStream::execute(bool, bool) @ 0xdbe264e in /usr/bin/clickhouse
16. DB::TreeExecutorBlockInputStream::readImpl() @ 0xdbe3e27 in /usr/bin/clickhouse
17. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
18. DB::ExpressionBlockInputStream::readImpl() @ 0xd27448a in /usr/bin/clickhouse
19. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
20. DB::PartialSortingBlockInputStream::readImpl() @ 0xd292f1f in /usr/bin/clickhouse
21. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
22. DB::MergeSortingBlockInputStream::readImpl() @ 0xd2b07dc in /usr/bin/clickhouse
23. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
24. DB::AsynchronousBlockInputStream::calculate() @ 0xce3d518 in /usr/bin/clickhouse
25. ? @ 0xce3ecf8 in /usr/bin/clickhouse
26. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8f6692b in /usr/bin/clickhouse
27. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8f67608 in /usr/bin/clickhouse
28. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8f657eb in /usr/bin/clickhouse
29. ? @ 0x8f63c33 in /usr/bin/clickhouse
30. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
31. __clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
1
Error in bootstrap-snuba.sh:4.
'$dcr snuba-api migrations migrate --force' exited with status 1
-> ./install.sh:main:33
--> bootstrap-snuba.sh:source:4
Cleaning up...
And here is a manual Snuba migration run with debug output:
Creating sentry_onpremise_snuba-api_run ... done
2022-09-21 22:16:02,545 Attempting to connect to Clickhouse cluster clickhouse:9000 (attempt 0)
2022-09-21 22:16:02,547 Connecting. Database: default. User: default
2022-09-21 22:16:02,548 Connecting to clickhouse:9000
2022-09-21 22:16:02,549 Connected to ClickHouse server version 20.3.9, revision: 54433
2022-09-21 22:16:02,549 Query: SELECT version()
2022-09-21 22:16:02,549 Block "" send time: 0.000054
2022-09-21 22:16:02,551 Query: SELECT group, migration_id, status FROM migrations_local FINAL WHERE group IN ('system', 'events', 'transactions', 'discover', 'outcomes', 'metrics', 'sessions')
2022-09-21 22:16:02,552 Block "" send time: 0.000128
2022-09-21 22:16:02,556 Running migration: 0002_transactions_onpremise_fix_orderby_and_partitionby
2022-09-21 22:16:02,557 Query: SELECT version FROM migrations_local FINAL WHERE group = 'transactions' AND migration_id = '0002_transactions_onpremise_fix_orderby_and_partitionby';
2022-09-21 22:16:02,557 Block "" send time: 0.000087
2022-09-21 22:16:02,559 Query: INSERT INTO migrations_local FORMAT JSONEachRow
2022-09-21 22:16:02,559 Block "" send time: 0.000032
/usr/local/lib/python3.8/site-packages/clickhouse_driver/columns/datetimecolumn.py:199: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
local_timezone = get_localzone().zone
2022-09-21 22:16:02,564 Block "" send time: 0.004293
2022-09-21 22:16:02,565 Block "" send time: 0.000038
2022-09-21 22:16:02,567 Query: SELECT sampling_key, partition_key, primary_key FROM system.tables WHERE name = 'transactions_local' AND database = 'default'
2022-09-21 22:16:02,567 Block "" send time: 0.000126
2022-09-21 22:16:02,569 Query: SHOW CREATE TABLE default.transactions_local
2022-09-21 22:16:02,570 Block "" send time: 0.000036
2022-09-21 22:16:02,573 Query: CREATE TABLE default.transactions_local_new (`project_id` UInt64, `event_id` UUID, `trace_id` UUID, `span_id` UInt64, `transaction_name` LowCardinality(String), `transaction_hash` UInt64 MATERIALIZED CAST(cityHash64(transaction_name), 'UInt64'), `transaction_op` LowCardinality(String), `transaction_status` UInt8 DEFAULT 2, `start_ts` DateTime, `start_ms` UInt16, `_start_date` Date MATERIALIZED toDate(start_ts), `finish_ts` DateTime, `finish_ms` UInt16, `_finish_date` Date MATERIALIZED toDate(finish_ts), `duration` UInt32, `platform` LowCardinality(String), `environment` LowCardinality(Nullable(String)), `release` LowCardinality(Nullable(String)), `dist` LowCardinality(Nullable(String)), `ip_address_v4` Nullable(IPv4), `ip_address_v6` Nullable(IPv6), `user` String DEFAULT '', `user_hash` UInt64 MATERIALIZED cityHash64(user), `user_id` Nullable(String), `user_name` Nullable(String), `user_email` Nullable(String), `sdk_name` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `sdk_version` LowCardinality(String) DEFAULT CAST('', 'LowCardinality(String)'), `tags.key` Array(String), `tags.value` Array(String), `_tags_flattened` String, `contexts.key` Array(String), `contexts.value` Array(String), `_contexts_flattened` String, `partition` UInt16, `offset` UInt64, `message_timestamp` DateTime, `retention_days` UInt16, `deleted` UInt8) ENGINE = ReplacingMergeTree(deleted) PARTITION BY (retention_days, toMonday(finish_ts)) ORDER BY (project_id, toStartOfDay(finish_ts), transaction_name, cityHash64(span_id)) SAMPLE BY cityHash64(span_id) SETTINGS index_granularity = 8192
2022-09-21 22:16:02,573 Block "" send time: 0.000043
2022-09-21 22:16:02,583 Query: SELECT count() FROM transactions_local
2022-09-21 22:16:02,583 Block "" send time: 0.000037
2022-09-21 22:16:02,586 Query:
INSERT INTO transactions_local_new
SELECT * FROM transactions_local
ORDER BY toStartOfDay(finish_ts), project_id, event_id
LIMIT 100000
OFFSET 0;
2022-09-21 22:16:02,586 Block "" send time: 0.000032
2022-09-21 22:18:27,024 Query:
INSERT INTO transactions_local_new
SELECT * FROM transactions_local
ORDER BY toStartOfDay(finish_ts), project_id, event_id
LIMIT 100000
OFFSET 100000;
2022-09-21 22:18:27,024 Block "" send time: 0.000050
Traceback (most recent call last):
File "/usr/src/snuba/snuba/clickhouse/native.py", line 187, in execute
result_data = query_execute()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 286, in execute
rv = self.process_ordinary_query(
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 479, in process_ordinary_query
return self.receive_result(with_column_types=with_column_types,
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 136, in receive_result
return result.get_result()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/result.py", line 50, in get_result
for packet in self.packet_generator:
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 152, in packet_generator
packet = self.receive_packet()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 169, in receive_packet
raise packet.exception
clickhouse_driver.errors.ServerException: Code: 241.
DB::Exception: Memory limit (for query) exceeded: would use 9.31 GiB (attempt to allocate chunk of 5242976 bytes), maximum: 9.31 GiB: (avg_value_size_hint = 44.97507401602229, avg_chars_size = 44.370088819226744, limit = 73044): (while reading column contexts.value): (while reading from part /var/lib/clickhouse/data/default/transactions_local/90-20220613_13233969_13575698_2070/ from mark 24 with max_rows_to_read = 8116). Stack trace:
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
2. ? @ 0x8f40ed7 in /usr/bin/clickhouse
3. MemoryTracker::alloc(long) @ 0x8f3eec3 in /usr/bin/clickhouse
4. DB::DataTypeString::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const @ 0xcf49bf0 in /usr/bin/clickhouse
5. DB::DataTypeArray::deserializeBinaryBulkWithMultipleStreams(DB::IColumn&, unsigned long, DB::IDataType::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::IDataType::DeserializeBinaryBulkState>&) const @ 0xce9adb5 in /usr/bin/clickhouse
6. DB::MergeTreeReaderWide::readData(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::IDataType const&, DB::IColumn&, unsigned long, bool, unsigned long, bool) @ 0xda42c56 in /usr/bin/clickhouse
7. DB::MergeTreeReaderWide::readRows(unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda430fc in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda61436 in /usr/bin/clickhouse
9. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda620f4 in /usr/bin/clickhouse
10. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda6414e in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0xda5babd in /usr/bin/clickhouse
12. DB::MergeTreeBaseSelectProcessor::generate() @ 0xda5c5f7 in /usr/bin/clickhouse
13. DB::ISource::work() @ 0xdb9ac1b in /usr/bin/clickhouse
14. DB::SourceWithProgress::work() @ 0xdeef717 in /usr/bin/clickhouse
15. DB::TreeExecutorBlockInputStream::execute(bool, bool) @ 0xdbe264e in /usr/bin/clickhouse
16. DB::TreeExecutorBlockInputStream::readImpl() @ 0xdbe3e27 in /usr/bin/clickhouse
17. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
18. DB::ExpressionBlockInputStream::readImpl() @ 0xd27448a in /usr/bin/clickhouse
19. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
20. DB::PartialSortingBlockInputStream::readImpl() @ 0xd292f1f in /usr/bin/clickhouse
21. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
22. DB::MergeSortingBlockInputStream::readImpl() @ 0xd2b07dc in /usr/bin/clickhouse
23. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
24. DB::AsynchronousBlockInputStream::calculate() @ 0xce3d518 in /usr/bin/clickhouse
25. ? @ 0xce3ecf8 in /usr/bin/clickhouse
26. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8f6692b in /usr/bin/clickhouse
27. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8f67608 in /usr/bin/clickhouse
28. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8f657eb in /usr/bin/clickhouse
29. ? @ 0x8f63c33 in /usr/bin/clickhouse
30. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
31. __clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 33, in <module>
sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/migrations.py", line 64, in migrate
runner.run_all(force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 157, in run_all
self._run_migration_impl(migration_key, force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 215, in _run_migration_impl
migration.forwards(context, dry_run)
File "/usr/src/snuba/snuba/migrations/migration.py", line 74, in forwards
op.execute(logger)
File "/usr/src/snuba/snuba/migrations/operations.py", line 317, in execute
self.__func(logger)
File "/usr/src/snuba/snuba/migrations/snuba_migrations/transactions/0002_transactions_onpremise_fix_orderby_and_partitionby.py", line 104, in forwards
clickhouse.execute(
File "/usr/src/snuba/snuba/clickhouse/native.py", line 243, in execute
raise ClickhouseError(e.message, code=e.code) from e
snuba.clickhouse.errors.ClickhouseError: DB::Exception: Memory limit (for query) exceeded: would use 9.31 GiB (attempt to allocate chunk of 5242976 bytes), maximum: 9.31 GiB: (avg_value_size_hint = 44.97507401602229, avg_chars_size = 44.370088819226744, limit = 73044): (while reading column contexts.value): (while reading from part /var/lib/clickhouse/data/default/transactions_local/90-20220613_13233969_13575698_2070/ from mark 24 with max_rows_to_read = 8116). Stack trace:
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
2. ? @ 0x8f40ed7 in /usr/bin/clickhouse
3. MemoryTracker::alloc(long) @ 0x8f3eec3 in /usr/bin/clickhouse
4. DB::DataTypeString::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const @ 0xcf49bf0 in /usr/bin/clickhouse
5. DB::DataTypeArray::deserializeBinaryBulkWithMultipleStreams(DB::IColumn&, unsigned long, DB::IDataType::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::IDataType::DeserializeBinaryBulkState>&) const @ 0xce9adb5 in /usr/bin/clickhouse
6. DB::MergeTreeReaderWide::readData(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::IDataType const&, DB::IColumn&, unsigned long, bool, unsigned long, bool) @ 0xda42c56 in /usr/bin/clickhouse
7. DB::MergeTreeReaderWide::readRows(unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda430fc in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda61436 in /usr/bin/clickhouse
9. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda620f4 in /usr/bin/clickhouse
10. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda6414e in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0xda5babd in /usr/bin/clickhouse
12. DB::MergeTreeBaseSelectProcessor::generate() @ 0xda5c5f7 in /usr/bin/clickhouse
13. DB::ISource::work() @ 0xdb9ac1b in /usr/bin/clickhouse
14. DB::SourceWithProgress::work() @ 0xdeef717 in /usr/bin/clickhouse
15. DB::TreeExecutorBlockInputStream::execute(bool, bool) @ 0xdbe264e in /usr/bin/clickhouse
16. DB::TreeExecutorBlockInputStream::readImpl() @ 0xdbe3e27 in /usr/bin/clickhouse
17. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
18. DB::ExpressionBlockInputStream::readImpl() @ 0xd27448a in /usr/bin/clickhouse
19. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
20. DB::PartialSortingBlockInputStream::readImpl() @ 0xd292f1f in /usr/bin/clickhouse
21. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
22. DB::MergeSortingBlockInputStream::readImpl() @ 0xd2b07dc in /usr/bin/clickhouse
23. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
24. DB::AsynchronousBlockInputStream::calculate() @ 0xce3d518 in /usr/bin/clickhouse
25. ? @ 0xce3ecf8 in /usr/bin/clickhouse
26. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8f6692b in /usr/bin/clickhouse
27. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8f67608 in /usr/bin/clickhouse
28. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8f657eb in /usr/bin/clickhouse
29. ? @ 0x8f63c33 in /usr/bin/clickhouse
30. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
31. __clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
ERROR: 1
I have experimented a bit: I added a modified 0002_transactions_onpremise_fix_orderby_and_partitionby.py to the snuba-api container and tried different values for max_memory_usage and the batch size, but I always end up hitting the error around offset 120000.
Let me know if I can provide any other information to you.
Thanks, Joseph
Hi there Joseph,
We pin a specific version of the ClickHouse server, so the latest version may have issues we haven't looked into yet.
If you look into install/detect-platform.sh, we're using yandex/clickhouse-server:20.3.9.70. Could you try that instead?
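Concretely, that would be the same install command as in the report, just with `CLICKHOUSE_IMAGE` pinned to the version from install/detect-platform.sh instead of `:latest`:

```shell
# Re-run the installer with ClickHouse pinned to the supported version.
REPORT_SELF_HOSTED_ISSUES=0 \
SENTRY_IMAGE=getsentry/sentry:latest \
SNUBA_IMAGE=getsentry/snuba:latest \
RELAY_IMAGE=getsentry/relay:latest \
CLICKHOUSE_IMAGE=yandex/clickhouse-server:20.3.9.70 \
./install.sh
```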
Hi @hubertdeng123, thank you for your reply. I've tried your suggestion but got the same issue. Here are the logs:
▶ Bootstrapping and migrating Snuba ...
Creating sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_redis_1 ...
Creating sentry_onpremise_zookeeper_1 ...
Creating sentry_onpremise_zookeeper_1 ... done
Creating sentry_onpremise_clickhouse_1 ... done
Creating sentry_onpremise_redis_1 ... done
Creating sentry_onpremise_kafka_1 ...
Creating sentry_onpremise_kafka_1 ... done
Creating sentry_onpremise_snuba-api_run ...
Creating sentry_onpremise_snuba-api_run ... done
2022-09-22 22:01:00,684 Attempting to connect to Kafka (attempt 0)...
2022-09-22 22:01:00,715 Connected to Kafka on attempt 0
2022-09-22 22:01:00,716 Creating Kafka topics...
2022-09-22 22:01:01,109 Topic scheduled-subscriptions-generic-metrics-sets created
2022-09-22 22:01:01,110 Topic scheduled-subscriptions-generic-metrics-distributions created
2022-09-22 22:01:01,110 Topic generic-metrics-sets-subscription-results created
2022-09-22 22:01:01,110 Topic generic-metrics-distributions-subscription-results created
2022-09-22 22:01:01,110 Topic processed-profiles created
2022-09-22 22:01:01,110 Topic profiles-call-tree created
2022-09-22 22:01:01,111 Topic ingest-replay-events created
2022-09-22 22:01:01,111 Topic snuba-generic-metrics created
2022-09-22 22:01:01,111 Topic snuba-generic-metrics-sets-commit-log created
2022-09-22 22:01:01,111 Topic snuba-generic-metrics-distributions-commit-log created
2022-09-22 22:01:01,111 Topic snuba-dead-letter-inserts created
2022-09-22 22:01:01,112 Topic snuba-attribution created
2022-09-22 22:01:01,112 Topic snuba-dead-letter-metrics created
2022-09-22 22:01:01,112 Topic snuba-dead-letter-sessions created
2022-09-22 22:01:01,112 Topic snuba-dead-letter-generic-metrics created
2022-09-22 22:01:01,112 Topic snuba-dead-letter-replays created
Creating sentry_onpremise_snuba-api_run ...
Creating sentry_onpremise_snuba-api_run ... done
2022-09-22 22:01:09,825 Running migration: 0001_migrations
2022-09-22 22:01:09,835 Finished: 0001_migrations
2022-09-22 22:01:09,852 Running migration: 0001_events_initial
2022-09-22 22:01:09,861 Finished: 0001_events_initial
2022-09-22 22:01:09,869 Running migration: 0002_events_onpremise_compatibility
2022-09-22 22:01:10,088 Finished: 0002_events_onpremise_compatibility
2022-09-22 22:01:10,097 Running migration: 0003_errors
2022-09-22 22:01:10,106 Finished: 0003_errors
2022-09-22 22:01:10,113 Running migration: 0004_errors_onpremise_compatibility
2022-09-22 22:01:10,127 Finished: 0004_errors_onpremise_compatibility
2022-09-22 22:01:10,134 Running migration: 0005_events_tags_hash_map
2022-09-22 22:01:10,157 Finished: 0005_events_tags_hash_map
2022-09-22 22:01:10,164 Running migration: 0006_errors_tags_hash_map
2022-09-22 22:01:10,178 Finished: 0006_errors_tags_hash_map
2022-09-22 22:01:10,185 Running migration: 0007_groupedmessages
2022-09-22 22:01:10,192 Finished: 0007_groupedmessages
2022-09-22 22:01:10,198 Running migration: 0008_groupassignees
2022-09-22 22:01:10,204 Finished: 0008_groupassignees
2022-09-22 22:01:10,211 Running migration: 0009_errors_add_http_fields
2022-09-22 22:01:10,240 Finished: 0009_errors_add_http_fields
2022-09-22 22:01:10,250 Running migration: 0010_groupedmessages_onpremise_compatibility
2022-09-22 22:01:10,259 Finished: 0010_groupedmessages_onpremise_compatibility
2022-09-22 22:01:10,268 Running migration: 0011_rebuild_errors
2022-09-22 22:01:10,305 Finished: 0011_rebuild_errors
2022-09-22 22:01:10,312 Running migration: 0012_errors_make_level_nullable
2022-09-22 22:01:10,328 Finished: 0012_errors_make_level_nullable
2022-09-22 22:01:10,335 Running migration: 0013_errors_add_hierarchical_hashes
2022-09-22 22:01:10,368 Finished: 0013_errors_add_hierarchical_hashes
2022-09-22 22:01:10,376 Running migration: 0014_backfill_errors
2022-09-22 22:01:10,390 Starting migration from 2022-09-19
2022-09-22 22:01:10,438 Migrated 2022-09-19. (1 of 13 partitions done)
2022-09-22 22:01:10,462 Migrated 2022-09-12. (2 of 13 partitions done)
2022-09-22 22:01:18,447 Migrated 2022-09-05. (3 of 13 partitions done)
2022-09-22 22:01:48,363 Migrated 2022-08-29. (4 of 13 partitions done)
2022-09-22 22:02:00,298 Migrated 2022-08-22. (5 of 13 partitions done)
2022-09-22 22:02:07,997 Migrated 2022-08-15. (6 of 13 partitions done)
2022-09-22 22:02:21,104 Migrated 2022-08-08. (7 of 13 partitions done)
2022-09-22 22:02:36,367 Migrated 2022-08-01. (8 of 13 partitions done)
2022-09-22 22:02:49,331 Migrated 2022-07-25. (9 of 13 partitions done)
2022-09-22 22:03:02,519 Migrated 2022-07-18. (10 of 13 partitions done)
2022-09-22 22:03:16,095 Migrated 2022-07-11. (11 of 13 partitions done)
2022-09-22 22:03:33,569 Migrated 2022-07-04. (12 of 13 partitions done)
2022-09-22 22:03:52,593 Migrated 2022-06-27. (13 of 13 partitions done)
2022-09-22 22:03:52,593 Done. Optimizing.
2022-09-22 22:07:55,073 Finished: 0014_backfill_errors
2022-09-22 22:07:55,096 Running migration: 0015_truncate_events
2022-09-22 22:07:56,031 Finished: 0015_truncate_events
2022-09-22 22:07:56,055 Running migration: 0016_drop_legacy_events
2022-09-22 22:07:56,087 Finished: 0016_drop_legacy_events
2022-09-22 22:07:56,101 Running migration: 0001_transactions
2022-09-22 22:07:56,110 Finished: 0001_transactions
2022-09-22 22:07:56,130 Running migration: 0002_transactions_onpremise_fix_orderby_and_partitionby
Traceback (most recent call last):
File "/usr/src/snuba/snuba/clickhouse/native.py", line 192, in execute
result_data = query_execute()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 304, in execute
rv = self.process_ordinary_query(
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 491, in process_ordinary_query
return self.receive_result(with_column_types=with_column_types,
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 151, in receive_result
return result.get_result()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/result.py", line 50, in get_result
for packet in self.packet_generator:
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 167, in packet_generator
packet = self.receive_packet()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 184, in receive_packet
raise packet.exception
clickhouse_driver.errors.ServerException: Code: 241.
DB::Exception: Memory limit (for query) exceeded: would use 9.32 GiB (attempt to allocate chunk of 9437072 bytes), maximum: 9.31 GiB: (avg_value_size_hint = 554.3526645768025, avg_chars_size = 655.623197492163, limit = 8189): (while reading column _contexts_flattened): (while reading from part /var/lib/clickhouse/data/default/transactions_local/90-20220613_13233969_13575698_2070/ from mark 0 with max_rows_to_read = 8189). Stack trace:
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
2. ? @ 0x8f40ed7 in /usr/bin/clickhouse
3. MemoryTracker::alloc(long) @ 0x8f3eec3 in /usr/bin/clickhouse
4. DB::DataTypeString::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const @ 0xcf49bf0 in /usr/bin/clickhouse
5. DB::MergeTreeReaderWide::readData(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::IDataType const&, DB::IColumn&, unsigned long, bool, unsigned long, bool) @ 0xda42c56 in /usr/bin/clickhouse
6. DB::MergeTreeReaderWide::readRows(unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda430fc in /usr/bin/clickhouse
7. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda61436 in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda620f4 in /usr/bin/clickhouse
9. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda6414e in /usr/bin/clickhouse
10. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0xda5babd in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::generate() @ 0xda5c5f7 in /usr/bin/clickhouse
12. DB::ISource::work() @ 0xdb9ac1b in /usr/bin/clickhouse
13. DB::SourceWithProgress::work() @ 0xdeef717 in /usr/bin/clickhouse
14. DB::TreeExecutorBlockInputStream::execute(bool, bool) @ 0xdbe264e in /usr/bin/clickhouse
15. DB::TreeExecutorBlockInputStream::readImpl() @ 0xdbe3e27 in /usr/bin/clickhouse
16. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
17. DB::ExpressionBlockInputStream::readImpl() @ 0xd27448a in /usr/bin/clickhouse
18. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
19. DB::PartialSortingBlockInputStream::readImpl() @ 0xd292f1f in /usr/bin/clickhouse
20. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
21. DB::MergeSortingBlockInputStream::readImpl() @ 0xd2b07dc in /usr/bin/clickhouse
22. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
23. DB::AsynchronousBlockInputStream::calculate() @ 0xce3d518 in /usr/bin/clickhouse
24. ? @ 0xce3ecf8 in /usr/bin/clickhouse
25. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8f6692b in /usr/bin/clickhouse
26. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8f67608 in /usr/bin/clickhouse
27. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8f657eb in /usr/bin/clickhouse
28. ? @ 0x8f63c33 in /usr/bin/clickhouse
29. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
30. __clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 33, in <module>
sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/migrations.py", line 64, in migrate
runner.run_all(force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 158, in run_all
self._run_migration_impl(migration_key, force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 218, in _run_migration_impl
migration.forwards(context, dry_run)
File "/usr/src/snuba/snuba/migrations/migration.py", line 74, in forwards
op.execute(logger)
File "/usr/src/snuba/snuba/migrations/operations.py", line 320, in execute
self.__func(logger)
File "/usr/src/snuba/snuba/snuba_migrations/transactions/0002_transactions_onpremise_fix_orderby_and_partitionby.py", line 104, in forwards
clickhouse.execute(
File "/usr/src/snuba/snuba/clickhouse/native.py", line 268, in execute
raise ClickhouseError(e.message, code=e.code) from e
snuba.clickhouse.errors.ClickhouseError: DB::Exception: Memory limit (for query) exceeded: would use 9.32 GiB (attempt to allocate chunk of 9437072 bytes), maximum: 9.31 GiB: (avg_value_size_hint = 554.3526645768025, avg_chars_size = 655.623197492163, limit = 8189): (while reading column _contexts_flattened): (while reading from part /var/lib/clickhouse/data/default/transactions_local/90-20220613_13233969_13575698_2070/ from mark 0 with max_rows_to_read = 8189). Stack trace:
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
2. ? @ 0x8f40ed7 in /usr/bin/clickhouse
3. MemoryTracker::alloc(long) @ 0x8f3eec3 in /usr/bin/clickhouse
4. DB::DataTypeString::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const @ 0xcf49bf0 in /usr/bin/clickhouse
5. DB::MergeTreeReaderWide::readData(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::IDataType const&, DB::IColumn&, unsigned long, bool, unsigned long, bool) @ 0xda42c56 in /usr/bin/clickhouse
6. DB::MergeTreeReaderWide::readRows(unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda430fc in /usr/bin/clickhouse
7. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda61436 in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda620f4 in /usr/bin/clickhouse
9. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda6414e in /usr/bin/clickhouse
10. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0xda5babd in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::generate() @ 0xda5c5f7 in /usr/bin/clickhouse
12. DB::ISource::work() @ 0xdb9ac1b in /usr/bin/clickhouse
13. DB::SourceWithProgress::work() @ 0xdeef717 in /usr/bin/clickhouse
14. DB::TreeExecutorBlockInputStream::execute(bool, bool) @ 0xdbe264e in /usr/bin/clickhouse
15. DB::TreeExecutorBlockInputStream::readImpl() @ 0xdbe3e27 in /usr/bin/clickhouse
16. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
17. DB::ExpressionBlockInputStream::readImpl() @ 0xd27448a in /usr/bin/clickhouse
18. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
19. DB::PartialSortingBlockInputStream::readImpl() @ 0xd292f1f in /usr/bin/clickhouse
20. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
21. DB::MergeSortingBlockInputStream::readImpl() @ 0xd2b07dc in /usr/bin/clickhouse
22. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
23. DB::AsynchronousBlockInputStream::calculate() @ 0xce3d518 in /usr/bin/clickhouse
24. ? @ 0xce3ecf8 in /usr/bin/clickhouse
25. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8f6692b in /usr/bin/clickhouse
26. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8f67608 in /usr/bin/clickhouse
27. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8f657eb in /usr/bin/clickhouse
28. ? @ 0x8f63c33 in /usr/bin/clickhouse
29. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
30. __clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
1
Error in bootstrap-snuba.sh:4.
'$dcr snuba-api migrations migrate --force' exited with status 1
-> ./install.sh:main:33
--> bootstrap-snuba.sh:source:4
Cleaning up...
And here are the ClickHouse server logs. I believe the number of rows that have to be processed before the error occurs matters here; what do you think?
2022.09.22 22:07:56.113155 [ 95 ] {0e4714e9-8519-44be-a41a-a4ce3f49ed53} <Information> executeQuery: Read 1 rows, 55.00 B in 0.002 sec., 586 rows/sec., 31.52 KiB/sec.
2022.09.22 22:07:56.113400 [ 95 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2022.09.22 22:07:56.115933 [ 95 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2022.09.22 22:07:56.132633 [ 95 ] {} <Information> TCPHandler: Processed in 0.001 sec.
2022.09.22 22:07:56.135470 [ 95 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2022.09.22 22:07:56.138435 [ 95 ] {b1b56646-ff19-4d1c-a2ed-93dcf9b1a2f8} <Information> executeQuery: Read 11 rows, 1.31 KiB in 0.002 sec., 5269 rows/sec., 629.74 KiB/sec.
2022.09.22 22:07:56.138676 [ 95 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2022.09.22 22:07:56.150629 [ 95 ] {1d7f92b8-70a4-4dc0-a57a-332f67ea3bda} <Information> executeQuery: Read 1 rows, 1.56 KiB in 0.011 sec., 89 rows/sec., 139.19 KiB/sec.
2022.09.22 22:07:56.150898 [ 95 ] {} <Information> TCPHandler: Processed in 0.012 sec.
2022.09.22 22:07:56.164964 [ 95 ] {} <Information> TCPHandler: Processed in 0.013 sec.
2022.09.22 22:07:56.167686 [ 95 ] {1f19c31f-e9b9-46f9-b2c4-2c803258dff3} <Information> executeQuery: Read 1 rows, 4.01 KiB in 0.002 sec., 593 rows/sec., 2.32 MiB/sec.
2022.09.22 22:07:56.167877 [ 95 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2022.09.22 22:10:28.750530 [ 95 ] {6a3424bd-efd4-4ade-bde0-4155529f0cd0} <Information> executeQuery: Read 93906556 rows, 176.17 GiB in 152.582 sec., 615451 rows/sec., 1.15 GiB/sec.
2022.09.22 22:10:28.778317 [ 95 ] {} <Information> TCPHandler: Processed in 152.610 sec.
2022.09.22 22:10:50.720420 [ 95 ] {817421f4-7bb2-468d-b083-e88e0cb0ff80} <Error> executeQuery: Code: 241, e.displayText() = DB::Exception: Memory limit (for query) exceeded: would use 9.32 GiB (attempt to allocate chunk of 9437072 bytes), maximum: 9.31 GiB: (avg_value_size_hint = 554.3526645768025, avg_chars_size = 655.623197492163, limit = 8189): (while reading column _contexts_flattened): (while reading from part /var/lib/clickhouse/data/default/transactions_local/90-20220613_13233969_13575698_2070/ from mark 0 with max_rows_to_read = 8189) (version 20.3.9.70 (official build)) (from 169.254.2.6:34220) (in query: INSERT INTO transactions_local_new SELECT * FROM transactions_local ORDER BY toStartOfDay(finish_ts), project_id, event_id LIMIT 100000 OFFSET 100000; ), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
2. ? @ 0x8f40ed7 in /usr/bin/clickhouse
3. MemoryTracker::alloc(long) @ 0x8f3eec3 in /usr/bin/clickhouse
4. DB::DataTypeString::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const @ 0xcf49bf0 in /usr/bin/clickhouse
5. DB::MergeTreeReaderWide::readData(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::IDataType const&, DB::IColumn&, unsigned long, bool, unsigned long, bool) @ 0xda42c56 in /usr/bin/clickhouse
6. DB::MergeTreeReaderWide::readRows(unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda430fc in /usr/bin/clickhouse
7. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0xda61436 in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda620f4 in /usr/bin/clickhouse
9. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0xda6414e in /usr/bin/clickhouse
10. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0xda5babd in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::generate() @ 0xda5c5f7 in /usr/bin/clickhouse
12. DB::ISource::work() @ 0xdb9ac1b in /usr/bin/clickhouse
13. DB::SourceWithProgress::work() @ 0xdeef717 in /usr/bin/clickhouse
14. DB::TreeExecutorBlockInputStream::execute(bool, bool) @ 0xdbe264e in /usr/bin/clickhouse
15. DB::TreeExecutorBlockInputStream::readImpl() @ 0xdbe3e27 in /usr/bin/clickhouse
16. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
17. DB::ExpressionBlockInputStream::readImpl() @ 0xd27448a in /usr/bin/clickhouse
18. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
19. DB::PartialSortingBlockInputStream::readImpl() @ 0xd292f1f in /usr/bin/clickhouse
20. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
21. DB::MergeSortingBlockInputStream::readImpl() @ 0xd2b07dc in /usr/bin/clickhouse
22. DB::IBlockInputStream::read() @ 0xce48ccf in /usr/bin/clickhouse
23. DB::AsynchronousBlockInputStream::calculate() @ 0xce3d518 in /usr/bin/clickhouse
24. ? @ 0xce3ecf8 in /usr/bin/clickhouse
25. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8f6692b in /usr/bin/clickhouse
26. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() const @ 0x8f67608 in /usr/bin/clickhouse
27. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8f657eb in /usr/bin/clickhouse
28. ? @ 0x8f63c33 in /usr/bin/clickhouse
29. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
30. __clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
2022.09.22 22:11:01.234137 [ 95 ] {} <Information> TCPHandler: Processed in 32.400 sec.
2022.09.22 22:11:01.315810 [ 95 ] {} <Information> TCPHandler: Done processing connection.
2022.09.22 22:11:16.472259 [ 45 ] {} <Information> Application: Received termination signal (Terminated)
2022.09.22 22:11:17.363645 [ 1 ] {} <Information> Application: Closed all listening sockets.
2022.09.22 22:11:17.364686 [ 1 ] {} <Information> Application: Closed connections.
2022.09.22 22:11:17.405474 [ 1 ] {} <Information> Application: Shutting down storages.
2022.09.22 22:11:21.877578 [ 1 ] {} <Information> Application: shutting down
2022.09.22 22:11:21.877729 [ 45 ] {} <Information> BaseDaemon: Stop SignalListener thread
After some more online searching, I was able to finish the migrations by adding these statements to the 0002_transactions_onpremise_fix_orderby_and_partitionby.py migration file:
clickhouse.execute("set max_memory_usage = 16000000000;")
clickhouse.execute("set max_memory_usage_for_user = 16000000000;")
clickhouse.execute("set max_bytes_before_external_group_by = 1000000000;")
clickhouse.execute("set max_bytes_before_external_sort = 1000000000;")
clickhouse.execute("set max_block_size = 512, max_threads = 1;")
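In context, these are plain SET statements run on the migration's ClickHouse connection before the batched `INSERT INTO transactions_local_new SELECT ...` queries. The helper below is only an illustration of that pattern, not the actual migration code; `set_statements` and the value choices are mine, sized for a host with roughly 16 GB available to ClickHouse:

```python
# Illustrative only: render the session settings used in the workaround
# as SET statements. Tune the byte limits to your host's available RAM.
SETTINGS = {
    "max_memory_usage": 16_000_000_000,           # per-query memory cap
    "max_memory_usage_for_user": 16_000_000_000,  # per-user memory cap
    "max_bytes_before_external_group_by": 1_000_000_000,  # spill GROUP BY to disk
    "max_bytes_before_external_sort": 1_000_000_000,      # spill ORDER BY to disk
    "max_block_size": 512,  # read smaller blocks per step
    "max_threads": 1,       # single-threaded read keeps peak memory low
}

def set_statements(settings):
    """Render one ClickHouse SET statement per setting."""
    return [f"set {name} = {value};" for name, value in settings.items()]

# In the migration these would be executed before the INSERT ... SELECT:
# for stmt in set_statements(SETTINGS):
#     clickhouse.execute(stmt)
```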
Oh awesome, glad you were able to figure this out!
If you run into additional issues, feel free to reopen this issue!