nodetool repair command failed with exit code 3 during drop keyspace
Packages
Scylla version: 5.5.0~dev-20240419.a5dae74aee4b with build-id 13e359437a7035b8b40bcd744a70b6abddbc491b
Kernel Version: 5.15.0-1060-aws
Issue description
- [x] This issue is a regression.
- [ ] It is unknown if this issue is a regression.
The issue looks like a regression.
During the no_corrupt_repair nemesis, nodetool repair is started in the background, and while it is running, keyspaces are dropped.
The drop keyspace failed with the error:
Traceback (most recent call last):
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 1594, in disrupt_no_corrupt_repair
session.execute(SimpleStatement(
File "/home/ubuntu/scylla-cluster-tests/sdcm/utils/common.py", line 1824, in execute_verbose
return execute_orig(*args, **kwargs)
File "cassandra/cluster.py", line 2699, in cassandra.cluster.Session.execute
File "cassandra/cluster.py", line 5018, in cassandra.cluster.ResponseFuture.result
cassandra.InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot drop non existing table 'standard1' in keyspace 'drop_table_during_repair_ks_8'."
Similar to issue https://github.com/scylladb/scylladb/issues/18479, but in addition nodetool repair failed with the error:
Command: '/usr/bin/nodetool repair '
Exit code: 3
Stdout:
[2024-04-20 21:54:22,765] Repair session 5
[2024-04-20 21:54:25,387] Repair session 5 finished
[2024-04-20 21:54:25,392] Starting repair command #6, repairing 1 ranges for keyspace drop_table_during_repair_ks_6 (parallelism=SEQUENTIAL, full=true)
[2024-04-20 21:54:25,392] Repair session 6
[2024-04-20 21:54:27,813] Repair session 6 finished
[2024-04-20 21:54:27,817] Starting repair command #7, repairing 1 ranges for keyspace drop_table_during_repair_ks_9 (parallelism=SEQUENTIAL, full=true)
[2024-04-20 21:54:27,817] Repair session 7
[2024-04-20 21:54:30,438] Repair session 7 finished
[2024-04-20 21:54:30,440] Starting repair command #8, repairing 1 ranges for keyspace scylla_bench (parallelism=SEQUENTIAL, full=true)
[2024-04-20 21:54:30,440] Repair session 8
Stderr:
Repair session 8 failed
and these messages were found in the log of node 1:
2024-04-20T22:19:30.955+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 0:stmt] migration_manager - Drop table 'drop_table_during_repair_ks_8.standard1'
2024-04-20T22:19:30.955+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 0:main] schema_tables - Dropping drop_table_during_repair_ks_8.standard1 id=473b5c80-ff60-11ee-be67-1bff0286bc41 version=473b5c81-ff60-11ee-be67-1bff0286bc41
2024-04-20T22:19:30.955+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 0:main] database - Dropping drop_table_during_repair_ks_8.standard1 with auto-snapshot
2024-04-20T22:19:30.955+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 0:main] database - Truncating drop_table_during_repair_ks_8.standard1 with auto-snapshot
2024-04-20T22:20:09.706+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 0:main] database - Truncated drop_table_during_repair_ks_8.standard1
2024-04-20T22:20:09.706+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 0:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_8.standard1 compaction_group=0 due to table removal
2024-04-20T22:20:09.706+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 5:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_8.standard1 compaction_group=0 due to table removal
2024-04-20T22:20:09.706+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 2:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_8.standard1 compaction_group=0 due to table removal
2024-04-20T22:20:09.706+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 4:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_8.standard1 compaction_group=0 due to table removal
2024-04-20T22:20:09.706+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 1:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_8.standard1 compaction_group=0 due to table removal
2024-04-20T22:20:09.707+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 6:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_8.standard1 compaction_group=0 due to table removal
2024-04-20T22:20:09.707+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 2:comp] compaction - [Compact system.truncated 261a14c0-ff64-11ee-b6cb-e72f0264c52b] Compacted 2 sstables to [/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/me-3gfg_1q1l_3n6812s2nk97nu7p63-big-Data.db:level=0]. 28kB to 22kB (~80% of original) in 31ms = 908kB/s. ~256 total partitions merged to 2.
2024-04-20T22:20:27.705+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !NOTICE | sudo[29145]: scyllaadm : PWD=/home/scyllaadm ; USER=root ; COMMAND=/usr/bin/coredumpctl -q --json=short
2024-04-20T22:20:27.705+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | sudo[29145]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
2024-04-20T22:20:27.705+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | sudo[29145]: pam_unix(sudo:session): session closed for user root
2024-04-20T22:20:30.955+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !WARNING | scylla[18995]: [shard 0:stmt] raft_group_registry - group [052c5d30-fec5-11ee-9de8-51e611cfd09d] raft operation [add_entry] timed out; timeout requested at [service/migration_manager.cc(972:109) `future<> service::migration_manager::announce_with_raft(std::vector<mutation>, group0_guard, std::string_view)`], original error raft::request_aborted (Request is aborted by a caller)
2024-04-20T22:20:32.955+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 0:main] migration_manager - Gossiping my schema version 0ed04c6c-ff64-11ee-cfaa-7002afcf9bf5
2024-04-20T22:20:32.955+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 0:main] schema_tables - Schema version changed to 0ed04c6c-ff64-11ee-cfaa-7002afcf9bf5
2024-04-20T22:20:37.705+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 6:strm] repair - repair[53e665c5-7f37-4502-9837-020eabbdbb48]: Started to repair 2 out of 2 tables in keyspace=scylla_bench, table=test_counters, table_id=42977d90-fec9-11ee-8e05-18b903b642ca, repair_reason=repair
2024-04-20T22:20:38.455+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !INFO | scylla[18995]: [shard 6:strm] repair - repair[53e665c5-7f37-4502-9837-020eabbdbb48]: stats: repair_reason=repair, keyspace=scylla_bench, tables={test, test_counters}, ranges_nr=798, round_nr=3799, round_nr_fast_path_already_synced=3628, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=171, rpc_call_nr=21042, tx_hashes_nr=2107, rx_hashes_nr=29516253, duration=1567.6217 seconds, tx_row_nr=3863, rx_row_nr=2107, tx_row_bytes=1993308, rx_row_bytes=2075395, row_from_disk_bytes={{10.4.8.101, 40136524572}, {10.4.9.183, 42518831460}, {10.4.9.17, 61499014392}, {10.4.10.107, 40342853868}}, row_from_disk_nr={{10.4.8.101, 118068917}, {10.4.9.183, 124976285}, {10.4.9.17, 180975837}, {10.4.10.107, 118906823}}, row_from_disk_bytes_per_sec={{10.4.8.101, 24.4174}, {10.4.9.183, 25.8666}, {10.4.9.17, 37.4134}, {10.4.10.107, 24.5429}} MiB/s, row_from_disk_rows_per_sec={{10.4.8.101, 75317.2}, {10.4.9.183, 79723.5}, {10.4.9.17, 115446}, {10.4.10.107, 75851.7}} Rows/s, tx_row_nr_peer={{10.4.8.101, 1540}, {10.4.9.183, 1207}, {10.4.10.107, 1116}}, rx_row_nr_peer={{10.4.8.101, 1395}, {10.4.9.183, 208}, {10.4.10.107, 504}}
2024-04-20T22:38:00.455+00:00 longevity-twcs-48h-master-db-node-e5517f29-1 !WARNING | scylla[18995]: [shard 0:strm] repair - repair[53e665c5-7f37-4502-9837-020eabbdbb48]: user-requested repair failed: std::runtime_error ({shard 5: std::runtime_error (repair[53e665c5-7f37-4502-9837-020eabbdbb48]: 0 out of 1596 ranges failed, keyspace=scylla_bench, tables={test, test_counters}, repair_reason=repair, nodes_down_during_repair={}, aborted_by_user=false, failed_because=service::raft_operation_timeout_error (group [052c5d30-fec5-11ee-9de8-51e611cfd09d] raft operation [read_barrier] timed out))})
Impact
How frequently does it reproduce?
Installation details
Cluster size: 4 nodes (i3en.2xlarge)
Scylla Nodes used in this run:
- longevity-twcs-48h-master-db-node-e5517f29-5 (34.240.166.199 | 10.4.10.107) (shards: 7)
- longevity-twcs-48h-master-db-node-e5517f29-4 (52.210.172.137 | 10.4.9.76) (shards: 7)
- longevity-twcs-48h-master-db-node-e5517f29-3 (54.75.94.63 | 10.4.8.101) (shards: 7)
- longevity-twcs-48h-master-db-node-e5517f29-2 (18.201.32.18 | 10.4.9.183) (shards: 7)
- longevity-twcs-48h-master-db-node-e5517f29-1 (54.194.79.51 | 10.4.9.17) (shards: 7)
OS / Image: ami-00f7c6bc1f946636b (aws: undefined_region)
Test: longevity-twcs-48h-test
Test id: e5517f29-ad39-4fcf-8203-8a8404416893
Test name: scylla-master/tier1/longevity-twcs-48h-test
Test config file(s):
Logs and commands
- Restore Monitor Stack command:
$ hydra investigate show-monitor e5517f29-ad39-4fcf-8203-8a8404416893 - Restore monitor on AWS instance using Jenkins job
- Show all stored logs command:
$ hydra investigate show-logs e5517f29-ad39-4fcf-8203-8a8404416893
Logs:
- db-cluster-e5517f29.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/e5517f29-ad39-4fcf-8203-8a8404416893/20240421_040240/db-cluster-e5517f29.tar.gz
- sct-runner-events-e5517f29.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/e5517f29-ad39-4fcf-8203-8a8404416893/20240421_040240/sct-runner-events-e5517f29.tar.gz
- sct-e5517f29.log.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/e5517f29-ad39-4fcf-8203-8a8404416893/20240421_040240/sct-e5517f29.log.tar.gz
- loader-set-e5517f29.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/e5517f29-ad39-4fcf-8203-8a8404416893/20240421_040240/loader-set-e5517f29.tar.gz
- monitor-set-e5517f29.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/e5517f29-ad39-4fcf-8203-8a8404416893/20240421_040240/monitor-set-e5517f29.tar.gz
@asias @Deexie what raft operation times out during repair? Is this the raft barrier we execute to check whether the schema is present?

> what raft operation times out during repair? Is this the raft barrier we execute to check whether the schema is present?

Yes, it looks like this. That means that repair failed anyway, but probably with some meaningful, expected error. I think the raft error should be caught and the original one rethrown, right?

> Yes, it looks like this. That means that repair failed anyway, but probably with some meaningful, expected error.

Possibly yes.

> I think the raft error should be caught and the original one rethrown, right?

The original error is probably due to the keyspace being dropped, in which case we don't want to raise it either.

But if the barrier times out, we can still be out of date and have no way to find that out.

> But if the barrier times out, we can still be out of date and have no way to find that out.

Right, so I think we should swallow the barrier timing out and just work with what we have (the local schema) to check for a dropped keyspace/table.
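The proposed handling can be sketched as follows. This is a minimal illustration, not the actual Scylla code: all type and function names here (`group0_read_barrier`, `keyspace_exists_for_repair`, the simulated local schema set) are hypothetical stand-ins.

```cpp
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for the raft timeout seen in the logs.
struct raft_operation_timeout_error : std::runtime_error {
    raft_operation_timeout_error()
        : std::runtime_error("raft operation [read_barrier] timed out") {}
};

// The set of keyspaces this node knows about locally (possibly stale).
static const std::set<std::string> local_keyspaces = {"scylla_bench"};

// Simulated group0 read barrier that may time out under heavy schema churn.
inline void group0_read_barrier(bool simulate_timeout) {
    if (simulate_timeout) {
        throw raft_operation_timeout_error();
    }
}

// Proposed handling: if the barrier times out, swallow the error and fall
// back to the local schema to decide whether the keyspace still exists,
// instead of failing the whole repair with an unrelated raft error.
inline bool keyspace_exists_for_repair(const std::string& ks,
                                       bool simulate_timeout) {
    try {
        group0_read_barrier(simulate_timeout);
    } catch (const raft_operation_timeout_error&) {
        // We may act on stale schema here; a keyspace dropped concurrently
        // will still be detected on a later barrier or repair round.
    }
    return local_keyspaces.count(ks) > 0;
}
```

The trade-off discussed above is visible here: after a swallowed timeout, the local check may pass for a keyspace that was already dropped cluster-wide, so the repair can still fail later, but with a schema-related error rather than a raft timeout.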
A read barrier timeout also happened when replacing a node during the add-drop column nemesis:
2024-05-12 12:33:46.575 <2024-05-12 12:33:46.481>: (DatabaseLogEvent Severity.ERROR) period_type=one-time event_id=28b31151-2096-4c2d-802c-9f41916770fb during_nemesis=TerminateAndReplaceNode,AddDropColumn: type=DATABASE_ERROR regex=(^ERROR|!\s*?ERR).*\[shard.*\] line_number=1526 node=parallel-topology-schema-changes-mu-db-node-073570cd-22
2024-05-12T12:33:46.481+00:00 parallel-topology-schema-changes-mu-db-node-073570cd-22 !ERR | scylla[6913]: [shard 0:main] init - Startup failed: service::raft_operation_timeout_error (group [18715100-0fed-11ef-b82b-6db5dbca6ea9] raft operation [read_barrier] timed out)
Can it be the same?
Packages
Scylla version: 5.5.0~dev-20240510.28791aa2c1d3 with build-id 893c2a68becf3d3bcbbf076980b1b831b9b76e29
Kernel Version: 5.15.0-1060-aws
Installation details
Cluster size: 12 nodes (i3en.2xlarge)
Scylla Nodes used in this run:
- parallel-topology-schema-changes-mu-db-node-073570cd-9 (35.179.74.159 | 10.3.11.84) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-8 (18.170.45.142 | 10.3.10.116) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-7 (13.40.167.163 | 10.3.8.204) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-6 (54.217.131.124 | 10.4.9.104) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-5 (52.214.96.24 | 10.4.8.209) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-4 (52.211.178.138 | 10.4.10.241) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-3 (54.171.53.112 | 10.4.8.213) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-22 (3.249.54.85 | 10.4.8.19) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-21 (35.177.195.76 | 10.3.8.19) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-20 (3.252.75.186 | 10.4.10.186) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-2 (54.247.18.121 | 10.4.11.145) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-19 (13.40.16.198 | 10.3.10.45) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-18 (18.171.247.192 | 10.3.11.53) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-17 (18.171.205.64 | 10.3.10.204) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-16 (35.178.49.116 | 10.3.8.209) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-15 (34.244.129.171 | 10.4.8.157) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-14 (3.255.158.84 | 10.4.8.131) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-13 (3.254.85.235 | 10.4.11.100) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-12 (3.10.9.135 | 10.3.9.139) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-11 (18.134.7.128 | 10.3.10.108) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-10 (18.170.50.64 | 10.3.8.100) (shards: 7)
- parallel-topology-schema-changes-mu-db-node-073570cd-1 (3.253.60.148 | 10.4.8.4) (shards: 7)
OS / Image: ami-0b7480423a402aa95 ami-044c35ee6970271fc (aws: undefined_region)
Test: longevity-multidc-schema-topology-changes-12h-test
Test id: 073570cd-26dd-48aa-9601-5f141fd862ba
Test name: scylla-master/tier1/longevity-multidc-schema-topology-changes-12h-test
Test config file(s):
Logs and commands
- Restore Monitor Stack command:
$ hydra investigate show-monitor 073570cd-26dd-48aa-9601-5f141fd862ba - Restore monitor on AWS instance using Jenkins job
- Show all stored logs command:
$ hydra investigate show-logs 073570cd-26dd-48aa-9601-5f141fd862ba
Logs:
- db-cluster-073570cd.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/073570cd-26dd-48aa-9601-5f141fd862ba/20240512_131116/db-cluster-073570cd.tar.gz
- sct-runner-events-073570cd.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/073570cd-26dd-48aa-9601-5f141fd862ba/20240512_131116/sct-runner-events-073570cd.tar.gz
- sct-073570cd.log.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/073570cd-26dd-48aa-9601-5f141fd862ba/20240512_131116/sct-073570cd.log.tar.gz
- loader-set-073570cd.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/073570cd-26dd-48aa-9601-5f141fd862ba/20240512_131116/loader-set-073570cd.tar.gz
- monitor-set-073570cd.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/073570cd-26dd-48aa-9601-5f141fd862ba/20240512_131116/monitor-set-073570cd.tar.gz
> Can it be the same?

Yes, it looks like it is the same. @tgrabiec said we cannot just swallow this error, so I'm not sure how we should proceed here. Do we need to investigate each such timeout?

What's the timeout currently? Maybe it's too short.

> What's the timeout currently? Maybe it's too short.

It's 10 seconds.

We've seen schema changes take more than that in the past, which could explain why the barrier times out.

What's the reasoning behind the 10s timeout? I think repair doesn't have to time out on the group0 barrier at all, provided that it reliably, eventually completes. If the user wants to give up, they should cancel the repair.

It was meant to be a fallback in case half of the cluster was down. But if we do not want to proceed with local data anyway, there is no point. I will delete it.
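The agreed direction, removing the fixed deadline and bounding the wait only by the caller's cancellation, can be sketched roughly like this. All names (`abort_source`, `read_barrier`, `attempts_until_success`) are illustrative assumptions, not the actual Seastar/Scylla API.

```cpp
#include <atomic>
#include <stdexcept>

// Thrown when the caller (e.g. a user cancelling repair) requests an abort.
struct abort_requested_exception : std::runtime_error {
    abort_requested_exception() : std::runtime_error("aborted by caller") {}
};

// Minimal abort-source stand-in: a flag the waiter polls.
struct abort_source {
    std::atomic<bool> aborted{false};
    void request_abort() { aborted = true; }
    void check() const {
        if (aborted) {
            throw abort_requested_exception();
        }
    }
};

// The barrier retries until it succeeds or the caller aborts, instead of
// giving up after a fixed 10s deadline; `attempts_until_success` simulates
// how long the cluster takes to settle.
inline int read_barrier(abort_source& as, int attempts_until_success) {
    for (int attempts = 1;; ++attempts) {
        as.check();  // throws if the caller cancelled the repair
        if (attempts >= attempts_until_success) {
            return attempts;  // barrier completed
        }
    }
}
```

With this shape, a slow schema change no longer surfaces as a spurious `raft_operation_timeout_error`; the only way out of the wait is barrier completion or an explicit cancellation by the caller.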
The issue reproduced again.
Packages
Scylla version: 6.0.0~rc3-20240605.c6f0a3267ef0 with build-id 19d3e81fcfd8d4fb2ce39328f17a042906e89f5b
Kernel Version: 5.15.0-1062-aws
Issue description
While repair was running, a drop table was executed, and node 1 triggered a coredump:
2024-06-07T02:57:16.515+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 0:main] database - Truncated drop_table_during_repair_ks_0.standard1
2024-06-07T02:57:16.515+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 0:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=11 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 1:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=31 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 5:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=19 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 20:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=29 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 9:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=3 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 12:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=18 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 14:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=10 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 27:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=30 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 8:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=27 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 8:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=6 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 21:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=28 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 24:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=21 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 16:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=13 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 4:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=4 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 16:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=8 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 23:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=23 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 2:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=15 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 26:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=17 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 18:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=5 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 13:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=26 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 6:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=16 due to table removal
2024-06-07T02:57:16.516+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 11:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=22 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 19:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=2 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 27:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=0 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 29:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=9 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 28:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=14 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 17:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=12 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 13:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=1 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 17:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=7 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 25:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=20 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 10:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=25 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 22:main] compaction_manager - Stopping 1 tasks for 0 ongoing compactions for table drop_table_during_repair_ks_0.standard1 compaction_group=24 due to table removal
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 0:main] schema_tables - Tablet metadata changed
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 8:comp] compaction - [Compact system.truncated a5e53740-2479-11ef-9884-8f1b6f53acb6] Compacted 2 sstables to [/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_2sits2bhwldkyta1ti-big-Data.db:level=0]. 11kB to 5959 bytes (~53% of original) in 24ms = 463kB/s. ~256 total partitions merged to 1.
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 8:comp] compaction - [Compact system.truncated a5e955f0-2479-11ef-9884-8f1b6f53acb6] Compacting [/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_2mqhs2bhwldkyta1ti-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_2qlds2bhwldkyta1ti-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_2onxs2bhwldkyta1ti-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_2nlcw2bhwldkyta1ti-big-Data.db:level=0:origin=memtable]
2024-06-07T02:57:16.517+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 8:comp] compaction - [Compact system.truncated a5e955f0-2479-11ef-9884-8f1b6f53acb6] Compacted 4 sstables to [/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_2y3g12bhwldkyta1ti-big-Data.db:level=0]. 22kB to 6415 bytes (~28% of original) in 16ms = 1MB/s. ~512 total partitions merged to 1.
2024-06-07T02:57:16.774+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 8:comp] compaction - [Compact system.truncated a5ec6330-2479-11ef-9884-8f1b6f53acb6] Compacting [/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_2sits2bhwldkyta1ti-big-Data.db:level=0:origin=compaction,/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_2y3g12bhwldkyta1ti-big-Data.db:level=0:origin=compaction]
2024-06-07T02:57:16.774+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 0:main] migration_manager - Gossiping my schema version a59d8e18-2479-11ef-4d5f-67e659d5f650
2024-06-07T02:57:16.774+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 0:main] schema_tables - Schema version changed to a59d8e18-2479-11ef-4d5f-67e659d5f650
2024-06-07T02:57:16.774+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 8:comp] compaction - [Compact system.truncated a5ec6330-2479-11ef-9884-8f1b6f53acb6] Compacted 2 sstables to [/var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/md-3ggs_087g_330ww2bhwldkyta1ti-big-Data.db:level=0]. 12kB to 6257 bytes (~50% of original) in 16ms = 773kB/s. ~256 total partitions merged to 1.
2024-06-07T02:57:17.259+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 25:comp] compaction - [Compact sec_index.users_address_ind_index a5914ae0-2479-11ef-aed8-8f026f53acb6] Compacted 2 sstables to [/var/lib/scylla/data/sec_index/users_address_ind_index-52752da1244111ef9e1df3af6f6c0bf8/md-3ggs_087f_5gz1s2npykoe3ujohi-big-Data.db:level=0]. 10MB to 9MB (~87% of original) in 1011ms = 10MB/s. ~101248 total partitions merged to 88813.
2024-06-07T02:57:17.259+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 25:comp] compaction - [Compact mview.users_by_password a62c0440-2479-11ef-aed8-8f026f53acb6] Compacting [/var/lib/scylla/data/mview/users_by_password-dec69b50243111ef8a952d4bba60ba7b/md-3ggs_087f_5wmbk2npykoe3ujohi-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/mview/users_by_password-dec69b50243111ef8a952d4bba60ba7b/md-3ggs_07rn_0c03k2npykoe3ujohi-big-Data.db:level=0:origin=compaction]
2024-06-07T02:57:17.259+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 4:comp] compaction - [Compact mview.users a5db4c30-2479-11ef-90ce-8f156f53acb6] Compacted 2 sstables to [/var/lib/scylla/data/mview/users-d9ad37a0243111efb5333cb02fe4250e/md-3ggs_087g_2et00279xmei1zwv5i-big-Data.db:level=0]. 167MB to 104MB (~62% of original) in 530ms = 315MB/s. ~16000 total partitions merged to 9841.
2024-06-07T02:57:17.259+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 4:comp] compaction - [Compact mview.users a62d15b0-2479-11ef-90ce-8f156f53acb6] Compacting [/var/lib/scylla/data/mview/users-d9ad37a0243111efb5333cb02fe4250e/md-3ggs_087g_2et00279xmei1zwv5i-big-Data.db:level=0:origin=compaction,/var/lib/scylla/data/mview/users-d9ad37a0243111efb5333cb02fe4250e/md-3ggs_083o_3ztgx279xmei1zwv5i-big-Data.db:level=0:origin=compaction]
2024-06-07T02:57:17.259+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: [shard 6:comp] compaction - [Compact mview.users a5c52c20-2479-11ef-bc6c-8f066f53acb6] Compacted 2 sstables to [/var/lib/scylla/data/mview/users-d9ad37a0243111efb5333cb02fe4250e/md-3ggs_087g_1jq682v5kd0xhmi3li-big-Data.db:level=0]. 186MB to 113MB (~60% of original) in 734ms = 254MB/s. ~17792 total partitions merged to 10735.
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !ERR | scylla[23122]: [shard 8:stmt] tablets - SSTable set wasn't found for tablet 8 of table sec_index.users, at: 0x6402d6e 0x6403380 0x6403668 0x5ebb3a7 0x1d46eec 0x24c04c1 0x1cc94a5 0x20b92eb 0x20b8549 0x20d0f81 0x20cfab0 0x3c1f15c 0x3c1ff5e 0x3caeaa3 0x1c6f562 0x1c6d469 0x1ad0747 0x1b67e01 0x1b8853b 0x1b88108 0x5e91db2 0x5efba1f 0x5efcd07 0x5f20c70 0x5ebc0da /opt/scylladb/libreloc/libc.so.6+0x8c946 /opt/scylladb/libreloc/libc.so.6+0x11296f
--------
seastar::lambda_task<seastar::execution_stage::flush()::$_0>
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: Aborting on shard 8.
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: Backtrace:
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x5ee9de8
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x5f20671
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: /opt/scylladb/libreloc/libc.so.6+0x3dbaf
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: /opt/scylladb/libreloc/libc.so.6+0x8e883
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: /opt/scylladb/libreloc/libc.so.6+0x3dafd
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: /opt/scylladb/libreloc/libc.so.6+0x2687e
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x5ebb427
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x1d46eec
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x24c04c1
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x1cc94a5
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x20b92eb
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x20b8549
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x20d0f81
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x20cfab0
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x3c1f15c
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x3c1ff5e
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x3caeaa3
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x1c6f562
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x1c6d469
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x1ad0747
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x1b67e01
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x1b8853b
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x1b88108
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x5e91db2
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x5efba1f
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x5efcd07
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x5f20c70
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: 0x5ebc0da
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: /opt/scylladb/libreloc/libc.so.6+0x8c946
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: /opt/scylladb/libreloc/libc.so.6+0x11296f
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]: Backtrace:
2024-06-07T02:57:17.524+00:00 longevity-mv-si-4d-6-0-db-node-bb37566d-1 !INFO | scylla[23122]:
[Backtrace #0]
void seastar::backtrace<seastar::backtrace_buffer::append_backtrace()::{lambda(seastar::frame)#1}>(seastar::backtrace_buffer::append_backtrace()::{lambda(seastar::frame)#1}&&) at ./build/release/seastar/./seastar/include/seastar/util/backtrace.hh:68
(inlined by) seastar::backtrace_buffer::append_backtrace() at ./build/release/seastar/./seastar/src/core/reactor.cc:825
(inlined by) seastar::print_with_backtrace(seastar::backtrace_buffer&, bool) at ./build/release/seastar/./seastar/src/core/reactor.cc:855
seastar::print_with_backtrace(char const*, bool) at ./build/release/seastar/./seastar/src/core/reactor.cc:867
(inlined by) seastar::sigabrt_action() at ./build/release/seastar/./seastar/src/core/reactor.cc:4071
(inlined by) operator() at ./build/release/seastar/./seastar/src/core/reactor.cc:4047
(inlined by) __invoke at ./build/release/seastar/./seastar/src/core/reactor.cc:4043
/data/scylla-s3-reloc.cache/by-build-id/19d3e81fcfd8d4fb2ce39328f17a042906e89f5b/extracted/scylla/libreloc/libc.so.6: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=9148cab1b932d44ef70e306e9c02ee38d06cad51, for GNU/Linux 3.2.0, not stripped
__GI___sigaction at :?
__pthread_kill_implementation at ??:?
__GI_raise at :?
__GI_abort at :?
seastar::on_internal_error(seastar::logger&, std::basic_string_view<char, std::char_traits<char> >) at ./build/release/seastar/./seastar/src/core/on_internal_error.cc:57
replica::tablet_sstable_set::find_sstable_set(unsigned long) const at ./replica/tablets.cc:429
(inlined by) replica::tablet_sstable_set::create_single_key_sstable_reader(replica::table*, seastar::lw_shared_ptr<schema const>, reader_permit, utils::estimated_histogram&, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>, seastar::noncopyable_function<bool (sstables::sstable const&)> const&) const at ./replica/tablets.cc:647
sstables::sstable_set::create_single_key_sstable_reader(replica::table*, seastar::lw_shared_ptr<schema const>, reader_permit, utils::estimated_histogram&, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>, seastar::noncopyable_function<bool (sstables::sstable const&)> const&) const at ./sstables/sstable_set.cc:1268
replica::table::make_sstable_reader(seastar::lw_shared_ptr<schema const>, reader_permit, seastar::lw_shared_ptr<sstables::sstable_set const>, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>, seastar::noncopyable_function<bool (sstables::sstable const&)> const&) const at ./replica/table.cc:109
(inlined by) operator() at ./replica/table.cc:2248
(inlined by) flat_mutation_reader_v2 std::__invoke_impl<flat_mutation_reader_v2, replica::table::sstables_as_snapshot_source()::$_0::operator()() const::{lambda(seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>)#1}&, seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag> >(std::__invoke_other, replica::table::sstables_as_snapshot_source()::$_0::operator()() const::{lambda(seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>)#1}&, seastar::lw_shared_ptr<schema const>&&, reader_permit&&, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr&&, seastar::bool_class<streamed_mutation::forwarding_tag>&&, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>&&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:61
(inlined by) std::enable_if<is_invocable_r_v<flat_mutation_reader_v2, replica::table::sstables_as_snapshot_source()::$_0::operator()() const::{lambda(seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>)#1}&, seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag> >, std::enable_if>::type std::__invoke_r<flat_mutation_reader_v2, replica::table::sstables_as_snapshot_source()::$_0::operator()() const::{lambda(seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>)#1}&, seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag> >(flat_mutation_reader_v2&&, (replica::table::sstables_as_snapshot_source()::$_0::operator()() const::{lambda(seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>)#1}&)...) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:114
(inlined by) std::_Function_handler<flat_mutation_reader_v2 (seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>), replica::table::sstables_as_snapshot_source()::$_0::operator()() const::{lambda(seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>)#1}>::_M_invoke(std::_Any_data const&, seastar::lw_shared_ptr<schema const>&&, reader_permit&&, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr&&, seastar::bool_class<streamed_mutation::forwarding_tag>&&, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>&&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/std_function.h:290
std::function<flat_mutation_reader_v2 (seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>)>::operator()(seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>) const at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/std_function.h:591
(inlined by) mutation_source::make_reader_v2(seastar::lw_shared_ptr<schema const>, reader_permit, interval<dht::ring_position> const&, query::partition_slice const&, tracing::trace_state_ptr, seastar::bool_class<streamed_mutation::forwarding_tag>, seastar::bool_class<mutation_reader::partition_range_forwarding_tag>) const at ././readers/mutation_source.hh:142
(inlined by) row_cache::create_underlying_reader(cache::read_context&, mutation_source&, interval<dht::ring_position> const&) at ./row_cache.cc:49
(inlined by) operator() at ././read_context.hh:98
(inlined by) seastar::future<void> seastar::futurize<void>::invoke<cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long)::{lambda()#1}>(cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long)::{lambda()#1}&&) at ././seastar/include/seastar/core/future.hh:2032
(inlined by) seastar::future<void> seastar::futurize<void>::invoke<cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long)::{lambda()#1}>(cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long)::{lambda()#1}&&, seastar::internal::monostate) at ././seastar/include/seastar/core/future.hh:1879
(inlined by) seastar::future<void> seastar::future<void>::then_impl<cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long)::{lambda()#1}, seastar::future<void> >(cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long)::{lambda()#1}&&) at ././seastar/include/seastar/core/future.hh:1503
(inlined by) seastar::future<void> seastar::future<void>::then<cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long)::{lambda()#1}, seastar::future<void> >(cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long)::{lambda()#1}&&) at ././seastar/include/seastar/core/future.hh:1429
(inlined by) cache::autoupdating_underlying_reader::fast_forward_to(interval<dht::ring_position>&&, mutation_source&, unsigned long) at ././read_context.hh:97
cache::read_context::create_underlying() at ./row_cache.cc:399
single_partition_populating_reader::create_reader() at ./row_cache.cc:421
single_partition_populating_reader::fill_buffer() at ./row_cache.cc:454
flat_mutation_reader_v2::impl::operator()() at ././readers/flat_mutation_reader_v2.hh:194
(inlined by) flat_mutation_reader_v2::operator()() at ././readers/flat_mutation_reader_v2.hh:410
(inlined by) db::view::view_update_builder::advance_all() at ./db/view/view.cc:1240
db::view::view_update_builder::build_some() at ./db/view/view.cc:1270
db::view::view_update_generator::generate_and_propagate_view_updates(replica::table const&, seastar::lw_shared_ptr<schema const> const&, reader_permit, std::vector<db::view::view_and_base, std::allocator<db::view::view_and_base> >&&, mutation&&, seastar::optimized_optional<flat_mutation_reader_v2>, tracing::trace_state_ptr, std::chrono::time_point<gc_clock, std::chrono::duration<long, std::ratio<1l, 1l> > >, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >) at ./db/view/view_update_generator.cc:412
replica::table::do_push_view_replica_updates(seastar::shared_ptr<db::view::view_update_generator>, seastar::lw_shared_ptr<schema const>, mutation, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, mutation_source, tracing::trace_state_ptr, reader_concurrency_semaphore&, enum_set<super_enum<query::partition_slice::option, (query::partition_slice::option)0, (query::partition_slice::option)1, (query::partition_slice::option)2, (query::partition_slice::option)3, (query::partition_slice::option)4, (query::partition_slice::option)5, (query::partition_slice::option)6, (query::partition_slice::option)7, (query::partition_slice::option)8, (query::partition_slice::option)9, (query::partition_slice::option)10, (query::partition_slice::option)11, (query::partition_slice::option)12, (query::partition_slice::option)13> >) const at ./replica/table.cc:3249
replica::table::push_view_replica_updates(seastar::shared_ptr<db::view::view_update_generator>, seastar::lw_shared_ptr<schema const> const&, mutation&&, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, tracing::trace_state_ptr, reader_concurrency_semaphore&) const at ./replica/table.cc:3260
(inlined by) replica::table::push_view_replica_updates(seastar::shared_ptr<db::view::view_update_generator>, seastar::lw_shared_ptr<schema const> const&, frozen_mutation const&, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, tracing::trace_state_ptr, reader_concurrency_semaphore&) const at ./replica/table.cc:3194
replica::database::do_apply(seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>) at ./replica/database.cc:2010
seastar::future<void> std::__invoke_impl<seastar::future<void>, seastar::future<void> (replica::database::* const&)(seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>), replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >(std::__invoke_memfun_deref, seastar::future<void> (replica::database::* const&)(seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>), replica::database*&&, seastar::lw_shared_ptr<schema const>&&, frozen_mutation const&, tracing::trace_state_ptr&&, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >&&, seastar::bool_class<db::force_sync_tag>&&, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>&&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:74
(inlined by) std::__invoke_result<seastar::future<void> (replica::database::* const&)(seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>), replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >::type std::__invoke<seastar::future<void> (replica::database::* const&)(seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>), replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >(seastar::future<void> (replica::database::* const&)(seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, 
db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>), replica::database*&&, seastar::lw_shared_ptr<schema const>&&, frozen_mutation const&, tracing::trace_state_ptr&&, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >&&, seastar::bool_class<db::force_sync_tag>&&, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>&&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:96
(inlined by) _ZNKSt12_Mem_fn_baseIMN7replica8databaseEFN7seastar6futureIvEENS2_13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS2_12lowres_clockENSE_8durationIlSt5ratioILl1ELl1000000000EEEEEENS2_10bool_classIN2db14force_sync_tagEEESt7variantIJSt9monostateNSN_24per_partition_rate_limit12account_onlyENSS_19account_and_enforceEEEELb1EEclIJPS1_S8_SB_SD_SL_SP_SV_EEEDTclsr3stdE8__invokedtdefpT6_M_pmfspclsr3stdE7forwardIT_Efp_EEEDpOS11_ at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/functional:170
(inlined by) seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>::direct_vtable_for<std::_Mem_fn<seastar::future<void> (replica::database::*)(seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)> >::call(seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)> const*, replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>) at ././seastar/include/seastar/util/noncopyable_function.hh:129
seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>::operator()(replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>) const at ././seastar/include/seastar/util/noncopyable_function.hh:215
(inlined by) operator() at ././seastar/include/seastar/core/execution_stage.hh:340
(inlined by) seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>::direct_vtable_for<seastar::inheriting_concrete_execution_stage<seastar::future<void>, replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >::make_stage_for_group(seastar::scheduling_group)::{lambda(replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)#1}>::call(seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)> const*, replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, 
std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>) at ././seastar/include/seastar/util/noncopyable_function.hh:129
seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>::operator()(replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>) const at ././seastar/include/seastar/util/noncopyable_function.hh:215
(inlined by) seastar::future<void> std::__invoke_impl<seastar::future<void>, seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >(std::__invoke_other, seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, replica::database*&&, seastar::lw_shared_ptr<schema const>&&, frozen_mutation const&, tracing::trace_state_ptr&&, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >&&, seastar::bool_class<db::force_sync_tag>&&, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>&&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:61
(inlined by) std::__invoke_result<seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >::type std::__invoke<seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >(seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, 
seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, replica::database*&&, seastar::lw_shared_ptr<schema const>&&, frozen_mutation const&, tracing::trace_state_ptr&&, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >&&, seastar::bool_class<db::force_sync_tag>&&, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>&&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:96
(inlined by) decltype(auto) std::__apply_impl<seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, std::tuple<replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >, 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>(seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, std::tuple<replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >&&, std::integer_sequence<unsigned long, 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/tuple:2288
(inlined by) decltype(auto) std::apply<seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, std::tuple<replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> > >(seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, std::tuple<replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >&&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/tuple:2299
(inlined by) seastar::future<void> seastar::futurize<seastar::future<void> >::apply<seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >(seastar::noncopyable_function<seastar::future<void> (replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>)>&, std::tuple<replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >&&) at ././seastar/include/seastar/core/future.hh:2003
(inlined by) seastar::concrete_execution_stage<seastar::future<void>, replica::database*, seastar::lw_shared_ptr<schema const>, frozen_mutation const&, tracing::trace_state_ptr, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::bool_class<db::force_sync_tag>, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce> >::do_flush() at ././seastar/include/seastar/core/execution_stage.hh:249
operator() at ./build/release/seastar/./seastar/src/core/execution_stage.cc:149
(inlined by) seastar::future<void> seastar::futurize<void>::invoke<seastar::execution_stage::flush()::$_0&>(seastar::execution_stage::flush()::$_0&) at ./build/release/seastar/./seastar/include/seastar/core/future.hh:2032
(inlined by) seastar::lambda_task<seastar::execution_stage::flush()::$_0>::run_and_dispose() at ./build/release/seastar/./seastar/include/seastar/core/make_task.hh:44
seastar::reactor::run_tasks(seastar::reactor::task_queue&) at ./build/release/seastar/./seastar/src/core/reactor.cc:2690
(inlined by) seastar::reactor::run_some_tasks() at ./build/release/seastar/./seastar/src/core/reactor.cc:3152
seastar::reactor::do_run() at ./build/release/seastar/./seastar/src/core/reactor.cc:3320
operator() at ./build/release/seastar/./seastar/src/core/reactor.cc:4563
(inlined by) void std::__invoke_impl<void, seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&>(std::__invoke_other, seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:61
(inlined by) std::enable_if<is_invocable_r_v<void, seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&>, void>::type std::__invoke_r<void, seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&>(seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:111
(inlined by) std::_Function_handler<void (), seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0>::_M_invoke(std::_Any_data const&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/std_function.h:290
std::function<void ()>::operator()() const at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/std_function.h:591
(inlined by) seastar::posix_thread::start_routine(void*) at ./build/release/seastar/./seastar/src/core/posix.cc:90
start_thread at ??:?
__clone3 at :?
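The failure mode (a keyspace dropped out from under an in-flight repair) can be sketched generically. The following is a self-contained toy simulation of that race, not the SCT nemesis code or the Scylla implementation: all names (`FakeCluster`, `repair`, `drop`) are hypothetical, and timing determines whether the repair thread observes a dropped keyspace.

```python
import threading
import time

class KeyspaceNotFound(Exception):
    """Raised when a repair-like task touches a keyspace that was dropped."""

class FakeCluster:
    """Toy stand-in for a cluster: keyspaces map to table lists."""
    def __init__(self, n_keyspaces=10):
        self.lock = threading.Lock()
        self.keyspaces = {f"ks_{i}": ["standard1"] for i in range(n_keyspaces)}

    def repair(self, ks):
        # Check existence under the lock, then do "work" outside it,
        # mimicking a repair session that outlives the schema check.
        with self.lock:
            if ks not in self.keyspaces:
                raise KeyspaceNotFound(f"keyspace {ks} dropped mid-repair")
        time.sleep(0.001)

    def drop(self, ks):
        with self.lock:
            self.keyspaces.pop(ks, None)

cluster = FakeCluster()
errors = []

def repair_all():
    # Iterate over a snapshot of keyspace names, like a background
    # `nodetool repair` walking keyspaces while DDL runs concurrently.
    for ks in list(cluster.keyspaces):
        try:
            cluster.repair(ks)
        except KeyspaceNotFound as exc:
            errors.append(str(exc))

repair_thread = threading.Thread(target=repair_all)
repair_thread.start()
for ks in list(cluster.keyspaces):
    cluster.drop(ks)
repair_thread.join()
# Depending on scheduling, some repair steps see the dropped keyspace.
```

This only illustrates the interleaving; in the real issue the repair coordinator and the schema-change path race inside Scylla itself, and the outcome should be a clean abort of the repair session rather than a failed `DROP` or a coredump.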
Impact
How frequently does it reproduce?
Installation details
Cluster size: 5 nodes (i4i.8xlarge)
Scylla Nodes used in this run:
- longevity-mv-si-4d-6-0-db-node-bb37566d-6 (54.216.125.36 | 10.4.2.245) (shards: 30)
- longevity-mv-si-4d-6-0-db-node-bb37566d-5 (3.252.167.127 | 10.4.1.167) (shards: 30)
- longevity-mv-si-4d-6-0-db-node-bb37566d-4 (54.72.22.19 | 10.4.0.124) (shards: 30)
- longevity-mv-si-4d-6-0-db-node-bb37566d-3 (3.253.88.137 | 10.4.2.51) (shards: 30)
- longevity-mv-si-4d-6-0-db-node-bb37566d-2 (34.246.134.185 | 10.4.1.134) (shards: 30)
- longevity-mv-si-4d-6-0-db-node-bb37566d-1 (3.250.65.243 | 10.4.3.208) (shards: 30)
OS / Image: ami-0285417768d770308 (aws: undefined_region)
Test: longevity-mv-si-4days-test
Test id: bb37566d-e973-4868-88f0-afa770fdca9b
Test name: scylla-6.0/tier1/longevity-mv-si-4days-test
Test config file(s):
Logs and commands
- Restore Monitor Stack command:
$ hydra investigate show-monitor bb37566d-e973-4868-88f0-afa770fdca9b - Restore monitor on AWS instance using Jenkins job
- Show all stored logs command:
$ hydra investigate show-logs bb37566d-e973-4868-88f0-afa770fdca9b
Logs:
- core.scylla-longevity-mv-si-4d-6-0-db-node-bb37566d-1-2024-06-07_04-23-18.gz - https://storage.cloud.google.com/upload.scylladb.com/core.scylla.112.d7396791c6394b94b3a0ded6fc9a3185.31343.1717729903000000/core.scylla.112.d7396791c6394b94b3a0ded6fc9a3185.31343.1717729903000000.gz
- core.scylla-longevity-mv-si-4d-6-0-db-node-bb37566d-2-2024-06-07_04-46-32.gz - https://storage.cloud.google.com/upload.scylladb.com/core.scylla.112.de29574b384042deb7addf38cc40f307.13639.1717729891000000/core.scylla.112.de29574b384042deb7addf38cc40f307.13639.1717729891000000.gz
- core.scylla-longevity-mv-si-4d-6-0-db-node-bb37566d-6-2024-06-07_05-36-37.gz - https://storage.cloud.google.com/upload.scylladb.com/core.scylla.112.8a2090b394fc4cfba393636fa7ea075e.7261.1717737992000000/core.scylla.112.8a2090b394fc4cfba393636fa7ea075e.7261.1717737992000000.gz
- core.scylla-longevity-mv-si-4d-6-0-db-node-bb37566d-6-2024-06-07_06-39-36.gz - https://storage.cloud.google.com/upload.scylladb.com/core.scylla.112.8a2090b394fc4cfba393636fa7ea075e.9393.1717740731000000/core.scylla.112.8a2090b394fc4cfba393636fa7ea075e.9393.1717740731000000.gz
- db-cluster-bb37566d.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/bb37566d-e973-4868-88f0-afa770fdca9b/20240607_064451/db-cluster-bb37566d.tar.gz
- sct-runner-events-bb37566d.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/bb37566d-e973-4868-88f0-afa770fdca9b/20240607_064451/sct-runner-events-bb37566d.tar.gz
- sct-bb37566d.log.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/bb37566d-e973-4868-88f0-afa770fdca9b/20240607_064451/sct-bb37566d.log.tar.gz
- loader-set-bb37566d.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/bb37566d-e973-4868-88f0-afa770fdca9b/20240607_064451/loader-set-bb37566d.tar.gz
- monitor-set-bb37566d.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/bb37566d-e973-4868-88f0-afa770fdca9b/20240607_064451/monitor-set-bb37566d.tar.gz
The issue reproduced again.
That's a different issue. I opened a separate issue for that (https://github.com/scylladb/scylladb/issues/19207)