`Unable to complete the operation against any hosts` and `Host has been marked down or removed` on simple query
Driver version: 3.29.5
The code is simple: just create a session and run a query. The test was run with FIPS and encryption enabled.
with self.cluster.cql_connection_patient(self.target_node) as session:
    query_result = session.execute('SELECT keyspace_name FROM system_schema.keyspaces;')
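A standalone reproducer sketch, using the public cassandra-driver API directly rather than the SCT `cql_connection_patient` wrapper (the function name and the placeholder contact point below are assumptions, not from the test code):

```python
# Assumed minimal repro sketch using the plain cassandra-driver API,
# not the SCT cql_connection_patient wrapper from the report.

def fetch_keyspace_names(session):
    """Run the query that fails in the report; return the keyspace names."""
    rows = session.execute('SELECT keyspace_name FROM system_schema.keyspaces;')
    return [row.keyspace_name for row in rows]

# Against a live cluster it would be used roughly like this (placeholder IP):
#     from cassandra.cluster import Cluster
#     with Cluster(['10.4.1.84']).connect() as session:
#         print(fetch_keyspace_names(session))
```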
Error
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 2074, in _get_all_tables_with_no_compact_storage
query_result = session.execute('SELECT keyspace_name FROM system_schema.keyspaces;')
File "cassandra/cluster.py", line 2679, in cassandra.cluster.Session.execute
File "cassandra/cluster.py", line 5054, in cassandra.cluster.ResponseFuture.result
cassandra.cluster.NoHostAvailable: ('Unable to complete the operation against any hosts', {<Host: 10.4.1.84:9042 eu-west-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.4.1.3:9042 eu-west-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.4.2.116:9042 eu-west-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.4.1.31:9042 eu-west-1>: ConnectionException('Host has been marked down or removed')})
In node logs there is no indication that any node sees another node as down.
Reproduced multiple times (example1, example2, example3), but not every time (example4).
Scylla version: 2026.1.0~dev-20251208.a213e41250df with build-id e914baa69768b13de7b77c016533dd2edd6df40f
Kernel Version: 5.4.0-1021-aws-fips
Extra information
Installation details
Cluster size: 6 nodes (i4i.4xlarge)
Scylla Nodes used in this run:
- longevity-fips-master-db-node-aec4e38c-1 (3.252.199.184 | 10.4.3.157) (shards: 14)
- longevity-fips-master-db-node-aec4e38c-2 (54.246.41.230 | 10.4.1.31) (shards: 14)
- longevity-fips-master-db-node-aec4e38c-3 (3.253.18.89 | 10.4.3.24) (shards: 14)
- longevity-fips-master-db-node-aec4e38c-4 (34.246.223.89 | 10.4.1.3) (shards: 14)
- longevity-fips-master-db-node-aec4e38c-5 (3.250.35.76 | 10.4.2.116) (shards: 14)
- longevity-fips-master-db-node-aec4e38c-6 (54.154.41.223 | 10.4.1.84) (shards: 14)
OS / Image: ami-0a1dfe766d508f280 (aws: N/A)
Test: longevity-100gb-4h-fips-test
Test id: aec4e38c-f1a6-4104-8ef1-2dab603563d4
Test name: scylla-master/features/FIPS/longevity-100gb-4h-fips-test
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):
Logs:
In the logs I see a lot of bad file descriptor errors, so it may be just a duplicate of https://github.com/scylladb/python-driver/issues/614
No, that one is regarding ControlConnection reconnection.
@cezarmoise, please include the whole stack trace next time, not just a piece of it:
sdcm.nemesis.SisyphusMonkey: Unhandled exception in method <function Nemesis.disrupt_add_drop_column at 0x78104855e140>
Traceback (most recent call last):
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 5835, in wrapper
result = method(*args, **kwargs)
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 2510, in disrupt_add_drop_column
self._add_drop_column_run_in_cycle()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 2212, in _add_drop_column_run_in_cycle
self._add_drop_column()
~~~~~~~~~~~~~~~~~~~~~^^
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 2175, in _add_drop_column
self._add_drop_column_target_table = self._add_drop_column_get_target_table(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self._add_drop_column_target_table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 2099, in _add_drop_column_get_target_table
current_tables = self._get_all_tables_with_no_compact_storage(self._add_drop_column_tables_to_ignore)
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 2074, in _get_all_tables_with_no_compact_storage
query_result = session.execute('SELECT keyspace_name FROM system_schema.keyspaces;')
File "cassandra/cluster.py", line 2679, in cassandra.cluster.Session.execute
File "cassandra/cluster.py", line 5054, in cassandra.cluster.ResponseFuture.result
cassandra.cluster.NoHostAvailable: ('Unable to complete the operation against any hosts', {<Host: 10.4.1.84:9042 eu-west-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.4.1.3:9042 eu-west-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.4.2.116:9042 eu-west-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.4.1.31:9042 eu-west-1>: ConnectionException('Host has been marked down or removed')})
The clue is probably here:
< t:2025-12-09 00:59:10,844 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > Connecting to cluster, contact points: ['10.4.3.157', '10.4.1.31', '10.4.3.24', '10.4.1.3', '10.4.2.116', '10.4.1.84']; protocol version: 3
< t:2025-12-09 00:59:10,844 f:cluster.py l:3723 c:cassandra.pool p:DEBUG > Host 10.4.3.157:9042 is now marked up
< t:2025-12-09 00:59:10,844 f:cluster.py l:3723 c:cassandra.pool p:DEBUG > Host 10.4.1.31:9042 is now marked up
< t:2025-12-09 00:59:10,844 f:cluster.py l:3723 c:cassandra.pool p:DEBUG > Host 10.4.3.24:9042 is now marked up
< t:2025-12-09 00:59:10,844 f:cluster.py l:3723 c:cassandra.pool p:DEBUG > Host 10.4.1.3:9042 is now marked up
< t:2025-12-09 00:59:10,844 f:cluster.py l:3723 c:cassandra.pool p:DEBUG > Host 10.4.2.116:9042 is now marked up
< t:2025-12-09 00:59:10,844 f:cluster.py l:3723 c:cassandra.pool p:DEBUG > Host 10.4.1.84:9042 is now marked up
....
< t:2025-12-09 00:59:10,891 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Established new connection <LibevConnection(132010748001360) 10.4.3.157:9042>, registering watchers and refreshing schema and topology
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Refreshing node list and token map using preloaded results
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Removing host not found in peers metadata: <Host: 10.4.3.157:9042 eu-west-1>
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Removing host not found in peers metadata: <Host: 10.4.1.31:9042 eu-west-1>
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Removing host not found in peers metadata: <Host: 10.4.3.24:9042 eu-west-1>
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Removing host not found in peers metadata: <Host: 10.4.1.3:9042 eu-west-1>
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Removing host not found in peers metadata: <Host: 10.4.2.116:9042 eu-west-1>
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Removing host not found in peers metadata: <Host: 10.4.1.84:9042 eu-west-1>
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Finished fetching ring info
< t:2025-12-09 00:59:10,895 f:cluster.py l:3723 c:cassandra.cluster p:DEBUG > [control connection] Rebuilding token map due to topology changes
Somehow the driver got empty system.local and system.peers results and ended up dropping all the node pools.
And right underneath it there is the following:
< t:2025-12-09 00:59:10,899 f:libevreactor.py l:292 c:cassandra.io.libevreactor p:DEBUG > Closing connection (132010748001360) to 10.4.3.157:9042
< t:2025-12-09 00:59:10,899 f:libevreactor.py l:296 c:cassandra.io.libevreactor p:DEBUG > Closed socket to 10.4.3.157:9042
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > [control connection] Error connecting to 10.4.3.157:9042:
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > Traceback (most recent call last):
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/cluster.py", line 3546, in cassandra.cluster.ControlConnection._connect_host_in_lbp
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/cluster.py", line 3662, in cassandra.cluster.ControlConnection._try_connect
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/cluster.py", line 3659, in cassandra.cluster.ControlConnection._try_connect
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/cluster.py", line 3761, in cassandra.cluster.ControlConnection._refresh_schema
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/metadata.py", line 147, in cassandra.metadata.Metadata.refresh
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/metadata.py", line 165, in cassandra.metadata.Metadata._rebuild_all
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/metadata.py", line 2610, in get_all_keyspaces
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/metadata.py", line 2087, in get_all_keyspaces
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/metadata.py", line 2832, in cassandra.metadata.SchemaParserV3._query_all
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > File "cassandra/metadata.py", line 2011, in cassandra.metadata._SchemaParser._handle_results
< t:2025-12-09 00:59:10,900 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > cassandra.connection.ConnectionShutdown: [Errno 9] Bad file descriptor
< t:2025-12-09 00:59:10,901 f:cluster.py l:3723 c:cassandra.cluster p:WARNING > Host 10.4.3.157:9042 has been marked down
Not sure if it is related to this issue.
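The "Removing host not found in peers metadata" behavior in the log above can be illustrated with a simplified sketch. This is not the driver's real implementation, just a model of the refresh logic: if the preloaded system.local/system.peers results come back empty, every known host is "not found" and gets removed.

```python
# Illustrative sketch only -- NOT the driver's actual refresh code.
def refresh_node_list(known_hosts, peers_rows, local_row):
    """Return the hosts that survive a metadata refresh.

    peers_rows: list of dicts with an 'rpc_address' key (one per peer).
    local_row: dict for the contacted node itself, or None.

    With empty peers_rows and no local_row, nothing is 'found', so every
    known host is removed -- matching the log lines above.
    """
    found = {row["rpc_address"] for row in peers_rows}
    if local_row is not None:
        found.add(local_row["rpc_address"])
    return [host for host in known_hosts if host in found]
```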
Scylla version: 2026.1.0~dev-20251211.f7ffa395a8fd with build-id 6ed9dbb170d6894329ed88a93e118dd68cbd62a9
The logs are full of the following entries:
< t:2025-12-13 05:56:48,977 f:file_logger.py l:101 c:sdcm.sct_events.file_logger p:INFO > 2025-12-13 05:56:48.970: (FullScanAggregateEvent Severity.NORMAL) period_type=end event_id=deca7903-92ab-4644-adc8-e61bfc4d6a1c duration=42s node=longevity-50gb-12h-master-db-node-70283809-6 select_from=keyspace1.standard1 message=FullScanAggregatesOperation operation ended successfully: result Row(count=247387913)
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > Thread stats:
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > +------------------------+--------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------+-------------------------+---------+---------------------------------------------------------------------------+
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | op_type | duration | exceptions | nemesis_at_start | nemesis_at_end | success | cmd |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > +------------------------+--------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------+-------------------------+---------+---------------------------------------------------------------------------+
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanAggregateEvent | 21.70745015144348 | | None | None | True | SELECT count(*) FROM keyspace1.standard1 BYPASS CACHE USING TIMEOUT 1800s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanEvent | 31.429630756378174 | NoHostAvailable('Unable to complete the operation against any hosts', {<Host: 10.12.10.198:9042 us-east-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.12.8.104:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.66:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.9.87:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.228:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.9.171:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor')}) | None | None | False | SELECT * from keyspace1.standard1 BYPASS CACHE USING TIMEOUT 300s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanAggregateEvent | 23.213011026382446 | | None | None | True | SELECT count(*) FROM keyspace1.standard1 BYPASS CACHE USING TIMEOUT 1800s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanEvent | 1.1222093105316162 | NoHostAvailable('Unable to complete the operation against any hosts', {<Host: 10.12.9.171:9042 us-east-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.12.10.198:9042 us-east-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.12.8.66:9042 us-east-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.12.9.87:9042 us-east-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.12.8.104:9042 us-east-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.12.8.228:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor')}) | None | None | False | SELECT * from keyspace1.standard1 BYPASS CACHE USING TIMEOUT 300s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanAggregateEvent | 24.206324815750122 | | None | None | True | SELECT count(*) FROM keyspace1.standard1 BYPASS CACHE USING TIMEOUT 1800s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanAggregateEvent | 24.87922716140747 | | None | None | True | SELECT count(*) FROM keyspace1.standard1 BYPASS CACHE USING TIMEOUT 1800s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanEvent | 64.28893303871155 | NoHostAvailable('Unable to complete the operation against any hosts', {<Host: 10.12.9.87:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.104:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.228:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.9.171:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.10.198:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.66:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor')}) | None | None | False | SELECT * from keyspace1.standard1 BYPASS CACHE USING TIMEOUT 300s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanEvent | 31.928694009780884 | NoHostAvailable('Unable to complete the operation against any hosts', {<Host: 10.12.8.66:9042 us-east-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.12.9.87:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.104:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.228:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.9.171:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.10.198:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor')}) | None | None | False | SELECT * from keyspace1.standard1 BYPASS CACHE USING TIMEOUT 300s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanEvent | 31.986307859420776 | NoHostAvailable('Unable to complete the operation against any hosts', {<Host: 10.12.10.198:9042 us-east-1>: ConnectionException('Host has been marked down or removed'), <Host: 10.12.8.104:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.66:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.9.87:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.228:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.9.171:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor')}) | None | None | False | SELECT * from keyspace1.standard1 BYPASS CACHE USING TIMEOUT 300s |
< t:2025-12-13 05:56:48,971 f:operations_thread.py l:157 c:ScanOperationThread p:DEBUG > | FullScanEvent | 64.32574701309204 | NoHostAvailable('Unable to complete the operation against any hosts', {<Host: 10.12.9.171:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.10.198:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.66:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.9.87:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.104:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor'), <Host: 10.12.8.228:9042 us-east-1>: ConnectionShutdown('[Errno 9] Bad file descriptor')}) | None | None | False | SELECT * from keyspace1.standard1 BYPASS CACHE USING TIMEOUT 300s |
Also, during the whole run the driver kept trying to connect to node1 and reported:
Connection error: ('Unable to connect to any servers', {'10.12.9.87:9042': OperationTimedOut('errors=Timed out creating connection (60 seconds), last_host=None')})
But the node was alive.
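The per-host error dumps in these `NoHostAvailable` messages mix `ConnectionException` and `ConnectionShutdown` entries. A small triage helper (hypothetical, not part of SCT or the driver) that groups the failing hosts by exception class makes the pattern easier to see:

```python
import re
from collections import defaultdict

# Hypothetical helper for triaging NoHostAvailable dumps like the ones
# above: group the failing hosts by the exception class the driver
# reported for each of them.
def summarize_no_host_available(message):
    """Map exception class name -> sorted list of 'host:port' strings."""
    groups = defaultdict(list)
    # Matches e.g. "<Host: 10.12.9.87:9042 us-east-1>: ConnectionShutdown("
    for host, exc in re.findall(r"<Host: (\S+) [^>]*>: (\w+)\(", message):
        groups[exc].append(host)
    return {exc: sorted(hosts) for exc, hosts in groups.items()}
```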
Kernel Version: 6.14.0-1018-aws
Extra information
Installation details
Cluster size: 6 nodes (i7i.2xlarge)
Scylla Nodes used in this run:
- longevity-50gb-12h-master-db-node-70283809-1 (13.218.127.161 | 10.12.9.87) (shards: 4)
- longevity-50gb-12h-master-db-node-70283809-2 (54.91.187.43 | 10.12.8.66) (shards: 4)
- longevity-50gb-12h-master-db-node-70283809-3 (18.212.86.250 | 10.12.8.228) (shards: 6)
- longevity-50gb-12h-master-db-node-70283809-4 (54.92.211.214 | 10.12.10.198) (shards: 6)
- longevity-50gb-12h-master-db-node-70283809-5 (98.84.134.241 | 10.12.8.104) (shards: 5)
- longevity-50gb-12h-master-db-node-70283809-6 (54.160.211.56 | 10.12.9.171) (shards: 4)
OS / Image: ami-02ad235f4c4336f6c (aws: N/A)
Test: longevity-150gb-asymmetric-cluster-12h-test
Test id: 70283809-37aa-4be5-9ebc-d891e1a2d6aa
Test name: scylla-master/tier1/longevity-150gb-asymmetric-cluster-12h-test
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):
Logs:
Scylla version: 2026.1.0~dev-20251219.f65db4e8eba5 with build-id 683ff5b7a4a313ea6094e72fd639c906693ece37
Kernel Version: 6.14.0-1018-aws
Extra information
Installation details
Cluster size: 6 nodes (i7i.4xlarge)
Scylla Nodes used in this run:
- longevity-tls-50gb-3d-master-db-node-c6beb17a-1 (98.87.193.30 | 10.12.35.220) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-2 (52.6.69.201 | 10.12.34.173) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-3 (100.49.143.61 | 10.12.32.22) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-4 (52.203.20.179 | 10.12.34.56) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-5 (44.193.182.223 | 10.12.32.49) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-6 (50.17.245.62 | 10.12.34.86) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-7 (44.209.62.120 | 10.12.32.166) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-8 (54.152.201.38 | 10.12.33.224) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-9 (98.95.22.145 | 10.12.33.230) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-10 (3.231.75.179 | 10.12.32.60) (shards: 14)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-11 (100.49.20.125 | 10.12.33.136) (shards: -1)
- longevity-tls-50gb-3d-master-db-node-c6beb17a-12 (3.215.138.198 | 10.12.34.73) (shards: 14)
OS / Image: ami-048249cf3c5bfc84f (aws: N/A)
Test: longevity-50gb-3days-test
Test id: c6beb17a-d0b9-43b6-ad05-2fbd45c4201d
Test name: scylla-master/tier1/longevity-50gb-3days-test
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):
Logs:
Scylla version: 2026.1.0~dev-20251219.f65db4e8eba5 with build-id 683ff5b7a4a313ea6094e72fd639c906693ece37
Kernel Version: 6.14.0-1018-aws
Extra information
Installation details
Cluster size: 6 nodes (i7i.2xlarge)
Scylla Nodes used in this run:
- longevity-50gb-12h-master-db-node-e2d8a05c-1 (34.201.94.66 | 10.12.8.210) (shards: 6)
- longevity-50gb-12h-master-db-node-e2d8a05c-2 (18.234.51.11 | 10.12.8.248) (shards: 6)
- longevity-50gb-12h-master-db-node-e2d8a05c-3 (52.54.112.48 | 10.12.10.136) (shards: 5)
- longevity-50gb-12h-master-db-node-e2d8a05c-4 (13.222.190.127 | 10.12.11.124) (shards: 7)
- longevity-50gb-12h-master-db-node-e2d8a05c-5 (54.145.225.121 | 10.12.10.222) (shards: 7)
- longevity-50gb-12h-master-db-node-e2d8a05c-6 (18.208.221.26 | 10.12.8.44) (shards: 4)
OS / Image: ami-048249cf3c5bfc84f (aws: N/A)
Test: longevity-150gb-asymmetric-cluster-12h-test
Test id: e2d8a05c-55b0-4025-b3bb-00712401b844
Test name: scylla-master/tier1/longevity-150gb-asymmetric-cluster-12h-test
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):
Logs:
It should be fixed by https://github.com/scylladb/python-driver/pull/623; a new version has not been released yet.