C-s load consistently timed out in the longevity-10gb-3h-gce test
C-s load consistently timed out in the longevity-10gb-3h-gce test, even though its soft timeout was increased in https://github.com/scylladb/scylla-cluster-tests/pull/7108. See the discussion here: https://argus.scylladb.com/test/23a9f4cc-4d94-42d4-a427-6ece4fd9a487/runs?additionalRuns[]=0899a8dd-bbd3-45c3-a8b8-df838402ca9d
It looks like we need to investigate this.
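For context, a "soft" timeout in this sense only reports that the load ran longer than expected instead of killing it. The sketch below illustrates that idea in Python; the function name, the cassandra-stress arguments, and the 3h20m threshold are all hypothetical and do not reflect SCT's actual code or the value changed in the PR above.

```python
import subprocess
import time


def run_load_with_soft_timeout(cmd: list[str], soft_timeout_sec: int) -> int:
    """Run a stress command; if it outlives soft_timeout_sec, report it but keep it running."""
    start = time.monotonic()
    proc = subprocess.Popen(cmd)
    reported = False
    while proc.poll() is None:
        if not reported and time.monotonic() - start > soft_timeout_sec:
            # In SCT this would surface as an error event in Argus; here we just print.
            print(f"soft timeout exceeded after {soft_timeout_sec}s, load still running")
            reported = True
        time.sleep(10)
    return proc.returncode


if __name__ == "__main__":
    # Hypothetical cassandra-stress invocation; the real command comes from the test config.
    rc = run_load_with_soft_timeout(
        ["cassandra-stress", "mixed", "duration=180m", "-rate", "threads=100"],
        soft_timeout_sec=3 * 60 * 60 + 20 * 60,  # e.g. 3h20m; an assumed value
    )
    print(f"load finished with exit code {rc}")
```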
Packages
Scylla version: 2023.1.5-20240213.08fd6aec7a43
with build-id 448979e99e198eeab4a3b0e1b929397d337d2724
Kernel Version: 5.15.0-1051-gcp
Issue description
- [ ] This issue is a regression.
- [ ] It is unknown if this issue is a regression.
Describe your issue in detail and the steps taken to reproduce it.
Impact
Describe the impact this issue causes to the user.
How frequently does it reproduce?
Describe how frequently this issue can be reproduced.
Installation details
Cluster size: 6 nodes (n2-highmem-16)
Scylla Nodes used in this run:
- longevity-10gb-3h-2023-1-db-node-903763b1-0-8 (34.73.78.45 | 10.142.0.77) (shards: 14)
- longevity-10gb-3h-2023-1-db-node-903763b1-0-7 (35.196.25.245 | 10.142.0.111) (shards: 14)
- longevity-10gb-3h-2023-1-db-node-903763b1-0-6 (34.74.38.182 | 10.142.0.86) (shards: 14)
- longevity-10gb-3h-2023-1-db-node-903763b1-0-5 (34.138.54.195 | 10.142.0.85) (shards: 14)
- longevity-10gb-3h-2023-1-db-node-903763b1-0-4 (34.148.3.49 | 10.142.0.84) (shards: 14)
- longevity-10gb-3h-2023-1-db-node-903763b1-0-3 (35.196.203.88 | 10.142.0.83) (shards: 14)
- longevity-10gb-3h-2023-1-db-node-903763b1-0-2 (104.196.15.175 | 10.142.0.63) (shards: 14)
- longevity-10gb-3h-2023-1-db-node-903763b1-0-1 (34.73.114.130 | 10.142.0.60) (shards: 14)
OS / Image: https://www.googleapis.com/compute/v1/projects/scylla-images/global/images/1433372650157216341
(gce: undefined_region)
Test: longevity-10gb-3h-gce-test
Test id: 903763b1-3b7c-488b-8b96-589d05ec5d31
Test name: enterprise-2023.1/longevity/longevity-10gb-3h-gce-test
Test config file(s):
Logs and commands
- Restore Monitor Stack command:
$ hydra investigate show-monitor 903763b1-3b7c-488b-8b96-589d05ec5d31
- Restore monitor on AWS instance using Jenkins job
- Show all stored logs command:
$ hydra investigate show-logs 903763b1-3b7c-488b-8b96-589d05ec5d31
Logs:
- db-cluster-903763b1.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/903763b1-3b7c-488b-8b96-589d05ec5d31/20240214_190342/db-cluster-903763b1.tar.gz
- sct-runner-events-903763b1.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/903763b1-3b7c-488b-8b96-589d05ec5d31/20240214_190342/sct-runner-events-903763b1.tar.gz
- sct-903763b1.log.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/903763b1-3b7c-488b-8b96-589d05ec5d31/20240214_190342/sct-903763b1.log.tar.gz
- loader-set-903763b1.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/903763b1-3b7c-488b-8b96-589d05ec5d31/20240214_190342/loader-set-903763b1.tar.gz
- monitor-set-903763b1.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/903763b1-3b7c-488b-8b96-589d05ec5d31/20240214_190342/monitor-set-903763b1.tar.gz
- parallel-timelines-report-903763b1.tar.gz - https://cloudius-jenkins-test.s3.amazonaws.com/903763b1-3b7c-488b-8b96-589d05ec5d31/20240214_190342/parallel-timelines-report-903763b1.tar.gz
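A minimal sketch for pulling and unpacking the archives listed above locally with stock Python; the subset of archives and the target directory name are arbitrary choices, not part of the SCT tooling.

```python
import tarfile
import urllib.request
from pathlib import Path

TEST_ID = "903763b1-3b7c-488b-8b96-589d05ec5d31"
BASE = f"https://cloudius-jenkins-test.s3.amazonaws.com/{TEST_ID}/20240214_190342"
ARCHIVES = [
    "db-cluster-903763b1.tar.gz",
    "sct-903763b1.log.tar.gz",
    "loader-set-903763b1.tar.gz",
    "monitor-set-903763b1.tar.gz",
]

out_dir = Path("sct-logs-903763b1")
out_dir.mkdir(exist_ok=True)

for name in ARCHIVES:
    archive_path = out_dir / name
    # Download the archive, then extract it into its own subdirectory.
    urllib.request.urlretrieve(f"{BASE}/{name}", str(archive_path))
    with tarfile.open(archive_path) as tar:
        tar.extractall(out_dir / name.removesuffix(".tar.gz"))
    print(f"extracted {name}")
```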
@juliayakovlev
This looks like https://github.com/scylladb/java-driver/issues/258