
Repair task failed after an hour with zero token nodes in multi dc configuration

Open aleksbykov opened this issue 1 year ago • 5 comments

Packages

Scylla version: 6.2.0-20241013.b8a9fd4e49e8 with build-id a61f658b0408ba10663812f7a3b4d6aea7714fac

Kernel Version: 6.8.0-1016-aws

Scylla Manager Agent: 3.3.3-0.20240912.924034e0d

Issue description

The cluster is configured with zero-token nodes in a multi-DC setup: DC "eu-west-1" has 3 data nodes, DC "eu-west-2" has 3 data nodes and 1 zero-token node, and DC "eu-north-1" has 1 zero-token node.
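
For context, a minimal sketch of how a zero-token node is typically set up (assumption: the test relies on the standard ScyllaDB 6.x mechanism of join_ring: false in scylla.yaml; the exact test configuration may differ):

# on each zero-token node, before it joins the cluster
$ echo "join_ring: false" | sudo tee -a /etc/scylla/scylla.yaml
$ sudo systemctl restart scylla-server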

The 'disrupt_mgmt_corrupt_then_repair' nemesis failed. This nemesis stops Scylla, removes several sstables, starts Scylla again, and then triggers a repair from Scylla Manager. The nemesis chose node4 (a data node) as the target: it removed sstables while Scylla was stopped, and once Scylla was back up it triggered a repair from Scylla Manager (an approximate manual reproduction sketch follows the progress table below). The repair task failed after about an hour:

sdcm.mgmt.common.ScyllaManagerError: Task: repair/362a4112-02b8-47f3-ae49-49c47600de51 final status is: ERROR.
Task progress string: Run:		a1edb893-8c11-11ef-bb82-0a7de1e926c3
Status:		ERROR
Cause:		see more errors in logs: master 10.4.2.208 keyspace keyspace1 table standard1 command 6: status FAILED
Start time:	16 Oct 24 22:54:43 UTC
End time:	17 Oct 24 00:06:52 UTC
Duration:	1h12m9s
Progress:	0%/99%
Intensity:	1
Parallel:	0
Datacenters:	
  - eu-northscylla_node_north
  - eu-west-2scylla_node_west
  - eu-westscylla_node_west

╭───────────────────────────────┬────────────────────────────────┬──────────┬──────────╮
│ Keyspace                      │                          Table │ Progress │ Duration │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ keyspace1                     │                      standard1 │ 0%/100%  │ 1h11m50s │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ system_distributed_everywhere │ cdc_generation_descriptions_v2 │ 100%     │ 0s       │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ system_distributed            │      cdc_generation_timestamps │ 100%     │ 0s       │
│ system_distributed            │    cdc_streams_descriptions_v2 │ 100%     │ 0s       │
│ system_distributed            │                 service_levels │ 100%     │ 0s       │
│ system_distributed            │              view_build_status │ 100%     │ 0s       │
╰───────────────────────────────┴────────────────────────────────┴──────────┴──────────╯
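
Approximate manual reproduction of the nemesis flow (a rough sketch only; the sstable path and the cluster/task placeholders are assumptions, and the actual nemesis drives these steps through SCT's sdcm code):

# on the target data node (node4)
$ sudo systemctl stop scylla-server
$ sudo rm /var/lib/scylla/data/keyspace1/standard1-*/*-Data.db   # remove several sstable files
$ sudo systemctl start scylla-server
# then trigger and watch the repair from Scylla Manager
$ sctool repair --cluster <cluster-name>
$ sctool progress --cluster <cluster-name> repair/<task-id>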

Next error found in scylla manager log in "monitor-set-2bc4de73.tar.gz":

Oct 17 00:06:41 multi-dc-rackaware-with-znode-dc-fe-monitor-node-2bc4de73-1 scylla-manager[7935]: {"L":"ERROR","T":"2024-10-17T00:06:41.197Z","N":"repair.keyspace1.standard1","M":"Repair failed","error":"master 10.4.2.208 keyspace keyspace1 table standard1 command 6: status FAILED","_trace_id":"MQddNqAdRnuC207sElnpJg","errorStack":"github.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).runRepair.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:58\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).runRepair\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:100\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).HandleJob\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:30\ngithub.com/scylladb/scylla-manager/v3/pkg/util/workerpool.(*Pool[...]).spawn.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/[email protected]/workerpool/pool.go:99\nruntime.goexit\n\truntime/asm_amd64.s:1695\n","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/[email protected]/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/[email protected]/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*tableGenerator).processResult\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:334\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*tableGenerator).Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:219\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*generator).Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:148\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*Service).Repair\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/service.go:304\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.Runner.Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/runner.go:26\ngithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler.PolicyRunner.Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler/policy.go:32\ngithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler.(*Service).run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler/service.go:448\ngithub.com/scylladb/scylla-manager/v3/pkg/scheduler.(*Scheduler[...]).asyncRun.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/scheduler/scheduler.go:401"}

This could be related to the zero-token nodes in the configuration.
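
A possible follow-up check (assumption: 10.4.2.208 / node-2 is the repair master named in the error and Scylla runs under the scylla-server systemd unit):

# on the repair master node, look for the Scylla-side reason command 6 failed
$ sudo journalctl -u scylla-server | grep -iE 'repair.*(fail|error)'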

Impact

The repair process triggered from Scylla Manager fails.

Installation details

Cluster size: 6 nodes (i4i.4xlarge)

Scylla Nodes used in this run:

  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-1 (52.17.239.72 | 10.4.1.1) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2 (52.30.16.60 | 10.4.2.208) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-3 (34.244.15.201 | 10.4.2.21) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-4 (35.179.142.180 | 10.3.0.73) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-5 (35.177.188.187 | 10.3.1.136) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-6 (35.177.134.180 | 10.3.1.62) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-7 (35.177.11.239 | 10.3.1.229) (shards: 4)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-8 (13.61.14.77 | 10.0.0.60) (shards: 4)

OS / Image: ami-01f5cd2cb7c8dbd6f ami-0a32db7034cf41d95 ami-0b2b4e9fba26c7618 (aws: undefined_region)

Test: longevity-multi-dc-rack-aware-zero-token-dc
Test id: 2bc4de73-4328-4444-b601-6bd88060fa4d
Test name: scylla-staging/abykov/longevity-multi-dc-rack-aware-zero-token-dc
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 2bc4de73-4328-4444-b601-6bd88060fa4d
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 2bc4de73-4328-4444-b601-6bd88060fa4d

Logs:

  • Jenkins job URL
  • Argus

aleksbykov · Oct 23 '24 09:10