Race condition for multiple concurrent backups
Description
When recovering from #859, multiple backup pods are started simultaneously. This in itself is not a problem. However, when the backup finishes, an error is thrown.
Additional Context
I believe this is cosmetic, as the restic repository appears to be intact. As such, this should be safe to ignore?
Logs
2023-05-25T06:54:40Z ERROR k8up.restic.restic cannot sync snapshots to the cluster {"error": "snapshots.k8up.io \"c0ba34a1\" already exists"}
github.com/k8up-io/k8up/v2/restic/cli.(*Restic).sendSnapshotList
/home/runner/work/k8up/k8up/restic/cli/backup.go:118
github.com/k8up-io/k8up/v2/restic/cli.(*Restic).Backup
/home/runner/work/k8up/k8up/restic/cli/backup.go:46
github.com/k8up-io/k8up/v2/cmd/restic.doBackup
/home/runner/work/k8up/k8up/cmd/restic/main.go:234
github.com/k8up-io/k8up/v2/cmd/restic.run
/home/runner/work/k8up/k8up/cmd/restic/main.go:129
github.com/k8up-io/k8up/v2/cmd/restic.resticMain
/home/runner/work/k8up/k8up/cmd/restic/main.go:113
github.com/urfave/cli/v2.(*Command).Run
/home/runner/go/pkg/mod/github.com/urfave/cli/[email protected]/command.go:271
github.com/urfave/cli/v2.(*Command).Run
/home/runner/go/pkg/mod/github.com/urfave/cli/[email protected]/command.go:264
github.com/urfave/cli/v2.(*App).RunContext
/home/runner/go/pkg/mod/github.com/urfave/cli/[email protected]/app.go:333
github.com/urfave/cli/v2.(*App).Run
/home/runner/go/pkg/mod/github.com/urfave/cli/[email protected]/app.go:310
main.main
/home/runner/work/k8up/k8up/cmd/k8up/main.go:30
runtime.main
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:250
Expected Behavior
No stack trace is thrown.
Steps To Reproduce
- Have downtime for a schedule
- Recover from the downtime
- Observe that multiple backup jobs and pods are started
- See the error in all pods that finish after the first one
Version of K8up
2.7.1 / 4.2.2
Version of Kubernetes
v1.26.4+k3s1
Distribution of Kubernetes
k3s