
snapshots and pvcs: test and support arbitrary deletion order

Open pohly opened this issue 4 years ago • 8 comments

From https://github.com/kubernetes-csi/csi-test/pull/297#discussion_r568520680:

We discussed this a bit further on Monday. @msau42 pointed out that it is valid for a CSI driver to refuse to delete PVCs that still have snapshots. It is also valid to refuse to delete snapshots while the source PVC still exists. In other words, a CSI driver only needs to support one deletion order, not both.

This means that picking a LIFO order will work for some drivers, but not for all of them.

The right approach is to retry a deletion after it has failed once: try to delete the snapshot, delete the PVC, and then, if the snapshot deletion failed, try it again. I think this can be added relatively easily by iterating twice over all resources instead of just once, as sketched below.
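A minimal sketch of that "iterate twice" idea, not the actual csi-test cleanup code; the `deleteFunc` callback and `DeleteAll` helper are assumptions introduced here for illustration:

```go
package cleanup

import "fmt"

// deleteFunc is a hypothetical callback that deletes a single resource.
type deleteFunc func() error

// DeleteAll runs up to two passes over the pending deletions. A driver that
// rejects one deletion order succeeds on the second pass, because the
// blocking resource was already removed in the first. Only errors that
// persist after the second pass are reported.
func DeleteAll(pending map[string]deleteFunc) error {
	for pass := 0; pass < 2 && len(pending) > 0; pass++ {
		for name, del := range pending {
			if err := del(); err == nil {
				delete(pending, name) // succeeded, no retry needed
			}
		}
	}
	var leftover []string
	for name := range pending {
		leftover = append(leftover, name)
	}
	if len(leftover) > 0 {
		return fmt.Errorf("could not delete after two passes: %v", leftover)
	}
	return nil
}
```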

We also discussed writing tests that explicitly cover both deletion orders. One test can do "delete snapshot, delete PVC, delete snapshot again (if necessary)", the other "delete PVC, delete snapshot, delete PVC again (if necessary)". That works with all drivers and ensures that both orders are tested (see the sketch below), but it should be added in a separate PR.
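A hedged sketch of one of those two ordered tests; the resource types and helpers (`createVolumeWithSnapshot`, `deleteVolume`, `deleteSnapshot`) are hypothetical placeholders, not the csi-test sanity API, and the mirror-image test simply swaps the roles of snapshot and volume:

```go
package sanity_test

import "testing"

// Hypothetical handles and helpers; a real test would wrap the driver's
// CSI CreateVolume/CreateSnapshot/Delete* calls.
type volume struct{ id string }
type snapshot struct{ id string }

func createVolumeWithSnapshot(t *testing.T) (*volume, *snapshot) {
	t.Helper()
	return &volume{id: "vol-1"}, &snapshot{id: "snap-1"}
}

func deleteVolume(v *volume) error     { return nil } // placeholder
func deleteSnapshot(s *snapshot) error { return nil } // placeholder

// TestDeleteSnapshotThenVolume covers the order "delete snapshot, delete
// volume, delete snapshot again (if necessary)".
func TestDeleteSnapshotThenVolume(t *testing.T) {
	vol, snap := createVolumeWithSnapshot(t)

	// A driver that requires volume-first deletion may legitimately fail here.
	snapErr := deleteSnapshot(snap)

	if err := deleteVolume(vol); err != nil {
		t.Fatalf("deleting volume: %v", err)
	}

	// Retry now that the volume is gone; only a second failure is fatal.
	if snapErr != nil {
		if err := deleteSnapshot(snap); err != nil {
			t.Fatalf("deleting snapshot after deleting volume: %v", err)
		}
	}
}
```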

pohly avatar Feb 03 '21 10:02 pohly

/assign @timoreimann

pohly avatar Feb 03 '21 10:02 pohly

@pohly: GitHub didn't allow me to assign the following users: timoreimann.

Note that only kubernetes-csi members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide

In response to this:

/assign @timoreimann

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 03 '21 10:02 k8s-ci-robot

/assign

timoreimann avatar Feb 03 '21 12:02 timoreimann

I'll try to work on this one soon.

timoreimann avatar Apr 05 '21 10:04 timoreimann

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

fejta-bot avatar Jul 04 '21 11:07 fejta-bot

/remove-lifecycle stale

@timoreimann ping...

pohly avatar Jul 06 '21 09:07 pohly

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 04 '21 10:10 k8s-triage-robot

/remove-lifecycle stale

/lifecycle frozen

pohly avatar Oct 04 '21 12:10 pohly