csi-test
Snapshots and PVCs: test and support arbitrary deletion order
From https://github.com/kubernetes-csi/csi-test/pull/297#discussion_r568520680:
We discussed this a bit further on Monday. @msau42 pointed out that it is valid for a CSI driver to refuse to delete PVCs that still have snapshots. It is equally valid to refuse to delete snapshots while the PVC still exists. In other words, a CSI driver only needs to support one deletion order, not both.
This means that picking LIFO order will work for some drivers, but not all.
The right approach is to retry a deletion after it has failed once: try to delete the snapshot, delete the PVC, and then, if the snapshot deletion failed, try the snapshot again. I think this can be added relatively easily by iterating twice over all resources instead of just once.
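A minimal sketch of that two-pass cleanup in Go. The `resource` type, its `blockedBy` field, and the `delete` method are hypothetical stand-ins for the sanity suite's real cleanup bookkeeping; they only simulate a driver that refuses one deletion order:

```go
package main

import (
	"errors"
	"fmt"
)

// resource is a hypothetical stand-in for a tracked snapshot or PVC.
type resource struct {
	name    string
	deleted bool
	// blockedBy points to a resource that must be gone first; nil if none.
	blockedBy *resource
}

// delete fails while a blocking resource still exists, mimicking a driver
// that only supports one deletion order.
func (r *resource) delete() error {
	if r.blockedBy != nil && !r.blockedBy.deleted {
		return errors.New(r.name + ": still in use")
	}
	r.deleted = true
	return nil
}

// cleanupAll iterates twice over all resources, so a deletion that failed
// on the first pass because of ordering succeeds on the second pass.
func cleanupAll(resources []*resource) error {
	var lastErr error
	for pass := 0; pass < 2; pass++ {
		lastErr = nil
		for _, r := range resources {
			if r.deleted {
				continue
			}
			if err := r.delete(); err != nil {
				lastErr = err
			}
		}
	}
	return lastErr
}

func main() {
	pvc := &resource{name: "pvc"}
	snapshot := &resource{name: "snapshot"}
	// This driver refuses to delete the PVC while the snapshot exists,
	// so LIFO (snapshot first) happens to work; cleanupAll succeeds
	// regardless of the order in which the resources were registered.
	pvc.blockedBy = snapshot
	if err := cleanupAll([]*resource{pvc, snapshot}); err != nil {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Println("all resources deleted")
}
```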
We also discussed writing tests that explicitly cover both deletion orders. One test can do "delete snapshot, delete pvc, delete snapshot again (if necessary)", the other "delete pvc, delete snapshot, delete pvc again (if necessary)". This works with all drivers and ensures that both orders are tested. But this should be added in a separate PR.
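The two explicit-order tests could look roughly like this; the `driver` type and its `deletePVC`/`deleteSnapshot` methods are hypothetical stubs simulating a driver that supports only one order, not the sanity framework's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// driver simulates a CSI driver that supports only one deletion order:
// if pvcFirst is true, snapshots cannot be deleted while the PVC exists;
// otherwise, the PVC cannot be deleted while the snapshot exists.
type driver struct {
	pvcFirst       bool
	pvcExists      bool
	snapshotExists bool
}

func (d *driver) deletePVC() error {
	if !d.pvcFirst && d.snapshotExists {
		return errors.New("PVC has snapshots")
	}
	d.pvcExists = false
	return nil
}

func (d *driver) deleteSnapshot() error {
	if d.pvcFirst && d.pvcExists {
		return errors.New("snapshot's PVC still exists")
	}
	d.snapshotExists = false
	return nil
}

// testSnapshotFirst: delete snapshot, delete pvc, delete snapshot again (if necessary).
func testSnapshotFirst(d *driver) error {
	firstErr := d.deleteSnapshot()
	if err := d.deletePVC(); err != nil {
		return err
	}
	if firstErr != nil {
		return d.deleteSnapshot()
	}
	return nil
}

// testPVCFirst: delete pvc, delete snapshot, delete pvc again (if necessary).
func testPVCFirst(d *driver) error {
	firstErr := d.deletePVC()
	if err := d.deleteSnapshot(); err != nil {
		return err
	}
	if firstErr != nil {
		return d.deletePVC()
	}
	return nil
}

func main() {
	// Both tests pass no matter which single order the driver supports.
	for _, pvcFirst := range []bool{true, false} {
		d1 := &driver{pvcFirst: pvcFirst, pvcExists: true, snapshotExists: true}
		d2 := &driver{pvcFirst: pvcFirst, pvcExists: true, snapshotExists: true}
		fmt.Println(testSnapshotFirst(d1) == nil, testPVCFirst(d2) == nil)
	}
}
```

The retry step ("again, if necessary") is what makes each test pass against drivers of either persuasion, while still exercising both orders.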
/assign @timoreimann
@pohly: GitHub didn't allow me to assign the following users: timoreimann.
Note that only kubernetes-csi members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign @timoreimann
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
I'll try to work on this one soon.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@timoreimann ping...
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen