sig-storage-local-static-provisioner

Dealing with PVs and PVCs on deleted nodes

Open rbtcollins opened this issue 5 years ago • 13 comments

When a node is deleted - either due to failure or due to an autoscale event removing it - local static volumes on it are permanently destroyed. But the metadata for those PVs, and for any PVCs referencing them, is not.

This is frustrating because we have to take care not to reuse node names, yet in AKS node names are deterministic and commonly reused, as you can see from these sample names:

aks-fdbl32sv2-26122852-vmss000000   Ready    agent   27h   v1.15.11
aks-fdbl32sv2-26122852-vmss000001   Ready    agent   27h   v1.15.11
aks-fdbl32sv2-26122852-vmss000002   Ready    agent   27h   v1.15.11
aks-fdbl32sv2-26122852-vmss000003   Ready    agent   27h   v1.15.11

This causes rogue volumes and claims to resurrect themselves when a node pool is scaled up again - whether automatically or not.

Describe the solution you'd like in detail

I would like to tackle this from two angles:

To prevent this zombie-resurrection threat, I would like to extend the topology key with some sort of UUID for the node, for instance the machine ID (7b59f36627eb48f0af387b76c602cf9b from one of those examples); though this may require changes outside of this SIG and thus be future work, since we cannot benefit from things that we can't deploy in AKS.
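
The idea above can be sketched as follows. This is a hypothetical illustration, not part of the provisioner today: both the identity format and the function names (`make_node_identity`, `identity_matches`) are assumptions, and a real change would need to thread such a value through the provisioner's topology handling.

```python
# Hypothetical sketch: a node identity that combines the (reusable) node
# name with the (unique per incarnation) machine ID, so a PV pinned to an
# old incarnation of a reused node name no longer matches the new node.

def make_node_identity(node_name: str, machine_id: str) -> str:
    """Build a topology value that stays unique across node-name reuse."""
    return f"{node_name}/{machine_id}"

def identity_matches(pv_identity: str, node_name: str, machine_id: str) -> bool:
    """True only if the PV was created for this exact node incarnation."""
    return pv_identity == make_node_identity(node_name, machine_id)
```

With this, a recreated `vmss000000` with a fresh machine ID would simply not match the stale PV's affinity, instead of silently adopting it.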

To prevent the leak in the first place, I would like to have a controller which watches nodes, and when a node is removed from the cluster, removes all the claims and pvs that were on that node. This would be a separate controller, so that it can be opt-in rather than run implicitly in the existing daemonsets.
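
The core decision such a controller would make can be sketched like this. PVs are modeled as plain dicts shaped like the Kubernetes API objects; the function names are assumptions, and a real controller would watch Node delete events through informers rather than take snapshots.

```python
# Sketch of the cleanup decision: which local PVs point at nodes that no
# longer exist in the cluster?

def node_of_local_pv(pv: dict):
    """Extract the node a local PV is pinned to from its nodeAffinity."""
    try:
        terms = pv["spec"]["nodeAffinity"]["required"]["nodeSelectorTerms"]
        for term in terms:
            for expr in term.get("matchExpressions", []):
                if expr["key"] == "kubernetes.io/hostname":
                    return expr["values"][0]
    except (KeyError, IndexError):
        pass
    return None

def pvs_to_reap(pvs: list, live_nodes: set) -> list:
    """Names of local PVs whose node has been removed from the cluster."""
    return [
        pv["metadata"]["name"]
        for pv in pvs
        if (node := node_of_local_pv(pv)) is not None and node not in live_nodes
    ]
```

The matching PVCs would be found through each PV's `spec.claimRef` and deleted first, then the PV objects themselves.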

Describe alternatives you've considered

I have not considered any alternatives.

Additional context

rbtcollins avatar Jun 19 '20 02:06 rbtcollins

Basically, I never want to write this again:

kubectl get pv | awk '/^local/ { print $1}' | xargs kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}{"\n"}{end}' | awk '/aks-fdbl16sv2/ { print $1 }' | xargs kubectl delete pv

rbtcollins avatar Jun 19 '20 02:06 rbtcollins

yes, this is a general issue.

To prevent the leak in the first place, I would like to have a controller which watches nodes, and when a node is removed from the cluster, removes all the claims and pvs that were on that node. This would be a separate controller, so that it can be opt-in rather than run implicitly in the existing daemonsets.

I agree a cloud controller is required. I've written up a proposal for this: https://docs.google.com/document/d/1SA9epEwA3jPwibRV0ccQwJ2UfZXoeUYKyNxNegt0vn4

cofyc avatar Jun 19 '20 03:06 cofyc

Thank you for the link, I've added a thought to the document there. Is a prototype under way?

rbtcollins avatar Jun 22 '20 10:06 rbtcollins

As an intermediate step, would it help if we put an OwnerRef on the PV pointing to the Node object? PVC protection will prevent the PV from actually being deleted, but if a user comes in and deletes the PVC, then the PV object will get cleaned up instead of taking up cycles from the PV controller (which every few seconds goes through all PVs in the cluster).
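
The ownerReference shape that would make garbage collection do this is fixed by the Kubernetes API; what follows is only a sketch of stamping it onto a PV, with the objects modeled as plain dicts (a real implementation would PATCH the PV through the API server).

```python
# Sketch: give a PV an ownerReference pointing at its Node, so the garbage
# collector removes the PV object once the Node object is deleted.

def add_node_owner_ref(pv: dict, node: dict) -> dict:
    pv.setdefault("metadata", {})["ownerReferences"] = [{
        "apiVersion": "v1",
        "kind": "Node",
        "name": node["metadata"]["name"],
        "uid": node["metadata"]["uid"],  # ties the ref to this incarnation
    }]
    return pv
```

Because the reference carries the Node's UID, a recreated node with the same name but a new UID would not keep the stale PV alive.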

msau42 avatar Sep 15 '20 01:09 msau42

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Dec 14 '20 02:12 fejta-bot

/remove-lifecycle stale

rbtcollins avatar Dec 14 '20 06:12 rbtcollins

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Mar 14 '21 07:03 fejta-bot

/remove-lifecycle stale
/lifecycle frozen

cofyc avatar Mar 14 '21 14:03 cofyc

/help /kind feature

cdenneen avatar Aug 27 '21 18:08 cdenneen

@cdenneen: This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to this:

/help /kind feature

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Aug 27 '21 18:08 k8s-ci-robot

Maybe better scripts, depending on your needs:

kubectl get pv  -o custom-columns=NAME:metadata.name,PATH:spec.local.path,PVC:spec.claimRef.name,NODE:spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]  | grep ${NODE_NAME} |  awk '{ print $1}' > /tmp/delete_pv

kubectl get pv  -o custom-columns=NAME:metadata.name,PATH:spec.local.path,NAMESPACE:spec.claimRef.namespace,PVC:spec.claimRef.name,NODE:spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]  | grep ${NODE_NAME}  | grep -v none | awk '{ print $3" "$4}' > /tmp/delete_pvc

while read NAMESPACE PVC; do
  kubectl delete pvc -n "$NAMESPACE" "$PVC"
done </tmp/delete_pvc

for PV in `cat /tmp/delete_pv`;do kubectl delete pv $PV; done

pierreozoux avatar Sep 13 '22 15:09 pierreozoux

Since k8s 1.23 there is a feature for graceful node shutdown, and since 1.24 for non-graceful node shutdown. So maybe we can use it to delete PVCs from nodes?
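
A cleanup controller could key off the signal that feature uses: for non-graceful shutdown, an operator marks the dead node with the `node.kubernetes.io/out-of-service` taint. A minimal sketch of detecting it, with the Node as a plain dict (the `is_out_of_service` helper is hypothetical):

```python
# Sketch: recognize a node marked out-of-service (the non-graceful
# shutdown signal), which a cleanup controller could treat as permission
# to release the local PVs/PVCs bound to that node.

OUT_OF_SERVICE = "node.kubernetes.io/out-of-service"

def is_out_of_service(node: dict) -> bool:
    taints = node.get("spec", {}).get("taints", [])
    return any(t.get("key") == OUT_OF_SERVICE for t in taints)
```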

GrigorievNick avatar Nov 06 '22 09:11 GrigorievNick