sig-storage-local-static-provisioner
Deletion of PVCs on StatefulSet scale-up due to low intervals
Issue with node-cleanup-controller
Automatic deletion of PVCs (PersistentVolumeClaims) and PVs (PersistentVolumes) on scale-up of StatefulSet pods. We set a low discovery interval and deletion delay so that the PVC is deleted each time a StatefulSet pod is evicted. However, the issue arises when the StatefulSet is scaled up: the new PVC/PV are created first and, because of the short discovery interval and deletion delay, they are then deleted automatically.
Expected Behavior:
When a StatefulSet scale-up operation occurs, the newly created PVC/PV should not be deleted.
Steps to Reproduce:
To reproduce the issue, please follow the steps below:
1. Set the discovery interval to 5 seconds and the deletion delay to 1 second.
2. Deploy a StatefulSet with a PVC (see the sketch below).
3. Scale the StatefulSet up.
4. Observe that the newly created PVC/PV are deleted automatically because of the low interval times.

Anything else we need to know?:
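For illustration, a minimal StatefulSet that matches these steps. All names here are placeholders, and `storageClassName` must match a class actually served by your local-static-provisioner deployment:

```yaml
# Minimal reproduction sketch; names and storageClassName are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: repro
spec:
  serviceName: repro
  replicas: 1
  selector:
    matchLabels:
      app: repro
  template:
    metadata:
      labels:
        app: repro
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-storage   # placeholder; must map to locally discovered volumes
        resources:
          requests:
            storage: 1Gi
```

Scale it with `kubectl scale statefulset repro --replicas=3` and watch `kubectl get pvc -w`; per this report, the new claims (e.g. `data-repro-1`, `data-repro-2`) disappear shortly after being created.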
Environment:
- CSI Driver version: none; we are using a local NVMe disk on EC2
- Kubernetes version (use `kubectl version`): 1.23
- OS (e.g. from /etc/os-release): "Amazon Linux 2"
- Kernel (e.g. `uname -a`): 5.10.186-179.751.amzn2.x86_64
Update: this seems to be a bug in version 2.6.0; the real problem is that the controller tried to delete a PV from an active node.
/cc @msau42
The deletion logic should only be triggered when the Node doesn't exist. Can you clarify when, in the scale-up case, the Node object gets created?
There could also be a race condition where the controller does not yet see a new Node when it is processing a PV.
One thing that may help is to make sure we only process PVCs that are Bound.
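A hedged sketch of those two guards, using client-go listers; the function and variable names are illustrative, not the provisioner's actual code:

```go
// Sketch only: illustrates the guards discussed above, not the
// provisioner's real implementation.
package cleanup

import (
	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// shouldDeletePVC returns true only when the PVC's node is confirmed gone
// and the claim is Bound, so freshly created (Pending) claims from a
// StatefulSet scale-up are never swept up.
func shouldDeletePVC(nodeLister corelisters.NodeLister, pvc *v1.PersistentVolumeClaim, nodeName string) (bool, error) {
	// Guard 1: only act when the Node object is absent. Note that a lister
	// whose cache has not yet observed a newly created Node also returns
	// NotFound, which is exactly the race condition described above; a real
	// fix would need a live GET or a grace period before trusting this.
	if _, err := nodeLister.Get(nodeName); err == nil {
		return false, nil // node exists: never delete
	} else if !apierrors.IsNotFound(err) {
		return false, err // transient error: retry later
	}

	// Guard 2: skip PVCs that are not Bound, so claims still being
	// provisioned during a scale-up are left alone.
	if pvc.Status.Phase != v1.ClaimBound {
		return false, nil
	}
	return true, nil
}
```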
cc @mattcary
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten