azuredisk-csi-driver
UltraDisk PVC resize doesn't clean up the FileSystemResizePending condition
What happened:
```yaml
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-08-10T23:12:10Z"
    message: Waiting for user to (re-)start a pod to finish file system resize of
      volume on node.
    status: "True"
    type: FileSystemResizePending
  phase: Bound
```
What you expected to happen: When the driver has completed the resize and no pod is bound to this PVC yet, the condition above is still not removed. Our controller, which checks these conditions, keeps waiting instead of spinning up the pod.
Once the resize is complete, the condition should be removed by the controller.
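For reference, a minimal way to inspect the stuck condition directly; the PVC name `ultra-pvc` is illustrative and matches the repro sketch further down:

```sh
# Print only the FileSystemResizePending condition of the PVC, if present.
# An empty result means the condition has been cleared.
kubectl get pvc ultra-pvc \
  -o jsonpath='{.status.conditions[?(@.type=="FileSystemResizePending")]}'
```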
How to reproduce it:
- Create an Azure ultra disk PVC
- Create a deployment claiming the PVC
- Resize the PVC
- Scale the deployment to 0 replicas
- Wait for the CSI driver to complete the resize
- Check the PVC's conditions (see the sketch after this list)
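A minimal sketch of the steps above, assuming a cluster whose nodes support Azure ultra disks; the object names, image, and sizes are illustrative, and the `UltraSSD_LRS`/`cachingMode: None` StorageClass parameters are assumptions about a typical ultra disk setup:

```sh
# 1. StorageClass, PVC, and Deployment for an Azure ultra disk (names are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-sc
provisioner: disk.csi.azure.com
parameters:
  skuName: UltraSSD_LRS
  cachingMode: None
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ultra-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ultra-sc
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ultra-test
spec:
  replicas: 1
  selector:
    matchLabels: {app: ultra-test}
  template:
    metadata:
      labels: {app: ultra-test}
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: ultra-pvc
EOF

# 2. Resize the PVC, then scale the workload down before the node-side resize can run.
kubectl patch pvc ultra-pvc -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
kubectl scale deployment ultra-test --replicas=0

# 3. After the controller-side resize finishes, the condition is still present.
kubectl get pvc ultra-pvc -o jsonpath='{.status.conditions}'
```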
Anything else we need to know?:
Environment:
- CSI Driver version: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.20.0
- Kubernetes version (use `kubectl version`): AKS v1.22.11
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
hi @dhilipkumars, after the disk PVC resize completes, there still needs to be a resize operation on the node to run resizefs on the mounted disk device, so I think the message is expected.
@andyzhangx when will that condition go away or what action removes that?
@dhilipkumars once a pod with that disk PVC is mounted on the node, the condition goes away, since resizefs is then performed on the mounted disk device on the node; it's just an info msg.
@andyzhangx thanks for the explanation
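A sketch of the step that clears the condition, following the explanation above: scaling the workload back up lets kubelet run the file system resize on the node, after which the condition should drop off. Object names follow the repro sketch and are illustrative:

```sh
# Scale the workload back up so the PVC is mounted on a node again;
# kubelet then performs the file system resize and clears the condition.
kubectl scale deployment ultra-test --replicas=1
kubectl rollout status deployment/ultra-test

# Once the pod is running, this should eventually print nothing.
kubectl get pvc ultra-pvc \
  -o jsonpath='{.status.conditions[?(@.type=="FileSystemResizePending")]}'
```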
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale