
Ultra disk PVC resize doesn't clean up the FileSystemResizePending condition

Open · dhilipkumars opened this issue 3 years ago • 4 comments

What happened:

status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-08-10T23:12:10Z"
    message: Waiting for user to (re-)start a pod to finish file system resize of
      volume on node.
    status: "True"
    type: FileSystemResizePending
  phase: Bound

What you expected to happen: When the driver has completed a resize and there is no pod bound to this PVC yet, it still does not remove the condition above. Our controller, which checks these conditions here, keeps waiting and never spins up the pod.

If the resize is complete, the controller should remove the condition above.
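
For reference, one way to check from the command line whether the condition is still present (the PVC name data-pvc is a placeholder):

  # Prints the FileSystemResizePending condition if it is still set; empty output means it was removed.
  # "data-pvc" is a placeholder PVC name.
  kubectl get pvc data-pvc -o jsonpath='{.status.conditions[?(@.type=="FileSystemResizePending")]}'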

How to reproduce it:

  • Create an Azure ultra disk PVC
  • Create a deployment claiming the PVC
  • Resize the PVC
  • Scale the deployment to 0 replicas
  • Wait for the CSI driver to complete the resize
  • Check the PVC's conditions (a command-line sketch of these steps follows below)
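
A rough command-line sketch of the steps above. The StorageClass, PVC, deployment, and manifest file names are assumptions for illustration; ultra disks on AKS typically use a custom StorageClass with skuName UltraSSD_LRS:

  # Assumed names: PVC "data-pvc", deployment "data-app", manifests "ultra-pvc.yaml" / "data-app-deployment.yaml".
  kubectl apply -f ultra-pvc.yaml            # PVC referencing the ultra disk StorageClass
  kubectl apply -f data-app-deployment.yaml  # deployment mounting data-pvc
  kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"16Gi"}}}}'
  kubectl scale deployment data-app --replicas=0
  # After the controller-side expansion finishes, the condition remains on the PVC:
  kubectl get pvc data-pvc -o jsonpath='{.status.conditions}'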

Anything else we need to know?:

Environment:

  • CSI Driver version: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.20.0
  • Kubernetes version (use kubectl version): AKS: v1.22.11
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

dhilipkumars avatar Aug 10 '22 23:08 dhilipkumars

hi @dhilipkumars after the disk PVC resize completes, there still has to be a resize operation on the node to run resizefs on the mounted disk device, so I think the message is expected.

andyzhangx avatar Aug 11 '22 02:08 andyzhangx

@andyzhangx when will that condition go away or what action removes that?

dhilipkumars avatar Aug 11 '22 14:08 dhilipkumars

@andyzhangx when will that condition go away or what action removes that?

@dhilipkumars if a pod with that disk PVC mounted is running on a node, the condition goes away, since the kubelet then performs resizefs on the mounted disk device on that node; it's just an informational message.
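
To illustrate, scheduling any pod that mounts the PVC is enough to trigger the node-side filesystem resize and clear the condition. A minimal sketch, where the pod and claim names are placeholders:

  # Minimal pod that mounts the PVC so the node-side resize can run.
  # "data-pvc" is a placeholder claim name.
  apiVersion: v1
  kind: Pod
  metadata:
    name: fs-resize-trigger
  spec:
    containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
      volumeMounts:
      - name: data
        mountPath: /data
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc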

andyzhangx avatar Aug 11 '22 14:08 andyzhangx

@andyzhangx thanks for the explanation

dhilipkumars avatar Aug 12 '22 12:08 dhilipkumars

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 10 '22 12:11 k8s-triage-robot