
Unmount fails with target is busy error

MattPOlson opened this issue 2 years ago • 4 comments

What happened:

Pods with volume mounts that use this driver are getting stuck in the Terminating phase because the mounts fail to unmount with this error:

Unmounting arguments: /var/lib/kubelet/pods/dfe8b984-0e62-4111-bc65-4720358a42a1/volumes/kubernetes.io~csi/pvc-ad1d4cda-a2a8-453a-a0af-069c72a68e44/mount
Output: umount: /var/lib/kubelet/pods/dfe8b984-0e62-4111-bc65-4720358a42a1/volumes/kubernetes.io~csi/pvc-ad1d4cda-a2a8-453a-a0af-069c72a68e44/mount: target is busy.
I0424 19:31:00.963678       1 utils.go:76] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I0424 19:31:00.963704       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount","volume_id":"bdreporting.local/dev#pvc-d8160b18-54a3-4acd-884c-07cd7a47363c#"}
I0424 19:31:00.963756       1 nodeserver.go:100] NodeUnpublishVolume: unmounting volume bdreporting.local/dev#pvc-d8160b18-54a3-4acd-884c-07cd7a47363c# on /var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount
I0424 19:31:00.963788       1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount" (corruptedMount: false, mounterCanSkipMountPointChecks: true)
I0424 19:31:00.963797       1 mount_linux.go:362] Unmounting /var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount
E0424 19:31:00.971541       1 utils.go:81] GRPC error: rpc error: code = Internal desc = failed to unmount target "/var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount": unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount
Output: umount: /var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount: target is busy.
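
Exit status 32 from umount means the kernel still sees the mount as in use. A quick way to find out what is holding it open, using standard Linux tools on the affected node (the target path below is copied from the log above):

# List the processes with open files on the mount (PID, user, access type):
fuser -vm /var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount

# Alternatively, list the open files themselves:
lsof /var/lib/kubelet/pods/52b576ca-4a27-4e12-b1f7-42f5bb3b4dc1/volumes/kubernetes.io~csi/pvc-d8160b18-54a3-4acd-884c-07cd7a47363c/mount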

Also, while this is occurring, we see pods running on the same node experience mount failures with a "Broken Pipe" error; not sure if this is related or not.

What you expected to happen:

I expected the mounts to unmount when the pod is terminating.

How to reproduce it:

We have not been able to reproduce it reliably; it seems to happen when the disk the mounts are on is under heavy pressure.

Anything else we need to know?:

Environment:

  • CSI Driver version: v1.10.0
  • Kubernetes version (use kubectl version): 1.23
  • OS (e.g. from /etc/os-release): Fedora CoreOS 35
  • Kernel (e.g. uname -a): 5.18.5-100.fc35.x86_64
  • Install tools:
  • Others:

MattPOlson • Apr 25 '23 19:04

It's expected for the CSI driver to return a "target is busy" error when files on the file share are still open. To mitigate the stuck state, force-deleting the pod should work. It also indicates that the user should set a preStop hook to terminate the application when it receives the SIGTERM signal from Kubernetes; force unmount is not the graceful way. For example:
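
A minimal sketch of both mitigations; the pod, image, and script names here are hypothetical, only the preStop hook and --force mechanics are the point:

# Hypothetical pod name; bypasses graceful termination to clear a pod stuck in Terminating:
kubectl delete pod my-app-pod --grace-period=0 --force

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod            # hypothetical name
spec:
  containers:
    - name: app
      image: my-app:latest    # hypothetical image
      lifecycle:
        preStop:
          exec:
            # Hypothetical shutdown script: close all file handles on the
            # SMB mount so kubelet can unmount it cleanly afterwards.
            command: ["/bin/sh", "-c", "/app/shutdown.sh"]
      volumeMounts:
        - name: smb
          mountPath: /mnt/smb
  volumes:
    - name: smb
      persistentVolumeClaim:
        claimName: pvc-smb    # hypothetical PVC backed by this driver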

andyzhangx • Apr 27 '23 14:04

Have you seen the issue where mounts fail with a "Broken Pipe" error? It seems to occur around the same time.
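
One way to check whether those failures coincide with SMB session drops is to grep the node's kernel log for cifs messages (ordinary Linux tooling, sketched here):

# Kernel messages with readable timestamps, filtered to the cifs client:
dmesg -T | grep -i cifs

# Or the same via the journal:
journalctl -k | grep -i cifs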

MattPOlson • Apr 27 '23 17:04

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot • Jan 19 '24 07:01

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot • Feb 18 '24 07:02

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot • Mar 19 '24 08:03

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage robot's /close not-planned comment quoted above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot • Mar 19 '24 08:03