
Stale NFS file handle

Open · alexyao2015 opened this issue 3 years ago • 18 comments

What happened:

When the SMB server is terminated while its shares are still mounted in pods, the CSI driver node pod does not unmount the stale mount and create a new one. This prevents new pods from starting even after the server is back online; instead, the error below results. It can be resolved manually by restarting all the csi-smb-node pods, as sketched after the error.

Error: failed to generate container "88d821046af27afe3710f0ffc413529b7eb46f2844156cfb154099dc94d75984"
spec: failed to generate spec: failed to stat "/var/lib/kubelet/pods/7244575c-5464-4faa-98f1-a80a2366287f/volumes/kubernetes.io~csi/smb-pv/mount": 
stat /var/lib/kubelet/pods/7244575c-5464-4faa-98f1-a80a2366287f/volumes/kubernetes.io~csi/smb-pv/mount: stale NFS file handle
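
For reference, the manual workaround is just a restart of the node plugin DaemonSet; a sketch, assuming the default kube-system namespace and the csi-smb-node name/labels from the upstream manifests:

# roll the DaemonSet so every csi-smb-node pod is recreated
kubectl -n kube-system rollout restart daemonset csi-smb-node
# or delete the pods directly and let the DaemonSet bring them back
kubectl -n kube-system delete pod -l app=csi-smb-node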

What you expected to happen:

The stale mount should be cleaned up and remounted automatically, without having to restart the csi-smb-node pods manually.

How to reproduce it:

  1. Start a pod with an SMB mount (a minimal setup sketch follows these steps).
  2. Kill the SMB server.
  3. Restart the pod (the mount should fail because the server is down).
  4. Start the SMB server again.
  5. Observe that the pod is never able to start, even though the server is back up.
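
A minimal sketch of the setup in step 1, assuming a share at //smb-server/share and a mount-credentials secret named smbcreds; all names here are illustrative, not taken from this report:

# static PV backed by the SMB CSI driver; bind a PVC to it and mount that PVC in the pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: smb-pv  # must be unique across the cluster
    volumeAttributes:
      source: //smb-server/share
    nodeStageSecretRef:
      name: smbcreds
      namespace: default
EOF

A PVC bound to this PV is then mounted in the test pod; killing and restarting the SMB server around a pod restart reproduces the stale handle.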

Anything else we need to know?:

This may be related to #164, but it shows a different error message.

Environment:

  • CSI Driver version: v1.5.0
  • Kubernetes version (use kubectl version): v1.20
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

alexyao2015 avatar Feb 10 '22 22:02 alexyao2015

The error says stale NFS file handle, yet this is an SMB server.

andyzhangx avatar Feb 11 '22 13:02 andyzhangx

Yes, that's correct. It's perplexing that it reports stale NFS when the mount is SMB. I am definitely using the SMB CSI driver, not NFS.

alexyao2015 avatar Feb 11 '22 14:02 alexyao2015
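
(A likely explanation for the wording: ESTALE is a generic kernel errno that any filesystem, CIFS/SMB included, can return when the backing object disappears, and the "NFS" text is just how Go, which kubelet and containerd are written in, renders errno 116 on Linux. glibc itself dropped the NFS mention years ago; a quick check on a Linux box with Python available:)

# print glibc's current message for errno 116 (ESTALE); no NFS mentioned
python3 -c 'import errno, os; print(errno.ESTALE, os.strerror(errno.ESTALE))'
# 116 Stale file handle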

I have the same issue with the SMB CSI driver. For some reason the "stale NFS file handle" error only affects Linux pods trying to mount the SMB share; Windows pods show no error message and continue to mount the share successfully after an SMB server restart.

Another issue: restarting the csi-smb-node pods didn't fix the problem for us...

jrbe228 avatar Apr 25 '22 04:04 jrbe228
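
(To confirm the staleness is at the kernel mount rather than in the driver, it can be checked from the affected node; a sketch, assuming shell access to the node, with <volume-id> as a placeholder:)

# list CIFS mounts under the kubelet directory
findmnt -t cifs | grep /var/lib/kubelet
# stat-ing a stale mount point reproduces the error outside Kubernetes
stat /var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/<volume-id>/globalmount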

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 24 '22 05:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 23 '22 06:08 k8s-triage-robot

/remove-lifecycle rotten

alexyao2015 avatar Aug 24 '22 14:08 alexyao2015

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 22 '22 15:11 k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 05 '23 14:03 k8s-triage-robot

/remove-lifecycle stale

alexyao2015 avatar Mar 05 '23 17:03 alexyao2015

Restarting the csi-smb-node pods didn't recover the stale mount for us either.

Is there another way to recover from this issue without rebooting the node?

rmoreas avatar Apr 18 '23 11:04 rmoreas
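
(A possible recovery path without rebooting, assuming shell access to the node; the two paths are the mount points named in this thread, with placeholders standing in for the elided IDs:)

# lazily force-unmount the stale per-pod bind mount, then the staging mount
umount -f -l /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount
umount -f -l /var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/<volume-id>/globalmount
# then delete the pod so kubelet re-stages the volume through the driver
kubectl delete pod <pod-name> -n <namespace>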

From time to time we hit the same stale NFS file handle error when restarting a pod that uses SMB mounts. I found a workaround: scaling the application down and back up... I know that's not acceptable for production environments. I also noticed that the same share is mounted twice (see the check after this comment):

  • once into the pod: /var/lib/kubelet/pods/xxx/volumes/kubernetes.io~csi/xxx/mount
  • and another into the driver: /var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/xxx/globalmount

ThomVivet avatar May 23 '23 16:05 ThomVivet
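
(The double mount is expected CSI behavior rather than a bug in itself: NodeStageVolume mounts the share once at the globalmount staging path, and NodePublishVolume bind-mounts that into each pod's volumes directory. A check from the node, keeping the xxx placeholders from the comment above:)

# the staging mount is the real CIFS mount
findmnt /var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/xxx/globalmount
# the per-pod path shows the same source, i.e. a bind mount of the staging path
findmnt /var/lib/kubelet/pods/xxx/volumes/kubernetes.io~csi/xxx/mount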

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 21 '24 02:01 k8s-triage-robot

/remove-lifecycle stale

DarkFM avatar Feb 13 '24 22:02 DarkFM

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 13 '24 23:05 k8s-triage-robot