csi-driver-smb
Stale NFS file handle
What happened:
When the SMB server is terminated while its share is still mounted into pods, the CSI node driver does not unmount the stale mount and create a fresh one once the server returns. This prevents new pods from starting even after the server is back online; instead, the error below results. It can be worked around manually by restarting all the csi-smb-node pods.
Error: failed to generate container "88d821046af27afe3710f0ffc413529b7eb46f2844156cfb154099dc94d75984"
spec: failed to generate spec: failed to stat "/var/lib/kubelet/pods/7244575c-5464-4faa-98f1-a80a2366287f/volumes/kubernetes.io~csi/smb-pv/mount":
stat /var/lib/kubelet/pods/7244575c-5464-4faa-98f1-a80a2366287f/volumes/kubernetes.io~csi/smb-pv/mount: stale NFS file handle
What you expected to happen:
Not having to restart the csi-smb-node pods manually for the volume to mount again.
How to reproduce it:
- Start a pod with an SMB mount.
- Kill the SMB server.
- Restart the pod (it should not attach because the server is down).
- Start the SMB server again.
- Observe that the pod is never able to start, even though the server is back up.
Anything else we need to know?:
This may be related to #164, but it is showing a different error message.
Environment:
- CSI Driver version: v1.5.0
- Kubernetes version (use kubectl version): v1.20
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others:
The error shows "stale NFS file handle" while this is an SMB server?
Yes, this is correct. It's perplexing why it shows a stale NFS error when the mount is SMB. I am definitely using the SMB CSI driver and not NFS.
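For what it's worth, the wording comes from the Linux errno table rather than from anything NFS-specific in the driver: stat() on a mount whose server-side handle has gone away fails with the generic ESTALE errno, and Go (which both kubelet and the CSI driver are written in) spells that errno as "stale NFS file handle" on Linux regardless of the filesystem, CIFS included. A minimal Go sketch of the check, using a purely hypothetical globalmount path:

package main

import (
    "errors"
    "fmt"
    "os"
    "syscall"
)

func main() {
    // Hypothetical path to a CIFS mount whose SMB server was restarted.
    const mountPath = "/var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/example/globalmount"

    if _, err := os.Stat(mountPath); err != nil {
        // errors.Is unwraps the *fs.PathError down to the raw errno.
        if errors.Is(err, syscall.ESTALE) {
            // Errno 116 prints as "stale NFS file handle" on Linux,
            // even though the underlying mount is cifs.
            fmt.Println("stale handle detected:", err)
        } else {
            fmt.Println("different error:", err)
        }
    }
}

So "stale NFS file handle" in the stat error above just means ESTALE; recovery still requires unmounting and remounting the SMB share.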
I have the same issue using the SMB CSI driver. For some reason the "stale NFS file handle" error only affects Linux pods trying to mount the SMB share; Windows pods show no error and continue to mount the share successfully after an SMB server restart.
Another issue: restarting the csi-smb-node pods didn't seem to fix the problem...
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Restarting csi-smb-node pods didn't recover the stale mount for us either.
Is there another way to recover from this issue without rebooting the node?
From time to time we hit the same "stale NFS file handle" error message when restarting a pod that uses SMB mounts.
I found a workaround by scaling the application down and back up... I know it's not good for production environments.
I also noticed that the same share is mounted twice:
- once into the pod: /var/lib/kubelet/pods/xxx/volumes/kubernetes.io~csi/xxx/mount
- and once into the driver: /var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/xxx/globalmount
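The double mount is how CSI node plugins generally work: the share is staged once at the plugin's globalmount path (NodeStageVolume) and then bind-mounted into each pod's volume directory (NodePublishVolume), so a stale handle has to be cleared at both levels before a fresh mount can succeed. Below is a rough sketch, not the driver's actual code, of what automatic recovery on the node could look like for the two paths mentioned above; the volume ID and pod UID placeholders are hypothetical:

package main

import (
    "errors"
    "fmt"
    "os"
    "syscall"
)

// cleanupIfStale lazily detaches path when stat reports ESTALE, so that the
// next NodeStage/NodePublish call can mount the share again.
func cleanupIfStale(path string) error {
    _, err := os.Stat(path)
    if err == nil || !errors.Is(err, syscall.ESTALE) {
        return err // healthy mount, or an unrelated error
    }
    // MNT_DETACH is a lazy unmount (like "umount -l"): the mount point is
    // detached immediately and torn down once nothing uses it anymore.
    return syscall.Unmount(path, syscall.MNT_DETACH)
}

func main() {
    // Hypothetical volume ID and pod UID; the real paths come from kubelet.
    paths := []string{
        "/var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/<volume-id>/globalmount",
        "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/smb-pv/mount",
    }
    for _, p := range paths {
        if err := cleanupIfStale(p); err != nil {
            fmt.Printf("cleanup failed for %s: %v\n", p, err)
        }
    }
}

Until the driver does something like this itself, the manual equivalent is a lazy unmount (umount -l) of both paths on the affected node, followed by restarting the pod.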
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale