ESTALE 116 Error
Hi,
We are currently observing issues in the following scenarios:
- When the NFS server pod restarts because connectivity between the NFS server pod and the backing EBS volume is disrupted.
- When the worker node hosting the NFS server pod reboots.
- When the worker node hosting the NFS server pod is recreated as a result of EC2 maintenance and the NFS server pod is recreated on a fresh EC2 worker node.
In all of these cases, the application pods that are using the mounts hit a stale file handle error like the one below:
user:~$ cd /mounted
-bash: cd: /mounted: Stale file handle
Currently, manual intervention is always needed: the application pods must be restarted before they recover.
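For reference, a rough sketch of that manual workaround, assuming the affected application is managed by a Deployment named my-app in the namespace my-namespace (both names are hypothetical):

# Restart the application pods so they remount the NFS share and drop the stale handle
kubectl -n my-namespace rollout restart deployment/my-app
# Wait for the replacement pods to become ready
kubectl -n my-namespace rollout status deployment/my-app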
Any idea how to resolve this?
Hello @infinitydon,
Have you managed to resolve this issue? I'm experiencing the same problem.
Hi again. Adding the parameter "-device-based-fsids=false" to the args section of the deployment/statefulset container solves my problem.
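For anyone else hitting this, here is a minimal sketch of applying that argument with a JSON patch, assuming the provisioner runs as a StatefulSet named nfs-server-provisioner in the nfs namespace and that its first container already has an args list (all of these names are assumptions; adjust to your setup). Disabling device-based FSIDs presumably keeps the export FSIDs stable rather than deriving them from the underlying block device, which changes whenever the server pod is rescheduled:

# Append -device-based-fsids=false to the provisioner container's args
# (StatefulSet name, namespace, and container index are assumptions)
kubectl -n nfs patch statefulset nfs-server-provisioner --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"-device-based-fsids=false"}]'
# The pod template change triggers a rolling update; watch it complete
kubectl -n nfs rollout status statefulset nfs-server-provisioner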
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten