nfs-subdir-external-provisioner
mountOptions on EKS
Hi folks:
I am trying to add mountOptions to an existing PVC using this provisioner on our k8s cluster. We're using EFS on EKS, and AWS has recommended adding the "noresvport" option to the mount.
What I've done:
Edited the Helm values to add:
nfs-subdir-external-provisioner:
  nfs:
    mountOptions:
      - rsize=1048576
      - wsize=1048576
      - hard
      - timeo=600
      - retrans=2
      - noresvport
      - _netdev
(these are all AWS' recommendations)
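For reference, a rough sketch of how a change like this might be rolled out and checked (release, chart and namespace names here are assumptions; the top-level nfs-subdir-external-provisioner: key suggests the provisioner is installed as a subchart of a parent chart):
# Hypothetical names; adjust to your own release, chart and namespace
$ helm upgrade my-release ./my-parent-chart -n my-namespace -f values.yaml
$ kubectl get storageclass nfs-client -o yaml | grep -A 8 mountOptions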
What I see:
mountOptions are added to the StorageClass and to PVs created from the StorageClass.
NOTE: PVs and PVCs need to be recreated for this all to take effect (which isn't ideal, but not a huge issue); a sketch of what that looks like is below.
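As a sketch (the claim name, namespace and manifest file are hypothetical), re-provisioning a claim from the updated StorageClass could look like this; note that deleting a PVC normally removes the backing data unless archiveOnDelete or a Retain reclaim policy preserves it:
# Hypothetical names; back up data first
$ kubectl delete pvc my-claim -n my-namespace
$ kubectl apply -f my-claim.yaml -n my-namespace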
What I don't see:
When running "mount" in a pod, I don't see the option in the mount options:
xxxxxxxxxxxxx.amazonaws.com:/xxxxxxx-pvc-xxxxxxxxx on /xxxxxxxx type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=xx.xx.xx.xx,local_lock=none,addr=xx.xx.xx.xx)
I've seen issues posted in the old chart about mountOptions not being propagated. Is a setting missing? Is that still not working?
@yonatankahana This might be a critical issue, as AWS has notified users that some changes are being made on 10/1 and the noresvport option should be in place by then. Where are these mount options actually set on the pods? Happy to submit a PR if I can.
There was an issue with mountOptions and it was fixed 2 years ago (#28). If you are using the latest version, it seems the issue is not related to the provisioner itself but to the options. If you see that the provisioner did copy the mountOptions from the StorageClass to the PV, that is where its responsibility ends.
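A quick way to confirm that the provisioner did its part (the PV name is a placeholder) would be something like:
# Prints whatever mountOptions were copied onto the PV spec
$ kubectl get pv <pv-name> -o jsonpath='{.spec.mountOptions}'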
I see noresvport being effective only together with nfsvers=3 (you are using 4.1), though I can't say exactly why.
Hi!
I'm seeing the same behaviour. After adding the mountOptions, the StorageClass gets modified correctly:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: nfs-subdir-external-provisioner
    app.kubernetes.io/managed-by: Helm
    chart: nfs-subdir-external-provisioner-4.0.18
    heritage: Helm
    release: nfs
  name: nfs-client
mountOptions:
  - nfsvers=4.1
  - rsize=1048576
  - wsize=1048576
  - hard
  - timeo=600
  - retrans=2
  - noresvport
parameters:
  archiveOnDelete: "true"
provisioner: cluster.local/nfs-nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
In fact, when adding the options, the chart creates a new PV/PVC for the NFS provisioner that didn't exist before:
❯ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs-nfs-subdir-external-provisioner Bound pv-nfs-nfs-subdir-external-provisioner 10Mi RWO 29m
❯ k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-nfs-subdir-external-provisioner 10Mi RWO Retain Bound nfs/pvc-nfs-nfs-subdir-external-provisioner 29m
I'm assuming this is created to force the mount options, but I'm not sure. The options appear on the PV, but they are not reflected in the actual mounts on the server.
❯ k get pv pv-nfs-nfs-subdir-external-provisioner -o yaml | grep noresvport
- noresvport
docker@minikube:~$ mount | grep noresvport # nothing comes out here
docker@minikube:~$
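One cross-check that might help here (assuming the nfs-utils client tools are installed on the node): nfsstat -m prints the options per NFS mount point, which is sometimes easier to read than grepping the mount output:
# Lists each NFS mount with the flags actually negotiated with the server
$ nfsstat -m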
I've tried modifying other options, like:
- timeo=300
And they don't work either. The option appears on the StorageClass, but the volume doesn't get mounted with it (the mount point I see when running mount shows timeo=600, the default value).
@yonatankahana do you think this is a Kubernetes issue? I'm running my tests on minikube and can reproduce what @dstieglitz saw on AWS. I'm using Kubernetes 1.26.7, but I did not find any issue regarding mountOptions.
EDIT: I've also tried mounting the share manually on the server with the same options, and that failed as well. It's very strange; those are the mount options that AWS recommends for the EFS service.
$ mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport IP:/ /mnt/
$ mount | grep mnt
IP:/ on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.49.2,local_lock=none,addr=IP)
Thanks
What component takes the PV mountOptions and actually calls the mount command on the pod? Is that in this provisioner or somewhere else?
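As far as I can tell, the PVs created by this provisioner are plain in-tree nfs volumes, so the actual mount is performed by kubelet on the node, not inside the pod and not by the provisioner itself. A rough way to inspect it from the node (the minikube ssh step and the pvc- filter are assumptions):
# Run on the node (e.g. minikube ssh, or SSH to the EKS worker node)
$ findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS | grep pvc-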
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.