dynamic-nfs-provisioner
nfs mount failed: Not supported
Describe the bug: I installed OpenEBS on top of k3d 5.4.4 as a HelmRelease using the nfs-provisioner chart, with the following values (in the HelmRelease YAML this block sits under spec.values; a minimal HelmRelease sketch follows the block):
ndm:
  enabled: false
ndmOperator:
  enabled: false
localprovisioner:
  enabled: false
openebsNDM:
  enabled: false
nfsProvisioner:
  enabled: true
nfsStorageClass:
  name: nfs
  reclaimPolicy: Retain
  backendStorageClass: "local-path"
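A minimal sketch of how that values block could be wrapped in a HelmRelease, assuming Flux v2's helm.toolkit.fluxcd.io API (the chart source name and namespaces are illustrative, not taken from the actual setup):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: openebs
  namespace: openebs
spec:
  interval: 10m
  chart:
    spec:
      chart: nfs-provisioner          # chart name matches the chart=nfs-provisioner-0.9.0 label below
      sourceRef:
        kind: HelmRepository
        name: openebs                 # illustrative; point at the actual HelmRepository
        namespace: flux-system
  values:
    # ndm / ndmOperator / localprovisioner / openebsNDM toggles as shown above, plus:
    nfsProvisioner:
      enabled: true
    nfsStorageClass:
      name: nfs
      reclaimPolicy: Retain
      backendStorageClass: "local-path"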
I then create my deployments (also via Helm releases), which request a couple of PVCs using the given StorageClass name "nfs". I can see the PVCs created correctly, but they cannot be mounted by the requesting pods, whose events show these errors (a minimal sketch of such a PVC follows the error output):
Mounting arguments: -t nfs 10.43.4.138:/ /var/lib/kubelet/pods/c2027ba1-b6c0-4e85-a842-c336a357992b/volumes/kubernetes.io~nfs/pvc-32ae907a-2045-4658-aa86-33512a6b9867
Output: mount: mounting 10.43.4.138:/ on /var/lib/kubelet/pods/c2027ba1-b6c0-4e85-a842-c336a357992b/volumes/kubernetes.io~nfs/pvc-32ae907a-2045-4658-aa86-33512a6b9867 failed: Not supported
10.43.x.x is the service network, pods are on the 10.42.x.x network...
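For reference, a minimal sketch of the kind of PVC the application charts request (the name, size, access mode and StorageClass mirror the listings below; the actual objects are created by the application Helm releases):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: inst1-core-data-lts   # as listed in the application namespace below
spec:
  accessModes:
    - ReadWriteMany           # RWX, served over NFS
  storageClassName: nfs       # the StorageClass created by the nfs-provisioner chart
  resources:
    requests:
      storage: 1Gi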
Expected behaviour: pods start with their NFS PVCs mounted.
The output of the following commands will help us better understand what's going on:
kubectl get pods -n <openebs_namespace> --show-labels
NAME READY STATUS RESTARTS AGE LABELS
openebs-nfs-provisioner-78fbfdd8c6-nvhtq 1/1 Running 0 50m app=nfs-provisioner,chart=nfs-provisioner-0.9.0,component=nfs-provisioner,heritage=Helm,name=openebs-nfs-provisioner,openebs.io/component-name=openebs-nfs-provisioner,openebs.io/version=0.9.0,pod-template-hash=78fbfdd8c6,release=openebs
nfs-pvc-b46d0e46-30e7-4f59-ba56-fecdfb57a3bf-6777f6d4dd-zs9tr 1/1 Running 0 46m openebs.io/nfs-server=nfs-pvc-b46d0e46-30e7-4f59-ba56-fecdfb57a3bf,pod-template-hash=6777f6d4dd
nfs-pvc-32ae907a-2045-4658-aa86-33512a6b9867-68c65f964d-p8zm7 1/1 Running 0 46m openebs.io/nfs-server=nfs-pvc-32ae907a-2045-4658-aa86-33512a6b9867,pod-template-hash=68c65f964d
kubectl get pvc -n <openebs_namespace>
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc-b46d0e46-30e7-4f59-ba56-fecdfb57a3bf Bound pvc-03d661be-d385-4abf-9d9b-5f05c1d062d3 1Gi RWO local-path 47m
nfs-pvc-32ae907a-2045-4658-aa86-33512a6b9867 Bound pvc-da5f9eca-8610-4927-9cb4-ab60625688d7 1Gi RWO local-path 47m
kubectl get pvc -n <application_namespace>
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
datadir-common-mongodb-0 Bound pvc-fe2b5244-5bd7-4fbd-bd3b-a079547ead14 8Gi RWO local-path 50m
inst1-core-data-transient Bound pvc-b46d0e46-30e7-4f59-ba56-fecdfb57a3bf 1Gi RWX nfs 47m
inst1-core-data-lts Bound pvc-32ae907a-2045-4658-aa86-33512a6b9867 1Gi RWX nfs 47m
Anything else we need to know?: See the additional details after the environment section below.
Environment details:
- OpenEBS version (use kubectl get po -n openebs --show-labels):
NAME READY STATUS RESTARTS AGE LABELS
openebs-nfs-provisioner-78fbfdd8c6-nvhtq 1/1 Running 0 52m app=nfs-provisioner,chart=nfs-provisioner-0.9.0,component=nfs-provisioner,heritage=Helm,name=openebs-nfs-provisioner,openebs.io/component-name=openebs-nfs-provisioner,openebs.io/version=0.9.0,pod-template-hash=78fbfdd8c6,release=openebs
nfs-pvc-b46d0e46-30e7-4f59-ba56-fecdfb57a3bf-6777f6d4dd-zs9tr 1/1 Running 0 48m openebs.io/nfs-server=nfs-pvc-b46d0e46-30e7-4f59-ba56-fecdfb57a3bf,pod-template-hash=6777f6d4dd
nfs-pvc-32ae907a-2045-4658-aa86-33512a6b9867-68c65f964d-p8zm7 1/1 Running 0 48m openebs.io/nfs-server=nfs-pvc-32ae907a-2045-4658-aa86-33512a6b9867,pod-template-hash=68c65f964d
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14", GitCommit:"0f77da5bd4809927e15d1658fb4aa8f13ad890a5", GitTreeState:"clean", BuildDate:"2022-06-15T14:17:29Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14+k3s1", GitCommit:"982252d747f7e50701da7052383d9fd788d2b20e", GitTreeState:"clean", BuildDate:"2022-06-27T22:44:37Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: k3d 5.4.4
- OS (e.g. cat /etc/os-release): macOS Big Sur (latest)
More details: exec'ing into one of the openebs nfs-pvc pods, I get this:
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # exportfs
/nfsshare <world>
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # cat /etc/exports
/nfsshare *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # mkdir -p t
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # mount -t nfs 127.0.0.1:/nfsshare t
mount.nfs: mounting 127.0.0.1:/nfsshare failed, reason given by server: No such file or directory
mount: mounting 127.0.0.1:/nfsshare on t failed: Not supported
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # ls /
Dockerfile README.md bin dev etc home lib media mnt nfsshare opt proc root run sbin srv sys usr var
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # ls /nfsshare/
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ #
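For comparison, a sketch of the same in-pod mount with the NFS version pinned, in line with the fix mentioned below (untested here, and assuming the image's mount.nfs accepts the vers option; with fsid=0 the export becomes the NFSv4 pseudo-root, so the path to mount is / rather than /nfsshare):

mount -t nfs -o vers=4.1 127.0.0.1:/ t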
Following this guide, I now have pods that read and write correctly, thanks to the mountOptions vers=4.1 parameter... but how can I set this through the values in a HelmRelease? I don't see a section for it in the templates...
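For illustration only, this is where mountOptions sits on a hand-authored StorageClass (a sketch, not the chart's output: the provisioner, annotations and reclaim policy should be copied from the StorageClass the chart actually created, and whether the chart's values expose mountOptions is exactly the open question):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
  annotations:
    openebs.io/cas-type: nfsrpc          # copy the annotations from the chart-created class
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "local-path"
provisioner: openebs.io/nfsrpc           # verify against the chart-created StorageClass
reclaimPolicy: Retain
mountOptions:
  - vers=4.1                             # the workaround described above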
Submitted a PR to fix this; it bothered me on a local k3d cluster too.
@pentago thanks! Hope the PR gets merged soon.