nfs-subdir-external-provisioner
Use provisioner for existing NFS shares
Is it possible to use the provisioner for existing NFS shares? I have an NFS server with folders already configured. All I need is to create PVs with server/path/mount options dynamically from a PVC, without creating any subfolder in the mounted volume. Is that possible with this provisioner?
Hi @abinet, I'm new to this project, but your question sounds very similar to something I was trying to do.
I'm curious what is the use case where you need to create PVs with server/path/mount-options dynamically instead of creating it once, statically, and sharing a PVC with all your pods?
Hi @vicyap, thank you for the question.
I am aware of the discussions here: https://github.com/kubernetes/kubernetes/issues/60729 https://github.com/kubernetes/community/pull/321#issuecomment-279822350
However, it is all about responsibilities: creating PVs manually must be done by a cluster admin only, while creating PVs dynamically from a PVC can be done by a cluster user with namespace-limited permissions. We cannot use plain NFS v1 Volumes because of specific mount options (ver3, etc.), and as a cluster admin I don't want to create a PV every time somebody needs an existing NFS share. Instead, it would be great to just create a StorageClass with the necessary mountOptions and let users create PVCs.
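To make the desired admin-side workflow concrete, here is a minimal sketch of the kind of StorageClass meant above, assuming this provisioner is installed under its default name k8s-sigs.io/nfs-subdir-external-provisioner and that NFSv3 is the mount option in question (class name and options are placeholders, not taken from this issue):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-v3-share            # placeholder name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # assumed default PROVISIONER_NAME
mountOptions:
  - nfsvers=3                   # the "ver3" requirement mentioned above
parameters:
  archiveOnDelete: "false"

With such a class in place, users with namespace-limited permissions would only ever create PVCs; the open question in this issue is how to stop the provisioner from creating a subfolder per claim.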
Is this project closer to what you're looking for? https://github.com/kubernetes-csi/csi-driver-nfs/tree/master/deploy/example
Unfortunately the csi-driver has the same limitation: for every PVC it creates a subfolder in the NFS server's shared folder and does not allow re-use of an existing one.
Hi, I've come looking for the same feature and my use case is as follows:
I have torn down my entire cluster and rebuilt it because I'm experimenting with IaC. I can recreate my deployments from YAML in a git repo, but I cannot reconnect them to the data of the old PVs, which is still sitting on my NFS server. I would like to be able to update my deployments to have a permanent shared folder name so that re-creating them from scratch connects them to the same data.
I believe this would match the behaviour of the existingClaim feature of grafana PVCs described here https://medium.com/@kevincoakley/reusable-persistent-volumes-with-the-existingclaim-option-for-the-grafana-prometheus-operator-84568b96315
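For reference, a hedged sketch of the Helm values that feature refers to, assuming the Grafana chart's persistence block (the claim name is a placeholder):

persistence:
  enabled: true
  existingClaim: grafana-data   # pre-existing PVC; reinstalling the chart reattaches to the same data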
I have the same need for both NFS and SMB protocols.
Whereas for SMB it is still a work in progress (see https://github.com/kubernetes-csi/csi-driver-smb/issues/398), for NFS I've solved it by using the pathPattern parameter in the StorageClass definition and by mounting /persistentvolumes on an emptyDir volume in the provisioner.
@ppenzo can you please elaborate on your workaround?
I thought one could set customPath to an empty string to explicitly not create the directory, but it won't work: a default name is used instead (implemented in https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/83):
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/0bd4e87b346018eb588cf7e23e6962f195b68a0a/cmd/nfs-subdir-external-provisioner/provisioner.go#L105-L112
@vavdoshka: In the provisioner deployment (see: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/deploy/deployment.yaml) mount /persistentvolumes on an ephemeral volume, i.e. put

volumes:
  - name: nfs-client-root
    emptyDir: {}

in the deployment (a fuller sketch of the patched volumes section is at the end of this comment). Then define the corresponding StorageClass with these parameters:
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.annotations.nfs.io/storage-path}"
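Put together, a hedged sketch of the full StorageClass might look like this (the class name my-filer-sc is a placeholder matching the PVC below; the provisioner name is assumed to be the default k8s-sigs.io/nfs-subdir-external-provisioner, so adjust it to whatever PROVISIONER_NAME your deployment uses):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-filer-sc                  # placeholder class name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # assumed default
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.annotations.nfs.io/storage-path}"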
Then define the PVC with the annotation nfs.io/storage-path referring to the NFS path on the NFS server/filer referenced by your provisioner:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  annotations:
    nfs.io/storage-path: my_share/path/on/filer
  name: mypvc
spec:
  storageClassName: my-filer-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
In this way the provisioner, which AFAIK is not meant for using existing shares, creates/deletes the corresponding NFS path inside the ephemeral volume instead of on the filer. Nevertheless, as long as the path exists on the NFS server, everything works fine.
Obviously you need a separate StorageClass and provisioner for each NFS server/NAS filer, but this shouldn't be an issue.
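As mentioned above, a hedged sketch of how the patched provisioner Deployment fragment might look, assuming the stock deploy/deployment.yaml layout (image tag, server address and env values are placeholders; the key point is that NFS_SERVER/NFS_PATH still describe the real filer used in the generated PVs, while the container's own /persistentvolumes mount becomes an emptyDir):

      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   # example tag
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER            # the PVs created from PVCs still point here
              value: my-filer.example.com
            - name: NFS_PATH
              value: /
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-client-root
          emptyDir: {}                    # subfolder creation/archival happens here, not on the filer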
thanks @ppenzo
Interestingly enough, with the last officially released version of the provisioner, v4.0.2, this works for me even without the "ephemeral volume" patch. The behavior is: if there is no nfs.io/storage-path annotation in the PVC, then no namespaced directory gets created and the PV is mounted at the root of the share. That will certainly stop working as of version 4.0.8 because of this change: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/83/commits/b8e203661b0b2d3be35342be8869de2125782ebc. So it seems the "ephemeral volume" patch will indeed be the only option until a dedicated option is implemented.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Your (my) use case does work with the csi-driver-nfs provisioner if you use the Static Provisioning config. Their docs are pretty rough, but there's a decent example here.
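For anyone landing here, a hedged sketch of what static provisioning with csi-driver-nfs roughly looks like (the driver name nfs.csi.k8s.io is the upstream default; server, share and object names are placeholders, so check the csi-driver-nfs static provisioning docs for the authoritative example):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-share-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain      # keep the data when the claim is deleted
  mountOptions:
    - nfsvers=3
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: my-filer.example.com/my_share/path/on/filer   # must be unique per PV
    volumeAttributes:
      server: my-filer.example.com
      share: /my_share/path/on/filer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-share-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                       # empty class pins the claim to the static PV
  volumeName: existing-share-pv
  resources:
    requests:
      storage: 10Gi

Note that this still means one PV object per share, created by someone with cluster-level rights, which is exactly the trade-off discussed earlier in this thread.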
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten