aws-efs-csi-driver
Make EFS path creation behavior configurable
Is your feature request related to a problem? Please describe. With the subpath feature implemented, users will be able to mount a subpath of an EFS filesystem into a container as a persistent volume. However, users must create each subpath before they can consume the PV from a container, which makes the feature hard to use.
Describe the solution you'd like in detail
Similar to the hostPath volume, add an optional `type` field to the PV's `volumeAttributes`:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  ...
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-c2a43e69:/a/b
    volumeAttributes:
      type: "Directory"
```
`volumeAttributes.type` usage table:

| Value | Behavior |
|---|---|
| Empty string (default) | For backward compatibility; no checks are performed before mounting the EFS volume. |
| `DirectoryOrCreate` | If nothing exists at the given path, an empty directory is created there as needed with permissions set to 0755, with the same group and ownership as the Pod. |
| `Directory` | A directory must exist at the given path. |
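If this proposal were adopted, a PV that auto-creates its subpath might look like the following sketch. Note that the `type` attribute here is the proposed, not yet implemented, field:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-autocreate
spec:
  ...
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-c2a43e69:/a/b
    volumeAttributes:
      type: "DirectoryOrCreate"  # proposed: create /a/b with 0755 if missing
```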
/cc @wongma7
Additional Information
We will need to consider leveraging Pod Info on Mount to populate the pod's uid/gid to the driver when creating the directory dynamically.
Good point about the permissions. It looks like Pod Info on Mount won't give us anything useful though; the UID there is just the pod's unique UID: https://github.com/kubernetes/kubernetes/blob/665e76d97623447d13bb3b33b68591a985b49c9d/pkg/volume/csi/csi_mounter.go#L324.
We can't leverage fsGroup to change the GID owner of the directory on mount either, because EFS is RWX, not RWO, so fsGroup will have no effect (imagine if pod A mounted it, then pod B mounted it, and pod A lost permission).
EFS has no root squashing, so on vanilla clusters where containers run as root (i.e. no PSP is configured with `runAsUser: MustRunAsNonRoot`, see https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups), all containers will have full access: https://docs.aws.amazon.com/efs/latest/ug/accessing-fs-nfs-permissions.html#accessing-fs-nfs-permissions-root-user
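As a point of reference, a cluster operator can force containers off root with a standard pod-level securityContext. This is a hedged sketch of the mitigation mentioned above using standard Kubernetes fields; the pod and claim names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  securityContext:
    runAsUser: 1000    # run as non-root so EFS's lack of root squashing doesn't grant full access
    runAsGroup: 1000
    runAsNonRoot: true
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: efs-volume
          mountPath: /data
  volumes:
    - name: efs-volume
      persistentVolumeClaim:
        claimName: efs-claim   # illustrative claim name
```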
To be clear, this was the original intent of https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/46. Requiring the directory to exist prior to volume creation is really not helpful from a Kubernetes standpoint.
@wreed4 Yep, we will address the issue here. Speaking of creating the directory dynamically, there are several use cases I can think of:
- The user manually creates the PV and specifies the path in `spec.csi.volumeAttributes`; the path is created dynamically even if it does not exist on the EFS filesystem yet.
- The user creates a PVC, which triggers PV creation dynamically; the path is set to a random UUID (e.g. the PV name) and is created dynamically if it is missing.
Which one is your original intent?
Also for creating the path dynamically, what's the expected uid/gid and file permission mode for the directory?
I think the original intent is to mimic the NFS behavior and simply be able to set a path to mount in the target (see https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml), which is option 1 in your comment, @leakingtapan.
I suppose uid/gid and mode should be 0:0 0777, otherwise I do not see how it could work.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
@leakingtapan I actually believe both use-cases you outline above are useful and should both be possible, where the second is a special case of the first.
You bring up a really good point with the permissions, though. We will be mounting these volumes as different customers, so it is important that custA cannot read custB's data. However, we intend to solve this problem at the k8s level with PVCs and namespaces (i.e., pods from custA cannot mount custB's PVC because it's in a different namespace). Granted, it's theoretically possible for pvA to point to the same underlying data as pvB, but I think as long as we don't do that and can demonstrate how we avoid that situation, we'd be okay.
The ability to create two PVs pointing at the same underlying data may actually end up proving useful to us, as it allows us to copy files between two environments (namespaces) of the same customer in a single pod. If we couldn't do this, we'd have to launch a pod in ns1, mount PVC1, copy files to a transport PV, then launch pod2 in ns2, mount PVC2, and copy files from the transport PV into PVC2.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Hello,
Is there any news on the requirement to create each subpath beforehand before consuming the PV from a container? Is it possible to auto-create the directory yet?
Thank you
I have the same question.
Hi,
We are also looking for this feature.
Thanks
Hi,
Waiting for this feature to be implemented.
Thanks, Sergiu
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:

> Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can we get this issue reopened? Currently waiting on this before switching from the deprecated efs-provisioner.
/reopen /lifecycle frozen
@leakingtapan: Reopened this issue.
In response to this:

> /reopen /lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Needing to manually create the Access Points (directories) beforehand in order to use `subPath` was a hard, unexpected surprise 👾
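For anyone hitting this: with static provisioning, a pre-created EFS Access Point can be mounted by embedding its ID in the volume handle. A minimal sketch, assuming the filesystem and access point already exist (both IDs below are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-ap
spec:
  capacity:
    storage: 5Gi            # ignored by EFS, but required by the PV API
  accessModes:
    - ReadWriteMany
  csi:
    driver: efs.csi.aws.com
    # Handle format: <file-system-id>::<access-point-id>
    volumeHandle: fs-c2a43e69::fsap-0123456789abcdef0
```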
> User creates PVC which will trigger PV creation dynamically. And the path will be set to a random UUID (eg. PV name) and the path will be created dynamically if it is missing.

This seems similar to the NFS Subdir External Provisioner's `pathPattern`. This is an old issue, but I'm not aware of any way to do this in the EFS CSI driver today. Dynamic provisioning using access points will be preferred for most users, but it's not exactly the same.
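For reference, that access-point-based dynamic provisioning is configured through a StorageClass. A minimal sketch using the driver's documented `efs-ap` provisioning mode; the filesystem ID is illustrative:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap          # provision one EFS access point per PV
  fileSystemId: fs-c2a43e69         # illustrative filesystem ID
  directoryPerms: "700"             # permissions for each created root directory
  basePath: "/dynamic_provisioning" # optional: parent directory for the access points
```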
Auto-creation of subpaths is a sorely missing feature.
Ref: https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/538
+1
Also looking for this.
/kind feature
Any update on this? Looking for this feature as well.