fsGroup securityContext does not apply to nfs mount
The example https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs works fine if the container using the NFS mount runs as the root user. If I use securityContext to run as a non-root user, then I have no write access to the mounted volume.
How to reproduce: here is the nfs-busybox-rc.yaml with a securityContext added:
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        securityContext:
          runAsUser: 10000
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs
Actual result:
kubectl exec nfs-busybox-2w9bp -t -- id
uid=10000 gid=0(root) groups=10000
kubectl exec nfs-busybox-2w9bp -t -- ls -l /
total 48
<..>
drwxr-xr-x 3 root root 4096 Aug 2 12:27 mnt
Expected result: the group ownership of the /mnt folder should be group 10000.
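For contrast, fsGroup does take effect on volume types whose plugin supports kubelet-managed ownership. A minimal sketch using an emptyDir volume (hypothetical pod name, for illustration only), where /mnt does come up group-owned by 10000:

apiVersion: v1
kind: Pod
metadata:
  # hypothetical name, for illustration only
  name: fsgroup-emptydir-check
spec:
  securityContext:
    runAsUser: 10000
    fsGroup: 10000
  containers:
    - name: busybox
      image: busybox
      # prints the ownership of the mount point, then idles
      command: ["sh", "-c", "ls -ld /mnt && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /mnt
  volumes:
    - name: scratch
      # emptyDir is chgrp'd to the fsGroup by the kubelet at mount time
      emptyDir: {}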
Also, mount options other than rw are not allowed in the NFS PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.23.137.115
    path: "/"
  mountOptions:
    # - rw              # allowed
    # - root_squash     # error during pod scheduling: mount.nfs: an incorrect mount option was specified
    # - all_squash      # error during pod scheduling: mount.nfs: an incorrect mount option was specified
    # - anonuid=10000   # error during pod scheduling: mount.nfs: an incorrect mount option was specified
    # - anongid=10000   # error during pod scheduling: mount.nfs: an incorrect mount option was specified
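Note that root_squash, all_squash, anonuid and anongid are server-side export options rather than client mount options, which is presumably why mount.nfs rejects them here. If you control the NFS server, the squashing can be configured in /etc/exports instead; a sketch, assuming a Linux NFS server and a hypothetical export path and network range:

# /etc/exports on the NFS server (hypothetical path and CIDR)
# map all client users to uid/gid 10000 so files are writable by the pod user
/srv/nfs  10.23.0.0/16(rw,all_squash,anonuid=10000,anongid=10000)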
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.3-rancher1", GitCommit:"f6320ca7027d8244abb6216fbdb73a2b3eb2f4f9", GitTreeState:"clean", BuildDate:"2018-05-29T22:28:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
@fejta-bot: Closing this issue.
Why did this get closed with no resolution? I have this same issue. If there is a better solution than an init container, please someone fill me in.
Yeah... I'm having the same issue with NFS too. securityContext.fsGroup seems to have no effect on NFS volume mounts, so you kinda have to use the initContainer approach :(
I'm having the same problem.
Same issue: able to write but not able to read from the NFS-mounted volume. Kubernetes shows success in the mounting process, but no luck.
/reopen
@varun-da: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@kmarokas: Reopened this issue.
thanks @kmarokas!
/remove-lifecycle rotten
Would love for this to be addressed! In the meantime, here's how we're dealing with it...
In this example, two pods mount an AWS EFS volume via NFS. To make the mount point writable by a non-root user, an initContainer running as root fixes its group and permissions first (the first pod relies on runAsGroup, the second on supplementalGroups):
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-efs-1
  labels:
    name: alpine
spec:
  volumes:
    - name: nfs-test
      nfs:
        server: fs-xxxxxxxx.efs.us-east-1.amazonaws.com
        path: /
  securityContext:
    fsGroup: 100
    runAsGroup: 100
    runAsUser: 405
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: nfs-test
          mountPath: /nfs
      command:
        - sh
        - -c
        - (chmod 0775 /nfs; chgrp 100 /nfs)
  containers:
    - name: alpine
      image: alpine
      volumeMounts:
        - name: nfs-test
          mountPath: /nfs
      command:
        - tail
        - -f
        - /dev/null
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-efs-2
  labels:
    name: alpine
spec:
  volumes:
    - name: nfs-test
      nfs:
        server: fs-xxxxxxxx.efs.us-east-1.amazonaws.com
        path: /
  securityContext:
    supplementalGroups:
      - 100
    fsGroup: 100
    # runAsGroup: 100
    runAsUser: 405
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: nfs-test
          mountPath: /nfs
      command:
        - sh
        - -c
        - (chmod 0775 /nfs; chgrp 100 /nfs)
  containers:
    - name: alpine
      image: alpine
      volumeMounts:
        - name: nfs-test
          mountPath: /nfs
      command:
        - tail
        - -f
        - /dev/null
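Either way, a quick sanity check from inside a running pod (pod name as above; the exact id output depends on the image's /etc/passwd, so treat it as approximate):

kubectl exec alpine-efs-1 -- id
# expect uid 405 and group 100, e.g. uid=405(guest) gid=100(users) groups=100(users)
kubectl exec alpine-efs-1 -- touch /nfs/write-test
# should succeed once the init container has fixed group and mode on /nfs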
The same seems to be true for CIFS mounts created through a custom volume driver: https://github.com/juliohm1978/kubernetes-cifs-volumedriver/issues/8
Edit: it looks like Kubernetes does very little magic when mounting volumes. The individual volume plugins have to respect the fsGroup configuration set on the pod, and the NFS plugin doesn't do that as of now.
Is https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client the place where this could be fixed?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
No solution after around 1.5 years? Can't believe it.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
/remove-lifecycle rotten
Maybe this issue needs to be taken to another repository. Is https://github.com/kubernetes-incubator/external-storage the right place for it?
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
fsGroupChangePolicy: "Always"
Refer to the link above. But it seems that the feature is only available from Kubernetes 1.18, if I'm not wrong.
fsGroupChangePolicy: "Always"
The docs are not totally clear about this, but I understand that this is already the default behaviour.
By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted.
The section also indicates that not every volume type necessarily supports changing permissions:
This field only applies to volume types that support fsGroup controlled ownership and permissions.
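For completeness, this is what setting the policy looks like; a sketch with hypothetical names. fsGroupChangePolicy landed in Kubernetes 1.18 and, per the docs quoted above, only applies to volume types that support fsGroup-controlled ownership in the first place, so it would not change anything for these NFS mounts:

apiVersion: v1
kind: Pod
metadata:
  # hypothetical name, for illustration only
  name: fsgroup-policy-demo
spec:
  securityContext:
    runAsUser: 10000
    fsGroup: 10000
    # "Always" (the default) recursively chowns/chmods the volume on every mount;
    # "OnRootMismatch" skips that when the volume root already matches.
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        # hypothetical claim name
        claimName: data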
+1
The same issue occurs with AWS EBS gp2 volumes.
+1
I just ran into this issue today as well. Is there any workaround yet besides using an initContainer?
+1 - facing this issue too!
+1 - facing this issue