
fsGroup securityContext does not apply to nfs mount

Open kmarokas opened this issue 7 years ago • 60 comments

The example https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs works fine if the container using the NFS mount runs as the root user. If I use securityContext to run as a non-root user, I have no write access to the mounted volume.

How to reproduce: here is nfs-busybox-rc.yaml with a securityContext added:

# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        securityContext:
          runAsUser: 10000
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

Actual result:

kubectl exec nfs-busybox-2w9bp -t -- id
uid=10000 gid=0(root) groups=10000

kubectl exec nfs-busybox-2w9bp -t -- ls -l /
total 48
<..>
drwxr-xr-x    3 root     root          4096 Aug  2 12:27 mnt

Expected result: the group ownership of the /mnt folder should be group 10000 (the fsGroup).

Mount options other than rw are not accepted in the NFS PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.23.137.115
    path: "/"
  mountOptions:
#    - rw              # allowed
#    - root_squash     # error during pod scheduling: mount.nfs: an incorrect mount option was specified
#    - all_squash      # error during pod scheduling: mount.nfs: an incorrect mount option was specified
#    - anonuid=10000   # error during pod scheduling: mount.nfs: an incorrect mount option was specified
#    - anongid=10000   # error during pod scheduling: mount.nfs: an incorrect mount option was specified

kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.3-rancher1", GitCommit:"f6320ca7027d8244abb6216fbdb73a2b3eb2f4f9", GitTreeState:"clean", BuildDate:"2018-05-29T22:28:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
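
As an aside, root_squash, all_squash, anonuid, and anongid are server-side export options rather than client mount options, which is presumably why mount.nfs rejects them in the PV's mountOptions. A rough sketch of setting the equivalent behaviour on the NFS server itself (the export path and client network below are illustrative):

# /etc/exports on the NFS server (illustrative path and network)
# all_squash maps every client uid/gid to anonuid/anongid, so files
# written from the pod end up owned by 10000:10000 on the server.
/srv/nfs  10.23.0.0/16(rw,sync,all_squash,anonuid=10000,anongid=10000)

# re-export without restarting the NFS server
exportfs -ra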

kmarokas avatar Aug 03 '18 07:08 kmarokas

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Nov 01 '18 08:11 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Dec 01 '18 09:12 fejta-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

fejta-bot avatar Dec 31 '18 09:12 fejta-bot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 31 '18 09:12 k8s-ci-robot

Why did this get closed with no resolution? I have this same issue. If there is a better solution than an init container, please someone fill me in.

jefflaplante avatar Mar 14 '19 16:03 jefflaplante

Yeah... I'm having the same issue with NFS too. securityContext.fsGroup seems to have no effect on NFS volume mounts, so you kinda have to use the initContainer approach :(
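
For anyone landing here, the initContainer approach boils down to something like this minimal sketch (the UID/GID 10000 and the volume name are just examples and should match your pod's securityContext and volumes):

  initContainers:
  - name: volume-permissions
    image: busybox
    # runs once as root before the app container starts
    securityContext:
      runAsUser: 0
    command: ["sh", "-c", "chown -R 10000:10000 /mnt && chmod -R g+rwX /mnt"]
    volumeMounts:
    - name: nfs
      mountPath: /mnt

The obvious downside is that the recursive chown runs on every pod start, which can be slow on large volumes.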

geerlingguy avatar Apr 26 '19 01:04 geerlingguy

I'm having the same problem.

mlensment avatar Apr 28 '19 21:04 mlensment

Same issue: able to write but not able to read from the NFS-mounted volume. Kubernetes reports the mount as successful, but no luck.

komaldhiman112 avatar Jul 08 '19 09:07 komaldhiman112

/reopen

varun-da avatar Jul 22 '19 18:07 varun-da

@varun-da: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 22 '19 18:07 k8s-ci-robot

/reopen

kmarokas avatar Jul 23 '19 21:07 kmarokas

@kmarokas: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 23 '19 21:07 k8s-ci-robot

thanks @kmarokas!

varun-da avatar Jul 23 '19 21:07 varun-da

/remove-lifecycle rotten

varun-da avatar Jul 23 '19 21:07 varun-da

Would love for this to be addressed! In the meantime, here's how we're dealing with it...

In this example, two pods mount an AWS EFS volume via NFS. To enable a non-root user, we make the mount point accessible via an initContainer.

---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-efs-1
  labels:
    name: alpine
spec:
  volumes:
  - name: nfs-test
    nfs:
      server: fs-xxxxxxxx.efs.us-east-1.amazonaws.com
      path: /
  securityContext:
    fsGroup: 100
    runAsGroup: 100
    runAsUser: 405
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
      - name: nfs-test
        mountPath: /nfs
      command:
      - sh
      - -c
      - (chmod 0775 /nfs; chgrp 100 /nfs)
  containers:
  - name: alpine
    image: alpine
    volumeMounts:
      - name: nfs-test
        mountPath: /nfs
    command:
      - tail
      - -f
      - /dev/null
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-efs-2
  labels:
    name: alpine
spec:
  volumes:
  - name: nfs-test
    nfs:
      server: fs-xxxxxxxx.efs.us-east-1.amazonaws.com
      path: /
  securityContext:
    supplementalGroups:
      - 100
    fsGroup: 100
    # runAsGroup: 100
    runAsUser: 405
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
      - name: nfs-test
        mountPath: /nfs
      command:
      - sh
      - -c
      - (chmod 0775 /nfs; chgrp 100 /nfs)
  containers:
  - name: alpine
    image: alpine
    volumeMounts:
      - name: nfs-test
        mountPath: /nfs
    command:
      - tail
      - -f
      - /dev/null

leopoldodonnell avatar Sep 23 '19 15:09 leopoldodonnell

The same seems to be true for cifs mounts created through a custom volume driver: https://github.com/juliohm1978/kubernetes-cifs-volumedriver/issues/8

Edit: It looks like Kubernetes does very little magic when mounting volumes. The individual volume drivers have to respect the fsGroup configuration set in the pod, and the NFS provider doesn't do that as of now.

Is https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client the place where this could be fixed?

spawnia avatar Oct 31 '19 12:10 spawnia

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jan 29 '20 13:01 fejta-bot

/remove-lifecycle stale

varun-da avatar Jan 29 '20 15:01 varun-da

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Apr 28 '20 15:04 fejta-bot

No solution after around 1.5 years? Can't believe it.

slayer01 avatar Apr 30 '20 19:04 slayer01

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar May 30 '20 20:05 fejta-bot

/remove-lifecycle rotten

Maybe this issue needs to be taken to another repository. Is https://github.com/kubernetes-incubator/external-storage the right place for it?

spawnia avatar May 30 '20 21:05 spawnia

https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

fsGroupChangePolicy: "Always"

Refer to the link above. It seems the feature is only available from Kubernetes 1.18 onwards, if I'm not mistaken.
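
For context, a minimal sketch of where the field goes in a pod spec (assuming Kubernetes 1.18+; the pod name is illustrative and the claim name matches the example above):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  securityContext:
    runAsUser: 10000
    fsGroup: 10000
    # "Always" (the default) recursively changes ownership and permissions
    # when the volume is mounted; "OnRootMismatch" skips the recursive walk
    # if the root of the volume already has the expected owner.
    fsGroupChangePolicy: "Always"
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: nfs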

raajavel avatar Jul 31 '20 11:07 raajavel

fsGroupChangePolicy: "Always"

The docs are not totally clear about this, but I understand that this is already the default behaviour.

By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted.

The section also indicates that not every volume type necessarily supports changing permissions:

This field only applies to volume types that support fsGroup controlled ownership and permissions.

spawnia avatar Jul 31 '20 12:07 spawnia

+1

ravikanth39 avatar Aug 13 '20 12:08 ravikanth39

The same issue occurs with AWS EBS gp2 volumes.

eakurdyukov avatar Aug 24 '20 19:08 eakurdyukov

+1

tetsun avatar Sep 23 '20 11:09 tetsun

I just ran into this issue today as well. Is there any workaround yet besides using an initContainer?

darose avatar Dec 15 '20 23:12 darose

+1 - facing this issue too!

euven avatar Jan 27 '21 00:01 euven

+1 - facing this issue

vishwa-vyom avatar Jan 28 '21 08:01 vishwa-vyom