`fsGroup` is not applied correctly to already-existing content in PVCs
Hello 👋,
I'm using the local-path-provisioner as part of k3d to test and validate our development, and I discovered something strange about its conformance to the `fsGroup` parameter.
All the code used in this issue is available here.
With an EKS cluster
First, I deploy an app that simply writes some files to a PVC. The important settings are:
fsGroup: 4000
runAsGroup: 4000
runAsUser: 1000
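For reference, in the Deployment's pod spec this corresponds to something like the following (a minimal sketch; the image and command are illustrative, the real manifests live in the linked repository):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fsgroup-test
spec:
  selector:
    matchLabels:
      app: fsgroup-test
  template:
    metadata:
      labels:
        app: fsgroup-test
    spec:
      securityContext:
        runAsUser: 1000     # files get created as UID 1000
        runAsGroup: 4000
        fsGroup: 4000       # volume content should be group-owned by GID 4000
      containers:
        - name: fsgroup-test
          image: busybox:1.36   # illustrative
          command: ["sh", "-c", "ls -la /test && sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /test
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: fsgroup-test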
$ kubectl apply -k eks/01-write/
namespace/kdavin-test-fsgroup created
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test created
deployment.apps/fsgroup-test created
$ kubectl logs fsgroup-test-dd796cfdd-87fbm -f
total 20
drwxrwsr-x 3 root 4000 4096 Jun 1 14:28 .
drwxr-xr-x 1 root root 43 Jun 1 14:28 ..
drwxrws--- 2 root 4000 16384 Jun 1 14:28 lost+found
Hello from fsgroup-test
total 24
drwxrwsr-x 3 root 4000 4096 Jun 1 14:28 .
drwxr-xr-x 1 root root 43 Jun 1 14:28 ..
-rw-r--r-- 1 1000 4000 0 Jun 1 14:28 foo
drwxrws--- 2 root 4000 16384 Jun 1 14:28 lost+found
-r-xr-xr-x 1 1000 4000 18 Jun 1 14:28 test.txt
-rw-r--r-- 1 1000 4000 0 Jun 1 14:28 /test/a/b/c/subfile.txt
So files are owned by user 1000, with group 4000.
Then, I redeploy the app with different securityContext settings:
fsGroup: 6000
runAsGroup: 6000
runAsUser: 1000
$ kubectl apply -k eks/02-read/
namespace/kdavin-test-fsgroup unchanged
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test unchanged
deployment.apps/fsgroup-test configured
$ kubectl logs fsgroup-test-77bcb759db-t7tmd
total 28
drwxrwsr-x 4 root 6000 4096 Jun 1 14:28 .
drwxr-xr-x 1 root root 43 Jun 1 14:30 ..
drwxrwsr-x 3 1000 6000 4096 Jun 1 14:28 a
-rw-rw-r-- 1 1000 6000 0 Jun 1 14:28 foo
drwxrws--- 2 root 6000 16384 Jun 1 14:28 lost+found
-rwxrwxr-x 1 1000 6000 18 Jun 1 14:28 test.txt
-rw-rw-r-- 1 1000 6000 0 Jun 1 14:28 /test/a/b/c/subfile.txt
We can see that user 1000 is still the owner, but the group owner is now 6000 instead of 4000, following the Kubernetes `fsGroup` spec.
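Worth noting: this recursive relabeling is controlled by securityContext.fsGroupChangePolicy, which defaults to Always, i.e. the kubelet recursively changes group ownership and permissions every time the volume is mounted:

securityContext:
  runAsUser: 1000
  runAsGroup: 6000
  fsGroup: 6000
  fsGroupChangePolicy: Always   # default; OnRootMismatch skips the recursive pass when the root already matches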
With k3d (presumably k3s)
Now I repeat the same thing with k3d, using the same settings. First with:
fsGroup: 4000
runAsGroup: 4000
runAsUser: 1000
$ kubectl apply -k k3s/01-write/
namespace/kdavin-test-fsgroup created
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test created
deployment.apps/fsgroup-test created
$ kubectl logs fsgroup-test-79b59c9988-9jmrl
total 8
drwxrwxrwx 2 root root 4096 Jun 1 14:33 .
drwxr-xr-x 1 root root 4096 Jun 1 14:34 ..
Hello from fsgroup-test
total 12
drwxrwxrwx 2 root root 4096 Jun 1 14:34 .
drwxr-xr-x 1 root root 4096 Jun 1 14:34 ..
-rw-r--r-- 1 1000 4000 0 Jun 1 14:34 foo
-r-xr-xr-x 1 1000 4000 18 Jun 1 14:34 test.txt
-rw-r--r-- 1 1000 4000 0 Jun 1 14:34 /test/a/b/c/subfile.txt
Everything is OK, with the same values as on EKS. But then I apply the same change, with the following settings:
fsGroup: 6000
runAsGroup: 6000
runAsUser: 1000
$ kubectl apply -k k3s/02-read/
namespace/kdavin-test-fsgroup unchanged
configmap/fsgroup-test-789h6hh8dd created
persistentvolumeclaim/fsgroup-test unchanged
deployment.apps/fsgroup-test configured
$ kubectl logs fsgroup-test-85b478c545-l7znn
total 16
drwxrwxrwx 3 root root 4096 Jun 1 14:34 .
drwxr-xr-x 1 root root 4096 Jun 1 14:35 ..
drwxr-xr-x 3 1000 4000 4096 Jun 1 14:34 a
-rw-r--r-- 1 1000 4000 0 Jun 1 14:34 foo
-r-xr-xr-x 1 1000 4000 18 Jun 1 14:34 test.txt
-rw-r--r-- 1 1000 4000 0 Jun 1 14:34 /test/a/b/c/subfile.txt
Files are still group-owned by 4000, whereas they should now be group-owned by 6000.
Conclusion
Is this a bug or an intended limitation of the local-path-provisioner?
If it is intended, could we state it in the README?
At the implementation level, could we, for example, provide the fsGroup parameter to the setup script as an environment variable to make the setup phase compatible? (See the sketch below.)
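Purely as a sketch of what I have in mind, the setup script in the local-path-config ConfigMap could honor such a variable. VOL_FS_GROUP is hypothetical and does not exist in v0.0.24; VOL_DIR is, if I read the code correctly, already passed to the helper pod:

setup: |-
  #!/bin/sh
  set -eu
  mkdir -m 0777 -p "$VOL_DIR"
  # VOL_FS_GROUP is hypothetical: the provisioner would have to read
  # pod.spec.securityContext.fsGroup and expose it to the helper pod
  if [ -n "${VOL_FS_GROUP:-}" ]; then
    chown ":${VOL_FS_GROUP}" "$VOL_DIR"   # group-own the volume root
    chmod 2775 "$VOL_DIR"                 # setgid, so new files inherit the group
  fi

This would only cover freshly provisioned volumes, not content that already exists on them, but it would fix the first-mount case.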
As users, can we do something to work around this limitation?
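The only user-side workaround I know of is an initContainer that fixes ownership before the app starts; something like this (names and image are illustrative):

initContainers:
  - name: fix-volume-group
    image: busybox:1.36
    command: ["sh", "-c", "chown -R :6000 /test && chmod -R g+rwX /test"]
    securityContext:
      runAsUser: 0   # needs root to chown files created by other users
    volumeMounts:
      - name: data
        mountPath: /test

The downside is that the GID (6000 here) has to be kept in sync with fsGroup by hand, which is exactly the duplication fsGroup is supposed to avoid.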
Additional details:
- local-path-provisioner version: v0.0.24
- k3d version: v5.5.1
- k3s version: v1.26.4-k3s1 (default)
If you need any extra details, don't hesitate to ask.
/cc @tomdcc @mfredenhagen @pio-kol @gurbuzali @athkalia @skurtzemann @deepy @robmoore-i
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
still up
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Still relevant
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Never been so relevant. With some help, we could implement a fix.
The issue is still relevant. The securityContext.fsGroup field is not respected. Is this a limitation of the underlying local or hostPath PV?
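If I understand correctly, yes: the kubelet only applies fsGroup to volume types that support ownership management, and hostPath volumes are excluded from it. local-path-provisioner creates hostPath-backed PVs (at least by default in v0.0.24), roughly:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-...   # generated name, elided
spec:
  hostPath:       # hostPath source: the kubelet skips fsGroup handling here
    path: /var/lib/rancher/k3s/storage/pvc-...   # illustrative
    type: DirectoryOrCreate

That would explain why fsGroup is silently ignored on k3s while it is honored by the EBS-backed volumes on EKS.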
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
👋 and open to contribute if some guidance is provided 😇
Hello @davinkevin. Feel free to contribute a PR. We will review it. Thank you.
Please, can you provide some guidance? I would like to discuss it before coding instead of rushing 😉
If you have an idea of a plan, it could be enough 😇
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Up
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
More relevant than ever, and I'm still open to contributing if I get minimal guidance.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
👋
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
👋 and see you in 3 months 😇
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
How are you this quarter? 😇