
Is there any way to influence the owner, group or permissions of a mounted volume?

Open mattgrayisok opened this issue 5 years ago • 13 comments

As per title. I've got this all working with DO spaces but can't figure out if there's any method of mounting into containers with permissions other than root:root 755.

So far I've tried:

  • Running chmod and chown on the mount in a running container, which does nothing
  • Ensuring the mount point exists with a different UID/GID and permissions in the base image, which gets replaced with root:root 775
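
For example, the first attempt looks like this inside the running container (the mount path /data is hypothetical); the commands appear to succeed but have no effect on a goofys-backed mount:

```sh
# Inside the running container, against the mounted volume at /data
# (path hypothetical). On a goofys mount these do nothing, since
# ownership and mode are fixed when the bucket is mounted:
chown 1000:1000 /data
chmod 0775 /data
ls -ld /data   # still shows root:root with the mount-time mode
```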

If I've missed anything obvious let me know.

Cheers 👌

mattgrayisok avatar Jun 01 '19 12:06 mattgrayisok

Also stuck on the same problem :( The mounter was goofys; other mounters didn't even come close to working. Things I have tried:

  1. Running the container as root with the security context runAsUser: 0 (this actually worked), but one of my containers kept crashing for an unknown reason

  2. Adding the annotation pv.beta.kubernetes.io/gid: "1000" to the StorageClass (didn't work)

  3. Adding a security context to the Deployment resource via spec.template.spec.securityContext: runAsUser: 1000, runAsGroup: 1000, fsGroup: 1000, fsUser: 1000 (didn't work)

  4. Haven't tried chown-ing in an initContainer
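
For reference, attempt 3 above corresponds to a Deployment spec roughly like the following (names are illustrative; note that fsUser is not actually a field in the Kubernetes securityContext, and as discussed below none of these settings affect a goofys mount anyway):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000   # fsGroup normally adjusts volume group ownership, but goofys ignores it
      containers:
        - name: app
          image: example/app
```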

HarrisDePerceptron avatar Jul 10 '19 16:07 HarrisDePerceptron

@mattgrayisok are you using the goofys mounter?

A current limitation of goofys is that it doesn't store the owner/group/permission of each file. The user, group, and permissions are configured when the S3 bucket is mounted.

@HarrisDePerceptron Since it can't be changed after it's mounted, I think that's why none of the methods you used to change the owner/permissions worked. goofys is ignoring those operations.

https://github.com/kahing/goofys/blob/2afba14969d15c288ad258d9d3bedd1707f5c6c5/README.md#current-status

From the list of non-POSIX behaviors/limitations: "does not store file mode/owner/group; use --(dir|file)-mode or --(uid|gid) options".

goofys does support passing in --uid and --gid to specify the user and group that everything should be owned by; however, csi-s3 doesn't currently allow those to be passed in, so goofys defaults to root.

goofys also allows you to specify the permissions for directories and files, however they are currently hardcoded in csi-s3 to 0755 for directories and 0644 for files.
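
For context, when running goofys by hand these options look like this (the bucket name and mount point are hypothetical; the flags are from the goofys README):

```sh
# Mount the bucket so everything appears owned by UID/GID 1000,
# with 0775 directories and 0664 files, instead of csi-s3's
# hardcoded 0755/0644:
goofys --uid 1000 --gid 1000 \
       --dir-mode 0775 --file-mode 0664 \
       my-bucket /mnt/my-bucket
```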

https://github.com/ctrox/csi-s3/blob/6a23dbae7249bef08d93c4bfc0333c8cc0d1bf5a/pkg/s3/mounter_goofys.go#L50-L59

If I have some time, I will look into writing a pull request to allow those options to be passed in via the StorageClass (where you define the mounter you want to use), but I don't really know Go or Kubernetes, so it might take me a while.

RexMorgan avatar Aug 27 '19 23:08 RexMorgan

I need a ReadWriteMany volume, and csi-s3 works as expected on Kubernetes. However, when using the goofys mounter with AWS S3, the bucket for the PVC is created as it should be, but the container can't write to the volume folders. I get an error like this when I or the Node.js process tries to create a file (it's an official Node image container):

root@api-deployment:/tmp# touch test.js
touch: failed to close 'test.js': Invalid argument

root@api-deployment:/tmp# echo -v '' > file
bash: echo: write error: Invalid argument

If I try the inverse (uploading directly to the S3 PVC bucket's "csi-fs" folder) it works as expected: the upload succeeds and the file is visible inside the container:

root@api-deployment:/tmp# ls -la
drwxr-xr-x 2 root root  4096 Nov  3 20:45 .
drwxr-xr-x 1 root root    29 Nov  3 20:22 ..
-rw-r--r-- 1 root root 46606 Nov  3 20:23 mc.png

EDIT: If I delete "mc.png" from inside the container's bash it works: rm mc.png. Also, if I create a folder inside /tmp with mkdir, that works too... Only file creation (and copying; cp fails with the same error) appears to be blocked...

Is this related to this issue, @ctrox @RexMorgan? And if it is, is there any way I can achieve ReadWriteMany with multiple pods using goofys or another mounter?


The files I modified were the PVC example and the StorageClass:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ch.ctrox.csi.s3-driver
parameters:
  mounter: goofys
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${PV_NAME}
  namespace: ${NAMESPACE_NAME}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: ${STORAGE_SIZE}Gi
  storageClassName: csi-s3

---

apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  namespace: ${NAMESPACE_NAME}
stringData:
  accessKeyID: ${AWS_ACCESS_KEY_ID}
  secretAccessKey: ${AWS_SECRET_ACCESS_KEY}
  endpoint: ${S3_ENDPOINT_URL}
  region: ${S3_REGION}

For the others, I used the ones from this repo as they are.

kevinfaveri avatar Nov 03 '19 20:11 kevinfaveri

For those with this problem.

I changed the goofys mounter options:

https://github.com/ctrox/csi-s3/blob/master/pkg/mounter/goofys.go#L57

I changed the permissions as needed and then built my own image.

jhonsfran1165 avatar Feb 28 '22 15:02 jhonsfran1165

For those with this problem. I changed the goofys mounter options https://github.com/ctrox/csi-s3/blob/master/pkg/mounter/goofys.go#L57 changed the permissions as needed and then created my own image.

Hi @jhonsfran1165, could you elaborate on how exactly it's done? After building the new image, how/where do I refer to this image to update the mounter in my k8s pod?

AnumSheraz avatar Jul 27 '22 11:07 AnumSheraz

Hi @AnumSheraz

After creating the new image you have to modify the files csi-s3.yaml and provisioner.yaml

  1. Change the image of the DaemonSet defined in csi-s3.yaml.
  2. Change the image of the StatefulSet defined in provisioner.yaml.
  3. Deploy normally
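
Steps 1 and 2 amount to pointing both manifests at the custom image. A sketch of the relevant fragment (the image name below is illustrative):

```yaml
# In both csi-s3.yaml (DaemonSet) and provisioner.yaml (StatefulSet),
# replace the csi-s3 container image with your custom build:
containers:
  - name: csi-s3
    image: yourregistry/csi-s3:custom   # your image with the patched mounter options
```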

This is my image if you need to test https://hub.docker.com/r/jhonsfran/csi-s3

Let me know if it works.

jhonsfran1165 avatar Jul 27 '22 12:07 jhonsfran1165

Thank you @jhonsfran1165 for the quick response. I will try this approach. However, I found this csi-s3 project that supports supplying an options variable when defining the storage class: https://github.com/yandex-cloud/k8s-csi-s3/blob/bca84a06f45afa5b99da854294ae9cac52a70b75/deploy/kubernetes/examples/storageclass.yaml
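
As I understand that fork, mounter flags can be passed straight through the StorageClass, roughly like this (provisioner and mounter names as used in that fork; the flags shown are illustrative, so double-check against the linked example):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  # Flags are forwarded to the mounter, e.g. ownership/permissions:
  options: "--uid 1000 --gid 1000 --dir-mode 0775 --file-mode 0664"
```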

AnumSheraz avatar Jul 27 '22 13:07 AnumSheraz

I saw it also but it didn't work for me. That's why I changed the image.

jhonsfran1165 avatar Jul 27 '22 14:07 jhonsfran1165

Hi, when I change the image, where do I make the changes in the files in the kubernetes/deploy directory?

fallmo avatar Sep 20 '22 16:09 fallmo


Hi, help me out here. What images am I supposed to change?

fallmo avatar Sep 20 '22 23:09 fallmo

@fallmo it's been a while, but I believe you need to change the image in the DaemonSet in csi-s3.yaml and the StatefulSet in provisioner.yaml (as jhonsfran's instructions above mentioned).

You need to change it to the new custom image you build and deploy after making these code changes.

RexMorgan avatar Sep 20 '22 23:09 RexMorgan


Figured it out, thanks.

fallmo avatar Sep 21 '22 10:09 fallmo