
Adjust created PV owner to be the same as container user's UID

Open Diegunio opened this issue 1 year ago • 3 comments

I have deployed the hostpath-provisioner-operator and it works as expected. However, when I create a pod that runs as a specific UID and has a volume attached, the owner of the provisioned directory and files is set to root. The user inside the container therefore has no access to the directory, and the pod is killed.

I would like the owner of the directory/files to be changed automatically to the UID/GID that the container runs as. Such a feature exists in https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.
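Until ownership is handled by the provisioner itself, a common workaround is an init container that chowns the mount before the main container starts. A minimal sketch; the busybox image and the UID/GID 1001 are assumptions for illustration, matching the MongoDB example:

```yaml
# Hypothetical workaround: fix volume ownership in an init container.
# Add under spec.template.spec of the StatefulSet.
initContainers:
  - name: fix-permissions
    image: busybox:1.36        # assumed image
    command: ["sh", "-c", "chown -R 1001:1001 /data/db"]
    volumeMounts:
      - mountPath: /data/db
        name: data
    securityContext:
      runAsUser: 0             # needs root to chown files owned by root
```

This runs once per pod start and adds startup latency proportional to the number of files, so it is a stopgap rather than a fix.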

The following YAML demonstrates the improper behavior:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb7
  labels:
    name: mongodb7
spec:
  serviceName: "mongodb-server"
  replicas: 1
  selector:
    matchLabels:
      app: mongodb7
  template:
    metadata:
      labels:
        app: mongodb7
    spec:
      containers:
        - name: mongodb-server
          image: (mongodb7-image)
          imagePullPolicy: Always
          ports:
            - name: mongodb-port
              containerPort: 27017
              protocol: TCP
          volumeMounts:
            - mountPath: /data/db
              name: data
            - mountPath: /var/log
              name: audit
      securityContext:
        fsGroup: 1001
      imagePullSecrets:
        - name: dockercfg-secret
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: hostpath-csi
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 250Gi
    - metadata:
        name: audit
      spec:
        storageClassName: hostpath-csi
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
```

After the pod is created through the StatefulSet, the owner of the directory on the node filesystem is root, not mongodb (UID 1001) as the container requires in order to read the database files.

Directory structure on the node where the PV lives:

```
drwxr-xr-x. 3 root root system_u:object_r:container_file_t:s0     17 Mar  6 05:50 data
	drwxr-x---. 6 root root system_u:object_r:container_file_t:s0 4096 Mar  6 07:59 csi
		drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c20,c36 6 Mar  6 07:33 pvc-bcac971b-6150-408f-858a-19565e92a5d
```

Diegunio avatar Mar 06 '24 09:03 Diegunio

Hi, this should already work as of https://github.com/kubevirt/hostpath-provisioner-operator/pull/189; in fact we have tests to ensure it is working. In particular, can you check that your CSIDriver resource has the proper fsGroup policy set (if you use the operator it should be set already)? Also note that your example uses RWX for the data volume; I assume that is a copy-and-paste error, since hpp does not support RWX at all and the PVC will fail to bind to a PV.
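For reference, the relevant field is `fsGroupPolicy` on the CSIDriver object; with `File`, kubelet recursively applies the pod's `fsGroup` to the volume contents on mount. A sketch of what the resource should look like (the driver name `kubevirt.io.hostpath-provisioner` is assumed here; check the actual name with `kubectl get csidriver`):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: kubevirt.io.hostpath-provisioner   # assumed driver name
spec:
  fsGroupPolicy: File   # kubelet chowns/chmods volume contents to match fsGroup
```

You can inspect the deployed value with `kubectl get csidriver <name> -o jsonpath='{.spec.fsGroupPolicy}'`. If it reports `None` or `ReadWriteOnceWithFSType`, the fsGroup from the pod's securityContext may not be applied.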

If there are any permission issues with the chmod that is executed, the pods of the daemonset will indicate what the problem is. I believe the csi-provisioner container is the one that calls the chmod.

awels avatar Mar 21 '24 17:03 awels

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot avatar Jun 19 '24 17:06 kubevirt-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot avatar Jul 19 '24 18:07 kubevirt-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot avatar Aug 18 '24 18:08 kubevirt-bot

@kubevirt-bot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

kubevirt-bot avatar Aug 18 '24 18:08 kubevirt-bot