
Cannot mount provisioned block device

Open modzilla99 opened this issue 1 year ago • 3 comments

Bug Report

Hey there,

I've got a weird one. I am trying to use your CSI driver on k0s, and for some reason the mounting part does not work correctly. There is no mention of a failed mount in the logs, but when I SSH into my node the disk is not mounted anywhere. Unmounting does not work either when I do the mounting manually. Since I am running a RHEL derivative (AlmaLinux), I disabled SELinux to be sure, but I don't think it is the cause in this case.

Description

This is my deployment:

~: kubectl get no -o wide
NAME          STATUS   ROLES    AGE    VERSION       INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-node-01   Ready    <none>   469d   v1.29.4+k0s   10.0.0.32     <none>        AlmaLinux 9.4 (Seafoam Ocelot)   5.14.0-362.24.2.el9_3.x86_64   containerd://1.7.15

~: cat kustomization.yaml
resources:
  - https://github.com/sergelogvinov/proxmox-csi-plugin/raw/v0.7.0/docs/deploy/proxmox-csi-plugin-release.yml
  - cloud-config.yaml
  - storageclasses.yaml
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: proxmox-csi-plugin-node
        namespace: csi-proxmox
      spec:
        template:
          spec:
            containers:
            - name: csi-node-driver-registrar
              args:
                - '-v=5'
                - '--csi-address=unix:///csi/csi.sock'
                - '--kubelet-registration-path=/var/lib/k0s/kubelet/plugins/csi.proxmox.sinextra.dev/csi.sock'
            volumes:
              - name: socket
                hostPath:
                  path: /var/lib/k0s/kubelet/plugins/csi.proxmox.sinextra.dev/
                  type: DirectoryOrCreate
              - name: registration
                hostPath:
                  path: /var/lib/k0s/kubelet/plugins_registry/
                  type: Directory
              - name: kubelet
                hostPath:
                  path: /var/lib/k0s/kubelet
                  type: Directory
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-thin
provisioner: csi.proxmox.sinextra.dev
parameters:
  csi.storage.k8s.io/fstype: xfs
  storage: ssd_thin
  ssd: "true"
mountOptions:
- noatime
- discard
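
For reference, a minimal PVC that would exercise this StorageClass (illustrative name; the 1Gi request matches the `capacity_range` seen in the controller logs below):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc            # hypothetical name, not from the original report
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd-thin
  resources:
    requests:
      storage: 1Gi
```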

Logs

Controller: [kubectl logs -c proxmox-csi-plugin-controller proxmox-csi-plugin-controller-...]

CSI Attacher

I0819 19:29:15.677859       1 controller.go:210] Started VA processing "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:15.677898       1 csi_handler.go:224] CSIHandler: processing VA "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:15.677906       1 csi_handler.go:251] Attaching "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:15.677911       1 csi_handler.go:421] Starting attach operation for "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:15.677980       1 csi_handler.go:341] Adding finalizer to PV "pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5"
I0819 19:29:15.682072       1 csi_handler.go:350] PV finalizer added to "pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5"
I0819 19:29:15.682111       1 csi_handler.go:740] Found NodeID k8s-node-01 in CSINode k8s-node-01
I0819 19:29:15.682125       1 csi_handler.go:312] VA finalizer added to "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:15.682131       1 csi_handler.go:326] NodeID annotation added to "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:15.685367       1 connection.go:195] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0819 19:29:15.685380       1 connection.go:196] GRPC request: {"node_id":"k8s-node-01","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs","mount_flags":["noatime","discard"]}},"access_mode":{"mode":7}},"volume_context":{"ssd":"true","storage":"ssd_thin","storage.kubernetes.io/csiProvisionerIdentity":"1723649759130-9266-csi.proxmox.sinextra.dev"},"volume_id":"kie/pve-node1/ssd_thin/vm-9999-pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5"}
I0819 19:29:16.514700       1 leaderelection.go:281] successfully renewed lease csi-proxmox/external-attacher-leader-csi-proxmox-sinextra-dev
I0819 19:29:17.933761       1 connection.go:202] GRPC response: {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443031","lun":"1"}}
I0819 19:29:17.933774       1 connection.go:203] GRPC error: <nil>
I0819 19:29:17.933780       1 csi_handler.go:264] Attached "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:17.933785       1 util.go:38] Marking as attached "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:17.937476       1 util.go:52] Marked as attached "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:17.937490       1 csi_handler.go:270] Fully attached "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:17.937497       1 csi_handler.go:240] CSIHandler: finished processing "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:17.937523       1 controller.go:210] Started VA processing "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:17.937530       1 csi_handler.go:224] CSIHandler: processing VA "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"
I0819 19:29:17.937543       1 csi_handler.go:246] "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38" is already attached
I0819 19:29:17.937549       1 csi_handler.go:240] CSIHandler: finished processing "csi-4c9ba24a8356f2696beda8d39a051aa7652da2ad5512fe463061e572b0297b38"

Controller Plugin

I0819 19:29:13.979361       1 controller.go:87] "CreateVolume: called" args="{\"accessibility_requirements\":{\"preferred\":[{\"segments\":{\"topology.kubernetes.io/region\":\"kie\",\"topology.kubernetes.io/zone\":\"pve-node1\"}}],\"requisite\":[{\"segments\":{\"topology.kubernetes.io/region\":\"kie\",\"topology.kubernetes.io/zone\":\"pve-node1\"}}]},\"capacity_range\":{\"required_bytes\":1073741824},\"name\":\"pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5\",\"parameters\":{\"ssd\":\"true\",\"storage\":\"ssd_thin\"},\"volume_capabilities\":[{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\",\"discard\"]}},\"access_mode\":{\"mode\":7}}]}"
I0819 19:29:14.012566       1 controller.go:166] "CreateVolume" storageConfig={"content":"rootdir,images","digest":"3b6162390fb19e91eef72f623429b52c4ed8160f","storage":"ssd_thin","thinpool":"ssd_thin","type":"lvmthin","vgname":"MX500"}
I0819 19:29:14.443997       1 controller.go:218] "CreateVolume: volume created" cluster="kie" volumeID="kie/pve-node1/ssd_thin/vm-9999-pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5" size=1
I0819 19:29:15.686011       1 controller.go:307] "ControllerPublishVolume: called" args="{\"node_id\":\"k8s-node-01\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\",\"discard\"]}},\"access_mode\":{\"mode\":7}},\"volume_context\":{\"ssd\":\"true\",\"storage\":\"ssd_thin\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1723649759130-9266-csi.proxmox.sinextra.dev\"},\"volume_id\":\"kie/pve-node1/ssd_thin/vm-9999-pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5\"}"
I0819 19:29:15.689818       1 controller.go:344] "ControllerPublishVolume: failed to get proxmox vmrID from ProviderID" cluster="kie" nodeID="k8s-node-01"
I0819 19:29:17.933460       1 controller.go:423] "ControllerPublishVolume: volume published" cluster="kie" volumeID="kie/pve-node1/ssd_thin/vm-9999-pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5" vmID=802

Node:

I0819 19:29:18.221081       1 node.go:89] "NodeStageVolume: called" args="{\"publish_context\":{\"DevicePath\":\"/dev/disk/by-id/wwn-0x5056432d49443031\",\"lun\":\"1\"},\"staging_target_path\":\"/var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\",\"discard\"]}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"ssd\":\"true\",\"storage\":\"ssd_thin\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1723649759130-9266-csi.proxmox.sinextra.dev\"},\"volume_id\":\"kie/pve-node1/ssd_thin/vm-9999-pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5\"}"
I0819 19:29:18.221271       1 node.go:124] "NodeStageVolume: mount device" device="/dev/sdb" path="/var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount"
I0819 19:29:18.221571       1 mount_linux.go:634] Attempting to determine if disk "/dev/sdb" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/sdb])
I0819 19:29:18.225989       1 mount_linux.go:637] Output: ""
I0819 19:29:18.226004       1 mount_linux.go:572] Disk "/dev/sdb" appears to be unformatted, attempting to format as type: "xfs" with options: [-f /dev/sdb]
I0819 19:29:18.417379       1 mount_linux.go:583] Disk successfully formatted (mkfs): xfs - /dev/sdb /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount
I0819 19:29:18.417394       1 mount_linux.go:601] Attempting to mount disk /dev/sdb in xfs format at /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount
I0819 19:29:18.417558       1 mount_linux.go:249] Detected OS without systemd
I0819 19:29:18.417567       1 mount_linux.go:224] Mounting cmd (mount) with arguments (-t xfs -o noatime,noatime,discard,nouuid,defaults /dev/sdb /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount)
I0819 19:29:18.434190       1 node.go:214] "NodeStageVolume: volume mounted" device="/dev/sdb"
I0819 19:29:18.435215       1 node.go:518] "NodeGetCapabilities: called"
I0819 19:29:18.437440       1 node.go:518] "NodeGetCapabilities: called"
I0819 19:29:18.438231       1 node.go:518] "NodeGetCapabilities: called"
I0819 19:29:18.438840       1 node.go:285] "NodePublishVolume: called" args="{\"publish_context\":{\"DevicePath\":\"/dev/disk/by-id/wwn-0x5056432d49443031\",\"lun\":\"1\"},\"staging_target_path\":\"/var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount\",\"target_path\":\"/var/lib/k0s/kubelet/pods/3fe14243-7190-45be-b555-8432b6c5ac17/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\",\"discard\"]}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"csi.storage.k8s.io/ephemeral\":\"false\",\"csi.storage.k8s.io/pod.name\":\"nginx-stateful-78c46d6597-ngm7d\",\"csi.storage.k8s.io/pod.namespace\":\"csi-proxmox\",\"csi.storage.k8s.io/pod.uid\":\"3fe14243-7190-45be-b555-8432b6c5ac17\",\"csi.storage.k8s.io/serviceAccount.name\":\"default\",\"ssd\":\"true\",\"storage\":\"ssd_thin\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1723649759130-9266-csi.proxmox.sinextra.dev\"},\"volume_id\":\"kie/pve-node1/ssd_thin/vm-9999-pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5\"}"
I0819 19:29:18.440798       1 mount_linux.go:224] Mounting cmd (mount) with arguments (-t xfs -o bind /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount /var/lib/k0s/kubelet/pods/3fe14243-7190-45be-b555-8432b6c5ac17/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount)
I0819 19:29:18.441785       1 mount_linux.go:224] Mounting cmd (mount) with arguments (-t xfs -o bind,remount,rw /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount /var/lib/k0s/kubelet/pods/3fe14243-7190-45be-b555-8432b6c5ac17/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount)
I0819 19:29:18.442689       1 node.go:379] "NodePublishVolume: volume published for pod" device="/dev/disk/by-id/wwn-0x5056432d49443031" pod="csi-proxmox/nginx-stateful-78c46d6597-ngm7d"
I0819 19:45:58.648028       1 node.go:285] "NodePublishVolume: called" args="{\"publish_context\":{\"DevicePath\":\"/dev/disk/by-id/wwn-0x5056432d49443031\",\"lun\":\"1\"},\"staging_target_path\":\"/var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount\",\"target_path\":\"/var/lib/k0s/kubelet/pods/383d7e80-ec16-41a7-b0e7-152b9eb30af5/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\",\"discard\"]}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"csi.storage.k8s.io/ephemeral\":\"false\",\"csi.storage.k8s.io/pod.name\":\"nginx-stateful-78c46d6597-sdhgp\",\"csi.storage.k8s.io/pod.namespace\":\"csi-proxmox\",\"csi.storage.k8s.io/pod.uid\":\"383d7e80-ec16-41a7-b0e7-152b9eb30af5\",\"csi.storage.k8s.io/serviceAccount.name\":\"default\",\"ssd\":\"true\",\"storage\":\"ssd_thin\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1723649759130-9266-csi.proxmox.sinextra.dev\"},\"volume_id\":\"kie/pve-node1/ssd_thin/vm-9999-pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5\"}"
I0819 19:45:58.650499       1 mount_linux.go:224] Mounting cmd (mount) with arguments (-t xfs -o bind /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount /var/lib/k0s/kubelet/pods/383d7e80-ec16-41a7-b0e7-152b9eb30af5/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount)
I0819 19:45:58.651779       1 mount_linux.go:224] Mounting cmd (mount) with arguments (-t xfs -o bind,remount,rw /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount /var/lib/k0s/kubelet/pods/383d7e80-ec16-41a7-b0e7-152b9eb30af5/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount)
I0819 19:45:58.653136       1 node.go:379] "NodePublishVolume: volume published for pod" device="/dev/disk/by-id/wwn-0x5056432d49443031" pod="csi-proxmox/nginx-stateful-78c46d6597-sdhgp"
I0819 19:45:58.744603       1 node.go:390] "NodeUnpublishVolume: called" args="{\"target_path\":\"/var/lib/k0s/kubelet/pods/3fe14243-7190-45be-b555-8432b6c5ac17/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount\",\"volume_id\":\"kie/pve-node1/ssd_thin/vm-9999-pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5\"}"
I0819 19:45:58.744744       1 mount_helper_common.go:93] unmounting "/var/lib/k0s/kubelet/pods/3fe14243-7190-45be-b555-8432b6c5ac17/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount" (corruptedMount: false, mounterCanSkipMountPointChecks: true)
I0819 19:45:58.744797       1 mount_linux.go:366] Unmounting /var/lib/k0s/kubelet/pods/3fe14243-7190-45be-b555-8432b6c5ac17/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount
I0819 19:45:58.746813       1 mount_helper_common.go:150] Deleting path "/var/lib/k0s/kubelet/pods/3fe14243-7190-45be-b555-8432b6c5ac17/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount"
I0819 19:45:58.747153       1 node.go:404] "NodePublishVolume: volume unpublished" path="/var/lib/k0s/kubelet/pods/3fe14243-7190-45be-b555-8432b6c5ac17/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount"

Environment

  • Plugin version: 0.6.1 -> upgraded to 0.7.0
  • Kubernetes version: v1.29.4
  • CSI resource on the node:
drivers:
- allocatable:
    count: 24
  name: csi.proxmox.sinextra.dev
  nodeID: k8s-node-01
  topologyKeys:
  - topology.kubernetes.io/region
  - topology.kubernetes.io/zone
  • OS version: AlmaLinux 9.4 (Seafoam Ocelot)

modzilla99 · Aug 19 '24 20:08

It's better to check the disk from inside the container (pod).

The controller did its job:

CreateVolume: volume created
ControllerPublishVolume: volume published

The node plugin did its job too:

Disk successfully formatted (mkfs): xfs - /dev/sdb /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount
Mounting cmd (mount) with arguments (-t xfs -o bind,remount,rw /var/lib/k0s/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/8df3edcea3bfd50f034ba05a0290386aa203bc4fcdb1325cc75005c52fb9fe2d/globalmount /var/lib/k0s/kubelet/pods/383d7e80-ec16-41a7-b0e7-152b9eb30af5/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount)

The disk should be mounted at /var/lib/k0s/kubelet/pods/383d7e80-ec16-41a7-b0e7-152b9eb30af5/volumes/kubernetes.io~csi/pvc-cf3b9e48-cb59-4212-947a-d42c5c9972e5/mount

Unfortunately, I do not have experience with k0s, so I cannot say exactly how to find the disk on the host.
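
That said, a few generic host-side checks can show whether the staging mount ever reached the host mount namespace (illustrative commands, not specific to k0s; run on the worker node itself and substitute the PVC id and WWN from the logs above):

```shell
# Search the host mount table for the staged/bind-mounted PV. If the CSI
# node plugin's mounts propagate correctly, both the globalmount and the
# per-pod bind mount should appear here.
grep pvc-cf3b9e48 /proc/mounts 2>/dev/null \
  || echo "PV not visible in host mount namespace"

# Either way, the block device itself should still be attached to the VM:
ls -l /dev/disk/by-id/ 2>/dev/null | grep wwn-0x5056432d49443031 \
  || echo "device not attached"
```

If the device is attached but nothing matches in `/proc/mounts`, the mount only happened inside the plugin container's mount namespace and never propagated to the host.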

sergelogvinov · Aug 21 '24 15:08

Thanks for the answer, but the volume is in fact not mounted. The pod starts and the container can write to that location, but the data ends up on the VM's root filesystem. I am pretty puzzled as to why the CSI driver thinks the mount succeeded, as it clearly did not. It neither mounts nor unmounts the disk at all. What it does do, though, is format it correctly.

modzilla99 · Aug 21 '24 16:08

If kubelet runs inside a container, check the hostPath volume paths.

https://github.com/sergelogvinov/proxmox-csi-plugin/blob/36fa5324074d6a695404c0c94fee65ff35c2d96e/charts/proxmox-csi-plugin/values.yaml#L169

example https://github.com/hetznercloud/csi-driver/blob/main/docs/kubernetes/README.md#alternative-kubelet-directory

kubeletDir: /var/lib/k0s/kubelet

sergelogvinov · Aug 21 '24 17:08

Thanks for your reply. I finally got around to it, and you were completely right; I had missed one key. This is my now-working kustomization:

resources:
  - https://github.com/sergelogvinov/proxmox-csi-plugin/raw/v0.8.2/docs/deploy/proxmox-csi-plugin-release.yml
  - cloud-config.yaml
  - storageclasses.yaml
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: proxmox-csi-plugin-node
        namespace: csi-proxmox
      spec:
        template:
          spec:
            containers:
            - name: csi-node-driver-registrar
              args:
                - '-v=5'
                - '--csi-address=unix:///csi/csi.sock'
                - '--kubelet-registration-path=/var/lib/k0s/kubelet/plugins/csi.proxmox.sinextra.dev/csi.sock'
            - name: proxmox-csi-plugin-node
              volumeMounts:
              - name: kubelet
                mountPath: /var/lib/k0s/kubelet
                mountPropagation: Bidirectional
            volumes:
            - name: socket
              hostPath:
                path: /var/lib/k0s/kubelet/plugins/csi.proxmox.sinextra.dev/
                type: DirectoryOrCreate
            - name: registration
              hostPath:
                path: /var/lib/k0s/kubelet/plugins_registry/
                type: Directory
            - name: kubelet
              hostPath:
                path: /var/lib/k0s/kubelet
                type: Directory

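The key difference from the first patch is the `volumeMounts` entry on the `proxmox-csi-plugin-node` container: with the k0s kubelet directory mounted at its real path and `mountPropagation: Bidirectional`, mounts made inside the plugin container propagate back to the host's `/var/lib/k0s/kubelet`, so kubelet and the pods can actually see them. A quick way to confirm the patch landed (illustrative; requires cluster access, resource names as in the patch above):

```shell
# Confirm the node plugin DaemonSet mounts the k0s kubelet dir bidirectionally.
kubectl -n csi-proxmox get ds proxmox-csi-plugin-node -o yaml 2>/dev/null \
  | grep -B2 'mountPropagation: Bidirectional' \
  || echo "kubectl not available or patch not applied"
```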
modzilla99 · Oct 20 '24 14:10