Creating a static PV with a PVC leads to "AttachVolume.Attach failed" for the volume
Describe the bug
I have installed Rook on my k3s cluster, and it works fine. I created a StorageClass for my CephFS pool, and I can dynamically provision PVCs normally.
The thing is, I would really like to use a (sub)volume that I already created. I followed the instructions here, but when the test container spins up, I get:
```
Warning  FailedAttachVolume  43s  attachdetach-controller  AttachVolume.Attach failed for volume "test-static-pv" : timed out waiting for external-attacher of cephfs.csi.ceph.com CSI driver to attach volume test-static-pv
```
This is my PV file:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-static-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      # node stage secret name
      name: rook-csi-cephfs-node
      # node stage secret namespace where the above secret is created
      namespace: rook-ceph
    volumeAttributes:
      # optional file system to be mounted
      "fsName": "mail"
      # Required options from the StorageClass parameters need to be added in volumeAttributes
      "clusterID": "mycluster"
      "staticVolume": "true"
      "rootPath": "/volumes/mail-storage/mail-test/8886a1db-6536-4e5a-8ef1-73b421a96d24"
    # volumeHandle can be anything; it need not be the same
    # as the PV name or volume name. Keeping it the same for brevity.
    volumeHandle: test-static-pv
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
```
Environment details
- Image/version of Ceph CSI driver: quay.io/cephcsi/cephcsi:v3.14.1 (Rook default)
- Kernel version: 6.14.0-1005
- Mounter used for mounting PVC: kernel (though I'm not sure)
- Kubernetes cluster version: v1.32.5+k3s1
Steps to reproduce
- Create a PV, then a PVC
- Create a pod that uses the PVC
- See the error:
```
Warning  FailedAttachVolume  57s  attachdetach-controller  AttachVolume.Attach failed for volume "mail-static-pv" : timed out waiting for external-attacher of cephfs.csi.ceph.com CSI driver to attach volume mail-static-pv
```
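The steps above can be sketched with kubectl; the manifest file names below are hypothetical, assuming the PV manifest shown earlier plus a matching PVC and pod:

```shell
# Hypothetical file names; the PV is the manifest shown above
kubectl apply -f test-static-pv.yaml   # the PersistentVolume
kubectl apply -f test-static-pvc.yaml  # a PVC bound to it via spec.volumeName
kubectl apply -f test-pod.yaml         # a pod mounting the PVC
kubectl describe pod test-pod          # events show the FailedAttachVolume warning
```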
plugin:
```
Defaulted container "driver-registrar" out of: driver-registrar, csi-cephfsplugin, log-collector
I0628 15:38:19.389905 1 main.go:150] "Version" version="v2.13.0"
I0628 15:38:19.390077 1 main.go:151] "Running node-driver-registrar" mode=""
I0628 15:38:20.396679 1 node_register.go:56] "Starting Registration Server" socketPath="/registration/rook-ceph.cephfs.csi.ceph.com-reg.sock"
I0628 15:38:20.397109 1 node_register.go:66] "Registration Server started" socketPath="/registration/rook-ceph.cephfs.csi.ceph.com-reg.sock"
I0628 15:38:20.397363 1 node_register.go:96] "Skipping HTTP server"
I0628 15:38:21.301572 1 main.go:96] "Received GetInfo call" request="&InfoRequest{}"
I0628 15:38:21.340984 1 main.go:108] "Received NotifyRegistrationStatus call" status="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0628 15:37:46.970239 1 main.go:150] "Version" version="v2.13.0"
I0628 15:37:46.970400 1 main.go:151] "Running node-driver-registrar" mode=""
I0628 15:37:47.976430 1 node_register.go:56] "Starting Registration Server" socketPath="/registration/rook-ceph.cephfs.csi.ceph.com-reg.sock"
I0628 15:37:47.976783 1 node_register.go:66] "Registration Server started" socketPath="/registration/rook-ceph.cephfs.csi.ceph.com-reg.sock"
I0628 15:37:47.976921 1 node_register.go:96] "Skipping HTTP server"
I0628 15:37:48.204986 1 main.go:96] "Received GetInfo call" request="&InfoRequest{}"
I0628 15:37:48.241734 1 main.go:108] "Received NotifyRegistrationStatus call" status="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0628 15:37:15.007196 1 main.go:150] "Version" version="v2.13.0"
I0628 15:37:15.007319 1 main.go:151] "Running node-driver-registrar" mode=""
I0628 15:37:16.012583 1 node_register.go:56] "Starting Registration Server" socketPath="/registration/rook-ceph.cephfs.csi.ceph.com-reg.sock"
I0628 15:37:16.012981 1 node_register.go:66] "Registration Server started" socketPath="/registration/rook-ceph.cephfs.csi.ceph.com-reg.sock"
I0628 15:37:16.013127 1 node_register.go:96] "Skipping HTTP server"
I0628 15:37:16.762237 1 main.go:96] "Received GetInfo call" request="&InfoRequest{}"
I0628 15:37:16.796986 1 main.go:108] "Received NotifyRegistrationStatus call" status="&RegistrationStatus{PluginRegistered:true,Error:,}"
```
provisioner:
```
Defaulted container "csi-attacher" out of: csi-attacher, csi-snapshotter, csi-resizer, csi-provisioner, csi-cephfsplugin, log-collector
I0628 15:45:01.480178 1 main.go:113] "Version" version="v4.8.1"
I0628 15:45:02.484254 1 common.go:143] "Probing CSI driver for readiness"
I0628 15:45:02.490378 1 leaderelection.go:257] attempting to acquire leader lease rook-ceph/external-attacher-leader-rook-ceph-cephfs-csi-ceph-com...
I0628 15:45:01.486747 1 main.go:113] "Version" version="v4.8.1"
I0628 15:45:02.491301 1 common.go:143] "Probing CSI driver for readiness"
I0628 15:45:02.496905 1 leaderelection.go:257] attempting to acquire leader lease rook-ceph/external-attacher-leader-rook-ceph-cephfs-csi-ceph-com...
I0628 15:47:20.900908 1 leaderelection.go:271] successfully acquired lease rook-ceph/external-attacher-leader-rook-ceph-cephfs-csi-ceph-com
I0628 15:47:20.901123 1 controller.go:129] "Starting CSI attacher"
```
This issue seems similar to https://github.com/ceph/ceph-csi/issues/3476
You need to adjust the driver name in the PV to match the CSI driver deployed by Rook. The CSI driver name is different when deployed using the YAMLs from this repo versus when it is managed by Rook.
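For reference, the registration socket in the plugin logs above (`rook-ceph.cephfs.csi.ceph.com-reg.sock`) suggests the Rook-managed driver is registered as `rook-ceph.cephfs.csi.ceph.com` rather than `cephfs.csi.ceph.com`. Assuming that, the fix would likely be a one-line change in the PV's `csi` section:

```yaml
spec:
  csi:
    # Rook prefixes the driver name with its operator namespace
    # (inferred from the registration socket name in the logs above)
    driver: rook-ceph.cephfs.csi.ceph.com
```

You can confirm the exact name installed in your cluster with `kubectl get csidrivers`.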
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.