
Error 'Volume xxx Busy, status: UnmountUnderProgress' in cstor-csi-node pod

Open javiku opened this issue 3 years ago • 6 comments

Kubernetes version: 1.21.6
OpenEBS version: 3.0.0 (also seen in 2.11.x)

We see this problem from time to time when reusing previously existing PVs.

The Pod that tries to reuse the PV gets stuck in ContainerCreating status, and the kubelet logs show an error when it asks the CSI driver to mount the volume:

Nov 24 09:20:47 installer-worker-2 kubelet[15141]: E1124 09:20:47.015940   15141 csi_attacher.go:338] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = Volume pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f Busy, status: UnmountUnderProgress
Nov 24 09:20:47 installer-worker-2 kubelet[15141]: E1124 09:20:47.016267   15141 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/cstor.csi.openebs.io^pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f podName: nodeName:}" failed. No retries permitted until 2021-11-24 09:22:49.016214595 +0100 CET m=+69209.124781304 (durationBeforeRetry 2m2s). Error: "MountVolume.MountDevice failed for volume \"pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f\" (UniqueName: \"kubernetes.io/csi/cstor.csi.openebs.io^pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f\") pod \"kd-080610-5014b0

The openebs-cstor-csi-node-xxxx Pod on that node logs this error:

time="2021-11-24T08:20:47Z" level=info msg="Req 87558744-da67-442f-8efd-1ba4a9daf097: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1637673057924-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f\"}"
time="2021-11-24T08:20:47Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f Busy, status: UnmountUnderProgress"

The driver seems unable to mount the volume and reports the same error over and over on every kubelet request.

We have seen that if we delete the openebs-cstor-csi-node-xxxx Pod, the replacement openebs-cstor-csi-node-yyyy Pod mounts the volume on the first kubelet request. It looks like the old Pod was stuck for some reason, while the new one carries out the mount without any problems.
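For reference, the workaround amounts to deleting the CSI node pod on the affected node and letting the DaemonSet recreate it (namespace and pod names below assume a default OpenEBS install):

```sh
# find the openebs-cstor-csi-node pod running on the affected node
kubectl -n openebs get pods -o wide | grep openebs-cstor-csi-node
# delete it; the DaemonSet schedules a replacement, which stages the volume cleanly
kubectl -n openebs delete pod openebs-cstor-csi-node-xxxx
```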

These are the logs of the new openebs-cstor-csi-node-yyyy:

time="2021-11-24T08:27:16Z" level=info msg="3.0.0 - 77ffa28cdf5c0d4c2f95f114a22fb0f407b4da81"
time="2021-11-24T08:27:16Z" level=info msg="DriverName: cstor.csi.openebs.io Plugin: node EndPoint: unix:///plugin/csi.sock URL: https://openebs.io NodeID: installer-worker-2"
time="2021-11-24T08:27:16Z" level=info msg="enabling volume access mode: SINGLE_NODE_WRITER"
time="2021-11-24T08:27:16Z" level=info msg="Monitoring mounts"
time="2021-11-24T08:27:16Z" level=info msg="Listening for connections on address: &net.UnixAddr{Name:\"//plugin/csi.sock\", Net:\"unix\"}"
time="2021-11-24T08:27:17Z" level=info msg="Req f2a23828-feba-429e-84a9-e96c0940d5db: /csi.v1.Node/NodeGetInfo {}"
time="2021-11-24T08:27:17Z" level=info msg="Resp f2a23828-feba-429e-84a9-e96c0940d5db: {\"accessible_topology\":{\"segments\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/os\":\"linux\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"installer-worker-2\",\"kubernetes.io/os\":\"linux\",\"kumori/ingress\":\"true\",\"kumori/public-ip\":\"10.0.18.58\",\"kumori/role\":\"worker\",\"kumori/storage\":\"true\",\"openebs.io/nodename\":\"installer-worker-2\",\"topology.cstor.openebs.io/nodeName\":\"installer-worker-2\"}},\"node_id\":\"installer-worker-2\"}"

time="2021-11-24T08:28:55Z" level=info msg="Req e574ea75-36ed-4d4a-b739-f5d9fe5a282a: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1637673057924-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f\"}"
time="2021-11-24T08:28:55Z" level=info msg="Volume pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f is ready to accept IOs"
time="2021-11-24T08:28:55Z" level=info msg="Volume pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f is reachable to create connections"
time="2021-11-24T08:28:56Z" level=info msg="NodeStageVolume pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f: start format and mount operation"
time="2021-11-24T08:28:56Z" level=info msg="Resp e574ea75-36ed-4d4a-b739-f5d9fe5a282a: {}"
time="2021-11-24T08:28:56Z" level=info msg="Req 97ec5a8b-f6dc-4bbd-a438-2bd5907d2c92: /csi.v1.Node/NodePublishVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/globalmount\",\"target_path\":\"/var/lib/kubelet/pods/4f63d844-fc12-4ec0-9e5a-70101f9b043f/volumes/kubernetes.io~csi/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/mount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"csi.storage.k8s.io/ephemeral\":\"false\",\"csi.storage.k8s.io/pod.name\":\"kd-080610-5014b033-worker000-deployment-2\",\"csi.storage.k8s.io/pod.namespace\":\"kumori\",\"csi.storage.k8s.io/pod.uid\":\"4f63d844-fc12-4ec0-9e5a-70101f9b043f\",\"csi.storage.k8s.io/serviceAccount.name\":\"default\",\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1637673057924-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f\"}"
time="2021-11-24T08:28:56Z" level=info msg="NodePublishVolume: creating dir /var/lib/kubelet/pods/4f63d844-fc12-4ec0-9e5a-70101f9b043f/volumes/kubernetes.io~csi/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/mount"
time="2021-11-24T08:28:56Z" level=info msg="NodePublishVolume: mounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/globalmount at /var/lib/kubelet/pods/4f63d844-fc12-4ec0-9e5a-70101f9b043f/volumes/kubernetes.io~csi/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/mount with option [bind] as fstype ext4"
time="2021-11-24T08:28:56Z" level=info msg="Resp 97ec5a8b-f6dc-4bbd-a438-2bd5907d2c92: {}"

Looking again at the logs of the openebs-cstor-csi-node-xxxx Pod (the one that got stuck), we can see that an error was logged the last time it released that volume:

Operation cannot be fulfilled on cstorvolumeattachments.cstor.openebs.io \"pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f-installer-worker-2\": StorageError: invalid object, Code: 4, Key: /registry/cstor.openebs.io/cstorvolumeattachments/openebs/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f-installer-worker-2, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 30cedcf8-db99-4d0b-ae76-56889b67a2b2, UID in object meta: 

A wider log excerpt around that error:

time="2021-11-24T08:05:55Z" level=info msg="Req 3eea4e31-689f-45da-8868-5b4d730c38de: /csi.v1.Node/NodeUnstageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/globalmount\",\"volume_id\":\"pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f\"}"
time="2021-11-24T08:05:55Z" level=info msg="iscsi: log out target 172.25.126.201:3260 iqn iqn.2016-09.com.openebs.cstor:pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f iface default"
time="2021-11-24T08:05:55Z" level=info msg="iscsi: delete node record target 172.25.126.201:3260 iqn iqn.2016-09.com.openebs.cstor:pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f"
time="2021-11-24T08:05:55Z" level=info msg="cstor-csi: volume pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f path: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f/globalmount has been unmounted."
time="2021-11-24T08:05:55Z" level=info msg="Deleting cva pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f-installer-worker-2"
time="2021-11-24T08:05:55Z" level=info msg="Resp 3eea4e31-689f-45da-8868-5b4d730c38de: {}"
time="2021-11-24T08:05:55Z" level=info msg="iscsi: delete node record target 172.25.162.9:3260 iqn iqn.2016-09.com.openebs.cstor:pvc-cbc07d87-ab4a-4eab-94a2-0c2966da3818"
time="2021-11-24T08:05:55Z" level=info msg="cstor-csi: volume pvc-cbc07d87-ab4a-4eab-94a2-0c2966da3818 path: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cbc07d87-ab4a-4eab-94a2-0c2966da3818/globalmount has been unmounted."
time="2021-11-24T08:05:55Z" level=info msg="Deleting cva pvc-cbc07d87-ab4a-4eab-94a2-0c2966da3818-installer-worker-2"
time="2021-11-24T08:05:55Z" level=info msg="Cleaning up pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f from node"
time="2021-11-24T08:05:55Z" level=warning msg="Warning: Unmount skipped because path does not exist: "
time="2021-11-24T08:05:55Z" level=info msg="Cleaning up cva pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f-installer-worker-2"
time="2021-11-24T08:05:55Z" level=info msg="Resp 4bb25fdb-8242-4032-bcdd-a3836e81c77e: {}"
time="2021-11-24T08:05:55Z" level=error msg="Operation cannot be fulfilled on cstorvolumeattachments.cstor.openebs.io \"pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f-installer-worker-2\": StorageError: invalid object, Code: 4, Key: /registry/cstor.openebs.io/cstorvolumeattachments/openebs/pvc-6a006be1-87a3-4059-b3b4-2656a6dee88f-installer-worker-2, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 30cedcf8-db99-4d0b-ae76-56889b67a2b2, UID in object meta: "

javiku avatar Nov 24 '21 08:11 javiku

Can you try with the latest release, OpenEBS 3.1? We suspect it might have been fixed via cstor-csi#169.

mittachaitu avatar Jan 18 '22 14:01 mittachaitu

Hello @mittachaitu, sadly this is still happening with OpenEBS v3.1.0 (and Kubernetes v1.21.9).

eslap avatar Jan 26 '22 16:01 eslap

Hi @eslap @javiku. Could you provide the exact steps you performed to get the above error? Also, are you able to reproduce this error with the same volume or with a new one?

Ab-hishek avatar May 24 '22 04:05 Ab-hishek

Hi @Ab-hishek, as I stated in the issue description, we have only seen this problem when reusing existing PVs.

If I remember correctly (it's been some time), the error appeared during long-running tests in which StatefulSets using cStor volumes were repeatedly created and destroyed, roughly as sketched below.
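A hypothetical repro loop (the manifest and object names are placeholders, not our actual test code). The PVCs are left in place between iterations, so each iteration re-stages the same PV:

```sh
for i in $(seq 1 50); do
  kubectl apply -f statefulset-with-cstor-pvc.yaml            # StatefulSet with a volumeClaimTemplate on a cStor StorageClass
  kubectl rollout status statefulset/cstor-test --timeout=5m
  kubectl delete statefulset cstor-test --wait                # PVC/PV survive the delete and are reused next time
done
```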

javiku avatar May 24 '22 08:05 javiku

I'm seeing this with cstor-operator 3.3.0, also with StatefulSets rescheduling pods.

I can work around it by killing the CSI node pod, but it looks like a genuine race condition around TransitionVolList. Relevant logs, with two PVCs experiencing the issue:

cstor-csi-plugin time="2022-08-06T06:42:24Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:42:24Z" level=info msg="Req 91781e09-b6d3-4cc0-9253-414ee0eff90d: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:42:24Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:42:25Z" level=info msg="Req e55e7c7b-2cb9-48e8-bb83-f041b87a55b0: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:42:25Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:42:27Z" level=info msg="Req 947b37f9-d690-4ab3-bcb8-e89cc018355f: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:42:27Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:42:31Z" level=info msg="Req d93a32f8-c59e-4a10-9514-1d8bc24921a1: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:42:31Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:42:39Z" level=info msg="Req 2fdc0dbf-cabf-4aaf-843d-4e8f2f227666: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:42:39Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:42:55Z" level=info msg="Req 6a8b9cc5-5b48-4b51-a1d8-500b0359b837: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:42:55Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:43:27Z" level=info msg="Req 932181b6-66d9-48e3-aab1-f2db2ecf0a18: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:43:27Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:44:13Z" level=info msg="Req 73afe3bc-1554-4cfb-9439-aa8d463a976d: /csi.v1.Node/NodeUnpublishVolume {\"target_path\":\"/var/lib/kubelet/pods/61269d1c-ef57-4c8d-9be7-e08df1d84949/volumes/kubernetes.io~csi/pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1/mount\",\"volume_id\":\"pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1\"}"
cstor-csi-plugin time="2022-08-06T06:44:13Z" level=warning msg="NodeUnpublishVolume: /var/lib/kubelet/pods/61269d1c-ef57-4c8d-9be7-e08df1d84949/volumes/kubernetes.io~csi/pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1/mount is not mounted, err: <nil>"
cstor-csi-plugin time="2022-08-06T06:44:13Z" level=info msg="Volume with ID: pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1 has been successfully removed from the transition volume list: map[pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f:UnmountUnderProgress]"
cstor-csi-plugin time="2022-08-06T06:44:13Z" level=info msg="Resp 73afe3bc-1554-4cfb-9439-aa8d463a976d: {}"
cstor-csi-plugin time="2022-08-06T06:44:31Z" level=info msg="Req 063abc3a-6e7a-4861-9a5e-0f2c6b16ed8d: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:44:31Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:46:15Z" level=info msg="Req 9124e389-3b9a-4c79-90eb-27ec465f5a4e: /csi.v1.Node/NodeUnpublishVolume {\"target_path\":\"/var/lib/kubelet/pods/61269d1c-ef57-4c8d-9be7-e08df1d84949/volumes/kubernetes.io~csi/pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1/mount\",\"volume_id\":\"pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1\"}"
cstor-csi-plugin time="2022-08-06T06:46:15Z" level=warning msg="NodeUnpublishVolume: /var/lib/kubelet/pods/61269d1c-ef57-4c8d-9be7-e08df1d84949/volumes/kubernetes.io~csi/pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1/mount is not mounted, err: <nil>"
cstor-csi-plugin time="2022-08-06T06:46:15Z" level=info msg="Volume with ID: pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1 has been successfully removed from the transition volume list: map[pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f:UnmountUnderProgress]"
cstor-csi-plugin time="2022-08-06T06:46:15Z" level=info msg="Resp 9124e389-3b9a-4c79-90eb-27ec465f5a4e: {}"
cstor-csi-plugin time="2022-08-06T06:46:33Z" level=info msg="Req 04e4794a-2a69-4693-853d-5f4eb1882fab: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:46:33Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
cstor-csi-plugin time="2022-08-06T06:48:17Z" level=info msg="Req 102d8b22-306d-4ceb-b10a-f07930ddfbef: /csi.v1.Node/NodeUnpublishVolume {\"target_path\":\"/var/lib/kubelet/pods/61269d1c-ef57-4c8d-9be7-e08df1d84949/volumes/kubernetes.io~csi/pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1/mount\",\"volume_id\":\"pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1\"}"
cstor-csi-plugin time="2022-08-06T06:48:17Z" level=warning msg="NodeUnpublishVolume: /var/lib/kubelet/pods/61269d1c-ef57-4c8d-9be7-e08df1d84949/volumes/kubernetes.io~csi/pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1/mount is not mounted, err: <nil>"
cstor-csi-plugin time="2022-08-06T06:48:17Z" level=info msg="Volume with ID: pvc-e17ea6f2-9cfc-4816-aa08-3c7a980887b1 has been successfully removed from the transition volume list: map[pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f:UnmountUnderProgress]"
cstor-csi-plugin time="2022-08-06T06:48:17Z" level=info msg="Resp 102d8b22-306d-4ceb-b10a-f07930ddfbef: {}"
cstor-csi-plugin time="2022-08-06T06:48:35Z" level=info msg="Req 5d92aa37-197f-4f6e-8aa3-c152b8a200bf: /csi.v1.Node/NodeStageVolume {\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"openebs.io/cas-type\":\"cstor\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1659633273763-8081-cstor.csi.openebs.io\"},\"volume_id\":\"pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f\"}"
cstor-csi-plugin time="2022-08-06T06:48:35Z" level=error msg="GRPC error: rpc error: code = Internal desc = Volume pvc-a2f057fb-353a-4bce-9c7b-b955d5a3c79f Busy, status: UnmountUnderProgress"
...
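
The logs suggest the volume stays flagged as UnmountUnderProgress in the plugin's in-memory transition list even after the unpublish work completes, so every later NodeStageVolume is rejected. A minimal hypothetical sketch of that failure mode (illustrative only, not the actual cstor-csi code):

```go
// Hypothetical sketch of the suspected failure mode; not the actual cstor-csi code.
// A shared "transition" map records in-flight operations per volume. If an unstage
// marks a volume as UnmountUnderProgress and then returns on an error path without
// clearing the entry, every subsequent NodeStageVolume is rejected as Busy until
// the plugin pod is restarted.
package main

import (
	"errors"
	"fmt"
	"sync"
)

type transitionList struct {
	mu   sync.Mutex
	vols map[string]string // volumeID -> status, e.g. "UnmountUnderProgress"
}

func (t *transitionList) add(vol, status string) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if s, ok := t.vols[vol]; ok {
		return fmt.Errorf("Volume %s Busy, status: %s", vol, s)
	}
	t.vols[vol] = status
	return nil
}

func (t *transitionList) remove(vol string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.vols, vol)
}

// nodeUnstage mimics an unstage whose cleanup step fails (e.g. the CVA delete above).
// The buggy variant returns without removing the transition entry.
func nodeUnstage(t *transitionList, vol string, removeOnError bool) error {
	if err := t.add(vol, "UnmountUnderProgress"); err != nil {
		return err
	}
	cleanupErr := errors.New("Operation cannot be fulfilled on cstorvolumeattachments")
	if cleanupErr != nil && !removeOnError {
		return cleanupErr // entry leaks: the volume stays "UnmountUnderProgress" forever
	}
	t.remove(vol) // correct behaviour: always clear the entry, success or failure
	return cleanupErr
}

func main() {
	t := &transitionList{vols: map[string]string{}}
	_ = nodeUnstage(t, "pvc-a2f057fb", false) // unstage fails, entry is never removed
	// every later stage attempt now fails with "Busy":
	fmt.Println(t.add("pvc-a2f057fb", "MountUnderProgress"))
}
```

Restarting the openebs-cstor-csi-node pod clears this in-memory state, which matches the observation that deleting the pod unsticks the volume.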

blampe avatar Aug 06 '22 07:08 blampe

Deleting the openebs-cstor-csi-node pod on the node hosting the affected PVC's pod works for me as well.

jnt2007 avatar Aug 30 '22 16:08 jnt2007