
failed to get VolumeID from volumeMigrationService for volumePath

Open jrife opened this issue 1 year ago • 11 comments

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: This happened in a CI environment during a cluster upgrade test in which the cluster in question is upgraded to Kubernetes version v1.26, with pods using volumes originally provisioned by the in-tree driver. There seems to be a bug in the CSI migration logic somewhere. Some volumes were attached and mounted fine, but for others the following was observed.

  1. The vSphere CSI controller showed a lot of errors like Error processing "csi-490480314089b18778270925a3036bc93facbdfcd44fbf23071b6a318507bb27": failed to attach: rpc error: code = Internal desc = failed to get VolumeID from volumeMigrationService for volumePath: "[uphk-xio-vc02-ds06] kubevols/kubernetes-dynamic-pvc-374f5b0c-8865-4657-ab80-1a00f1b44afb.vmdk".
  2. Affected pods show events like Warning FailedAttachVolume 5m22s (x57 over 105m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-08f653b0-2aaf-4445-8490-4918b54570c0" : rpc error: code = Internal desc = failed to get VolumeID from volumeMigrationService for volumePath: "[uphk-xio-vc02-ds06] kubevols/kubernetes-dynamic-pvc-08f653b0-2aaf-4445-8490-4918b54570c0.vmdk".
  3. Other Pods in the same Deployments/StatefulSets are fine.
  4. The error comes from this line in the vSphere CSI driver in the handler for the ControllerPublishVolume RPC (called during volume attach).
  5. GetVolumeID is a member of the VolumeMigrationService, which appears to perform the mapping between old (in-tree) and new (CNS) volumes to make migration work. Why does this only work for some of the volumes and not others? (A rough model of this lookup is sketched after this list.)
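
For reference, here is a minimal standalone model (not actual driver code) of the volumePath -> VolumeID lookup that the attach path depends on during CSI migration. In the real driver the volumeMigrationService persists this mapping in CnsVSphereVolumeMigration CRs and registers unknown volumes with CNS; the in-memory map below only illustrates why a missing mapping surfaces as the attach error above. Everything besides the names GetVolumeID and volumeMigrationService is made up for the sketch.

```go
package main

import (
	"errors"
	"fmt"
)

// volumeMigrationService here is a stand-in for the driver's service of the
// same name; in the real driver the mapping is backed by
// CnsVSphereVolumeMigration CRs and by registering unknown volumes with CNS.
type volumeMigrationService struct {
	// in-tree volumePath ("[datastore] kubevols/xxx.vmdk") -> CNS VolumeID.
	pathToID map[string]string
}

var errNotRegistered = errors.New("volumePath not registered")

// GetVolumeID mirrors the lookup done from the ControllerPublishVolume handler:
// if the in-tree volume has no mapping (and registration fails), the attach
// fails with "failed to get VolumeID from volumeMigrationService".
func (s *volumeMigrationService) GetVolumeID(volumePath string) (string, error) {
	id, ok := s.pathToID[volumePath]
	if !ok {
		return "", fmt.Errorf("failed to get VolumeID from volumeMigrationService for volumePath: %q: %w",
			volumePath, errNotRegistered)
	}
	return id, nil
}

func main() {
	svc := &volumeMigrationService{pathToID: map[string]string{
		"[ds06] kubevols/kubernetes-dynamic-pvc-aaaa.vmdk": "11111111-2222-3333-4444-555555555555",
	}}

	// Registered volume: attach can proceed using the CNS VolumeID.
	fmt.Println(svc.GetVolumeID("[ds06] kubevols/kubernetes-dynamic-pvc-aaaa.vmdk"))

	// Unregistered volume: this is the failure mode seen on the affected Pods.
	fmt.Println(svc.GetVolumeID("[ds06] kubevols/kubernetes-dynamic-pvc-bbbb.vmdk"))
}
```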

It's a mystery why only some volumes would be registered with the VolumeMigrationService while others are not. Possibly related?
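
One way to check which in-tree volumes actually have a mapping is to list the CnsVSphereVolumeMigration CRs directly. The sketch below uses client-go's dynamic client; the group/version (cns.vmware.com/v1alpha1) and the spec field names (volumePath, volumeID) are assumptions based on the driver's CRD naming, so verify them against your cluster (e.g. with kubectl get crds) before relying on this.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed GVR for the migration mapping CRD; confirm with `kubectl get crds`.
	gvr := schema.GroupVersionResource{
		Group:    "cns.vmware.com",
		Version:  "v1alpha1",
		Resource: "cnsvspherevolumemigrations",
	}

	list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Print each registered volumePath -> volumeID pair; an affected PV whose
	// .vmdk path is missing here would explain the attach error for that volume.
	for _, item := range list.Items {
		spec, _, _ := unstructured.NestedMap(item.Object, "spec")
		fmt.Printf("%v -> %v\n", spec["volumePath"], spec["volumeID"])
	}
}
```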

What you expected to happen: For all in-tree volumes to be migrated and used without any problems.

How to reproduce it (as minimally and precisely as possible): N/A

Anything else we need to know?: N/A

Environment:

  • csi-vsphere version: v3.0.2
  • vsphere-cloud-controller-manager version: N/A
  • Kubernetes version: v1.26.2-gke.1001
  • vSphere version: v7.0
  • OS (e.g. from /etc/os-release): linux
  • Kernel (e.g. uname -a): 5.15.0-1024-gkeop
  • Install tools:
  • Others:

jrife avatar Aug 02 '23 20:08 jrife

/cc

mauriciopoppe avatar Aug 16 '23 18:08 mauriciopoppe

We have also seen this happen transiently during cluster upgrades, but it usually goes away once the cluster settles down. If these volumes are permanently stuck, then we might indeed have a problem.

gnufied avatar Aug 22 '23 12:08 gnufied

@gnufied how long does it typically take to settle down? We noticed this during our CI tests, so could the test be timing out before it settles down?

jingxu97 avatar Aug 24 '23 06:08 jingxu97

@divyenpatel is this issue resolved by https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/2454?

jingxu97 avatar Aug 26 '23 23:08 jingxu97

@jingxu97 https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/2470 protects against accidental deletion of CnsVSphereVolumeMigration CRs when volumes are not present in the vCenter CNS cache. This fix will be in the v3.1.0 release.
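
For anyone trying to follow the failure mode: the idea behind that kind of protection, as I read it, is that a transient CNS cache miss should not be enough to delete the volumePath -> VolumeID mapping CR. A purely illustrative sketch of such a guard follows; it is not the actual code from PR #2470, and all names are invented.

```go
package main

import "fmt"

// volumeState is an invented enum just for this sketch.
type volumeState int

const (
	volumeInCNS            volumeState = iota // CNS knows the volume
	volumeMissingFromCache                    // CNS cache miss; volume may still exist
	volumeConfirmedDeleted                    // volume verified gone from vCenter
)

// shouldDeleteMigrationCR keeps the CnsVSphereVolumeMigration CR unless the
// volume is confirmed gone, so a transient cache miss no longer wipes the
// volumePath -> VolumeID mapping needed for attach.
func shouldDeleteMigrationCR(state volumeState) bool {
	return state == volumeConfirmedDeleted
}

func main() {
	fmt.Println(shouldDeleteMigrationCR(volumeMissingFromCache)) // false: keep the CR
	fmt.Println(shouldDeleteMigrationCR(volumeConfirmedDeleted)) // true: safe to delete
}
```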

divyenpatel avatar Aug 28 '23 16:08 divyenpatel

v3.1.0 has the fix. Closing this Issue.

divyenpatel avatar Oct 02 '23 20:10 divyenpatel

I hit this issue with v3.1.1 vsphere-csi-driver on v1.28 k8s version.

/reopen

AnishShah avatar Nov 22 '23 20:11 AnishShah

@AnishShah: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

I hit this issue with v3.1.1 vsphere-csi-driver on v1.28 k8s version.

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 22 '23 20:11 k8s-ci-robot

/reopen

mauriciopoppe avatar Nov 22 '23 20:11 mauriciopoppe

@mauriciopoppe: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 22 '23 20:11 k8s-ci-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 20 '24 21:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 21 '24 22:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Apr 20 '24 23:04 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 20 '24 23:04 k8s-ci-robot