
persistent / independent disk mode

a-dawg opened this issue 2 years ago • 7 comments

What happened: Using v2.5, it looks like disks are now attached in dependent mode. Is there any setting to control this behaviour?

What you expected to happen: the disk mounted in independent mode

Anything else we need to know?:

Environment:

  • csi-vsphere version: 2.5
  • openshift version: 4.8
  • vSphere version: 7

a-dawg avatar Apr 20 '22 07:04 a-dawg

I encountered a similar issue in a different scenario:

a. On a Kubernetes cluster without CSI support, the PV was an in-tree VMDK disk in independent disk mode.
b. After upgrading the cluster to a version with vSphere CSI v2.3.1 support, the disk mode had changed to dependent, and no disk size was visible on the vCenter dashboard.

As a result, such an in-tree VMDK can no longer be recognized by vCenter.

I am not sure what changed the disk mode, CSI or Kubernetes. How can this be avoided?

zhoudayongdennis avatar Apr 21 '22 06:04 zhoudayongdennis

@a-dawg @zhoudayongdennis

If you created and attached the disk to the node VM using the in-tree vSphere volume plugin, the disk is attached in independent mode:

https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/legacy-cloud-providers/vsphere/vclib/virtualmachine.go#L345

If you have enabled in-tree vSphere volume plugin to CSI migration, the disk is attached as a dependent disk. The CNS API that the CSI driver calls attaches the disk as dependent.

Can you confirm whether this is the case? Also, do you see any functional issue with the disk being attached as dependent rather than independent?
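To make the two behaviours concrete: they correspond to different VirtualDiskMode values on the disk backing in the vSphere API. The following is a minimal illustrative sketch in plain Python (not the driver's or govmomi's actual code; the datastore path is hypothetical):

```python
# Illustrative sketch only: models the diskMode field of a vSphere
# virtual disk backing; this is NOT the actual driver code.

# Real VirtualDiskMode values from the vSphere API:
PERSISTENT = "persistent"                          # dependent; included in VM snapshots
INDEPENDENT_PERSISTENT = "independent_persistent"  # independent; excluded from snapshots

def build_disk_backing(independent: bool) -> dict:
    """Sketch of the disk backing an attach call would carry."""
    return {
        # Hypothetical datastore path, for illustration only.
        "fileName": "[datastore1] kubevols/example.vmdk",
        "diskMode": INDEPENDENT_PERSISTENT if independent else PERSISTENT,
    }

# In-tree vSphere volume plugin behaviour (independent):
assert build_disk_backing(True)["diskMode"] == "independent_persistent"
# CNS/CSI attach behaviour described above (dependent):
assert build_disk_backing(False)["diskMode"] == "persistent"
```

The practical difference is snapshot handling: a dependent (persistent-mode) disk participates in VM snapshots, while an independent disk is excluded from them.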

divyenpatel avatar Apr 22 '22 00:04 divyenpatel

@divyenpatel,

The in-tree volume was created with independent disk mode.

During the cluster upgrade, CSI migration was not enabled, but the disk mode changed from the original independent to dependent.

The impact is that the original in-tree VMDK can NOT be recognized by vCenter, and the scale operation failed.

In this scenario, where did the disk mode value get changed, and by what component?
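One way to check which mode a given disk currently has (assuming the govc CLI is installed and pointed at your vCenter; the VM name below is hypothetical):

```shell
# Requires govc with GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD set.
# Lists virtual disks on the node VM; the JSON includes the backing's diskMode
# ("persistent" = dependent, "independent_persistent" = independent).
govc device.info -vm k8s-node-1 -json disk-* | grep -i diskmode
```

Capturing this before and after an upgrade would help narrow down which step flips the mode.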

zhoudayongdennis avatar Apr 22 '22 05:04 zhoudayongdennis

@gnufied @jsafrane Are you aware of any workflow during cluster upgrade in the OpenShift environment causing this? Are we re-creating the VM during the upgrade process and attaching these disks out of band?

divyenpatel avatar Apr 22 '22 14:04 divyenpatel

@divyenpatel,

FYI

My infrastructure was NOT on OpenShift, and it was on vCenter 6.7 or 7.0, so this issue seems to be common.

zhoudayongdennis avatar Apr 24 '22 02:04 zhoudayongdennis

Are you aware of any workflow during cluster upgrade in the OpenShift environment causing this?

No

Are we re-creating VM during the upgrade process and attaching these disks out of the band?

No, we just update packages on the node; we keep the VMs alive and I don't think we change their config in vSphere. The node is rebooted, possibly several times, though.

In addition:

csi-vsphere version: 2.5

We don't ship this CSI driver in OpenShift 4.8-4.9. @a-dawg must have installed the community version (which is OK, just not our fault ;-) )

jsafrane avatar May 03 '22 15:05 jsafrane

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 01 '22 15:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 31 '22 15:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Sep 30 '22 16:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 30 '22 16:09 k8s-ci-robot