vsphere-csi-driver
Volume Expansion Procedure for migrated volumes?
Is this a BUG REPORT or FEATURE REQUEST?: Bug Report
/kind bug
What happened: We deployed vsphere-csi-controller on an existing Kubernetes cluster that was using the in-tree provider. The PVCs were migrated automatically by the controller. We are able to expand freshly created PVCs, but when we try to expand the pre-existing PVCs we get errors like the ones below.
I0119 12:20:15.475938 1 controller.go:281] Started PVC processing "default/my-pvc"
I0119 12:20:15.488116 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-pvc", UID:"b5ed8c5d-7076-408d-b658-c9a6ffcf66bd", APIVersion:"v1", ResourceVersion:"140639392", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-b5ed8c5d-7076-408d-b658-c9a6ffcf66bd
E0119 12:20:15.488813 1 controller.go:272] Error syncing PVC: resize volume "pvc-b5ed8c5d-7076-408d-b658-c9a6ffcf66bd" by resizer "csi.vsphere.vmware.com" failed: rpc error: code = Unimplemented desc = cannot expand migrated vSphere volume. :"[XXXX_vSAN-Datastore] e4c0a561-2ad5-7719-dee3-4c52624e0cd4/_0088/f6fccb203b024aea8290c9135bee9c49.vmdk"
I have seen #300, which explicitly prevents the expansion of such migrated volumes.
What you expected to happen: Migrated PVCs can also be expanded.
How to reproduce it (as minimally and precisely as possible):
- Have a Kubernetes cluster with the in-tree vSphere provider
- Create PVCs with the in-tree provider
- Migrate to the out-of-tree provider
- Try to expand a PVC (see the sketch after this list)
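For reference, a minimal sketch of the expansion attempt; the PVC name, namespace, and size are placeholders mirroring the log output above, not values taken from the cluster:

```sh
# Expand the PVC by raising its storage request (placeholder name/namespace/size).
kubectl patch pvc my-pvc -n default --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# The resize attempt and the resulting error show up as events on the PVC.
kubectl describe pvc my-pvc -n default
```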
Anything else we need to know?: My question is whether there is a simple way to enable expansion of such PVCs (trying to avoid the lengthy procedure of creating new PVCs and copying data, as it is time consuming and involves downtime). Can vsphere-csi-controller create CNS volumes and copy the data as part of the migration?
Environment:
- csi-vsphere version: v2.3.0
- vsphere-cloud-controller-manager version: v1.21.0
- Kubernetes version: 1.20.13 / 1.21.8
- vSphere version: vSphere 7 U2, ESXi 7.0.2
- OS (e.g. from /etc/os-release): Ubuntu 20.04.3 LTS
- Kernel (e.g. uname -a): Linux pvctest-7c5b7496f6-fp9rf 5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Others:
@dharapvj We prevented users from using new capabilities and features on migrated volumes. One reason is that if the user disables the migration and moves back to the in-tree volume plugin, and the in-tree plugin does not support the feature, they will have more issues.
Consider the following case:
- A user expands a volume after migration, but before they expand the file system on the volume, the admin turns off the migration feature. The user is now back on the in-tree vSphere plugin, which does not have the capability to expand the file system on the expanded volume.
To avoid such situations we have guarded users against using new capabilities on migrated volumes.
I think we should revisit this once migration is enabled by default in Kubernetes and users no longer have the ability to move back to the older plugin.
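For concreteness, a hedged sketch of what "turning off the migration" amounts to; the feature-gate names and the CSINode annotation below are assumptions about a typical 1.20/1.21-era setup, not details taken from this thread:

```sh
# Assumption: on a 1.20/1.21-era cluster, vSphere CSI migration is switched on the
# Kubernetes side via feature gates on kube-apiserver, kube-controller-manager and
# kubelet, for example:
#   --feature-gates=CSIMigration=true,CSIMigrationvSphere=true
# Setting CSIMigrationvSphere=false hands existing volumes back to the in-tree
# plugin, which is the fallback scenario described above.

# Assumption: kubelet lists migrated in-tree plugins in an annotation on the
# CSINode object, so one way to check the current state on a node is:
kubectl get csinode <node-name> -o yaml | grep -i migrated-plugins
```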
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Still a valid use case; removing stale.
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
It's still a valid use case. /remove-lifecycle stale
@divyenpatel I see the comment below from you:
the admin turned off the migration feature
On an unrelated note: we are facing issues with volume migration only on one ESXi host and not on the others. So your comment above made me wonder: where can the admin turn off the migration? Are you referring to the internal-xx ConfigMap or to some setting at the vCenter/ESXi level? Could you elaborate? Related CSI issue which we face on one ESXi host
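For concreteness, a hedged sketch of the two places such a toggle could live; the ConfigMap name, the namespace, and the feature-gate name are assumptions based on the upstream deployment manifests and Kubernetes docs, not confirmed to be what internal-xx refers to here:

```sh
# Assumption: the upstream vanilla manifests ship a driver-side feature-state
# ConfigMap; inspecting it shows whether csi-migration is enabled for the driver
# (the name and namespace below may differ per install):
kubectl -n vmware-system-csi get configmap \
  internal-feature-states.csi.vsphere.vmware.com -o yaml

# Assumption: the Kubernetes-side switch is the CSIMigrationvSphere feature gate
# on kubelet / kube-controller-manager / kube-apiserver, rather than a
# vCenter/ESXi-level setting.
```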
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten /reopen
@divyenpatel: Reopened this issue.
In response to this:
/remove-lifecycle rotten /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@chethanv28 has enabled volume expansion for migrated volumes with this PR: https://github.com/kubernetes-sigs/vsphere-csi-driver/pull/2194
The feature should be available in the next release of the vSphere CSI Driver.
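A hedged usage sketch of what the flow should look like once a release containing that PR is deployed; the namespace and Deployment name are assumptions from the upstream manifests, and the PVC name and size are placeholders:

```sh
# Assumption: the controller runs as a Deployment named vsphere-csi-controller in
# the vmware-system-csi namespace; check which driver image is deployed.
kubectl -n vmware-system-csi get deployment vsphere-csi-controller \
  -o jsonpath='{.spec.template.spec.containers[*].image}'

# With a release containing the fix, expanding a migrated PVC should go through
# the normal expansion flow (placeholder name/size).
kubectl patch pvc my-pvc -n default --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'
```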