vsphere-csi-driver
question -- can in-tree volumes and out-of-tree volumes co-exist in one Kubernetes cluster?
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
/kind feature
What happened: when we performed the cluster upgrade from Kubernetes 1.20 to 1.21 on vCenter infrastructure, we had the following question/concern.
a. Before the upgrade, there were in-tree volumes created by applications with an in-tree StorageClass.
b. After the upgrade, CPI & CSI were introduced with one extra CSI StorageClass defined, while the in-tree StorageClass was kept with the same name (illustrated in the sketch below).
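As an illustration only (the class names and parameters are hypothetical, not taken from this cluster), the two StorageClasses in (a) and (b) might look roughly like this:

```yaml
# Pre-existing in-tree StorageClass (uses the in-tree vSphere provisioner)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-standard            # hypothetical name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
---
# New StorageClass added after the upgrade (uses the vSphere CSI driver)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi                 # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"   # assumed policy name
```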
The question here is: can the application use either the old in-tree StorageClass or the new out-of-tree StorageClass?
Based on my understanding, the value of cloud-provider is "vsphere" for the in-tree mechanism, but "external" for CPI, right?
So it seems only the original in-tree StorageClass could be supported, right?
If we want to use the new out-of-tree StorageClass, in-tree volume migration is required first; then, after the migration is completed and the value of cloud-provider is changed from "vsphere" to "external", the new out-of-tree StorageClass can be used by the application.
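As a rough sketch of what that flag change typically looks like (assuming the nodes are bootstrapped with kubeadm; the exact mechanism depends on how the cluster was installed):

```yaml
# Illustrative kubeadm configuration fragment, not taken from this cluster.
# With the in-tree provider, kubelet/apiserver/controller-manager run with
# --cloud-provider=vsphere and a vsphere.conf cloud-config; with the external
# CPI the flag becomes "external".
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external   # was "vsphere" with the in-tree provider
```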
Please help clarify my understanding and correct anything that is wrong.
Thanks in advance!
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- csi-vsphere version:
- vsphere-cloud-controller-manager version:
- Kubernetes version:
- vSphere version:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others:
The in-tree vSphere volume plugin and the vSphere CSI driver can co-exist on the k8s cluster.
You can also enable in-tree vSphere to CSI migration and hand over operations for in-tree vSphere volumes to the vSphere CSI driver.
After the migration is enabled, if you create a new PVC using the in-tree storage class, the volume creation request will be delivered to CSI.
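As a sketch, enabling the migration generally means turning on the CSIMigration and CSIMigrationvSphere feature gates on kube-controller-manager and the kubelets; the placement below (a kubeadm-managed control plane) is illustrative and depends on how the cluster is operated:

```yaml
# Illustrative kubeadm ClusterConfiguration fragment; kubelets need the same
# feature gates enabled as well (e.g. via KubeletConfiguration featureGates).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    feature-gates: "CSIMigration=true,CSIMigrationvSphere=true"
```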
You can also create a new StorageClass with the vSphere CSI provisioner and directly create a volume using the driver, without requiring any translation.
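For example, a PVC bound directly to a CSI-backed StorageClass might look like this (reusing the hypothetical vsphere-csi class from the sketch above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                    # hypothetical claim name
spec:
  storageClassName: vsphere-csi     # StorageClass with provisioner csi.vsphere.vmware.com
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```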
@zhoudayongdennis please close this issue if your question is answered.
@divyenpatel,
If migration was not enabled during the upgrade, can the in-tree volumes and out-of-tree volumes still co-exist? What is the value of cloud-provider in Kubernetes then, vsphere or external?
I understand that co-existence between in-tree and out-of-tree is possible after migration is enabled, that both the original in-tree volumes and the new CSI volumes will be taken care of by CSI, and that the value of cloud-provider is external.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.