Automated way to transfer old volumes to new provisioner
Is your feature request related to a problem? Please describe.
Switching to the aws-ebs-csi driver is great; however, we are left with hundreds of volumes that are still handled by the old kubernetes.io/aws-ebs in-tree provisioner. CSI migration redirects all plugin operations from the existing in-tree plugin to ebs.csi.aws.com, but that alone does not give us the benefits of enabling CSI, such as gp3 volumes or snapshots.
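For context, this is roughly the kind of StorageClass the volumes would need to end up under to get those benefits. A minimal sketch; the class name and the omitted parameters (iops, throughput, encryption) are illustrative:

```bash
# Minimal sketch of a gp3 StorageClass backed by the CSI driver.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi            # illustrative name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```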
Describe the solution you'd like in detail
Provide an automated or semi-automated way to transfer old volumes to the new CSI format, so that we can use all the benefits of switching to the ebs-csi-driver.
Describe alternatives you've considered
Right now we could either upgrade the volume type to gp3 in the AWS console and accept the drift between Kubernetes and the actual state of the volumes, or use the manual workaround described in https://aws.amazon.com/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/ (very time consuming).
/kind feature
@czomo It's not exactly convenient, but I believe you should be able to manually create a new PV (and, if necessary, PVC) that explicitly references the old volume, using the same method as static provisioning.
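A minimal sketch of what that statically provisioned PV could look like; the volume ID, size, zone, and names below are placeholders, and the matching PVC just needs the same storageClassName and size:

```bash
# Sketch: adopt an existing EBS volume under the CSI driver via static provisioning.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-pv
spec:
  capacity:
    storage: 100Gi                          # must match the real EBS volume size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp3-csi                 # assumed CSI-backed StorageClass
  csi:
    driver: ebs.csi.aws.com
    fsType: ext4
    volumeHandle: vol-0123456789abcdef0     # the existing EBS volume ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values:
                - us-east-1a                # AZ where the volume lives
EOF
```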
Interesting. We could:
0. Update gp2 to gp3 in the AWS console (question: can we do this in a different order?)
1. Patch the existing PV's reclaim policy to Retain
2. Get the PV/PVC definitions and pipe them through kubectl neat
3. sed the old values to the new ones
4. Delete the old PV/PVC objects
5. kubectl apply -f the new PV/PVC manifests

Something like the sketch below. I am a little worried about downtime, but a few seconds should be acceptable. Any other ideas or improvements?
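An untested sketch of those steps for a single PVC; the names are placeholders, kubectl neat is the krew plugin, and the workload has to be scaled down for the delete/apply window:

```bash
PVC=my-claim NS=default                     # placeholders
PV=$(kubectl -n "$NS" get pvc "$PVC" -o jsonpath='{.spec.volumeName}')

# 1. Protect the underlying EBS volume from deletion.
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2. Save cleaned-up copies of the current definitions.
kubectl get pv "$PV" -o yaml | kubectl neat > pv.yaml
kubectl -n "$NS" get pvc "$PVC" -o yaml | kubectl neat > pvc.yaml

# 3. Edit the files by hand (or with sed): replace the in-tree
#    awsElasticBlockStore source with a csi block (driver: ebs.csi.aws.com,
#    volumeHandle: <EBS volume ID>), point storageClassName at the CSI class,
#    and clear spec.claimRef so the recreated PV can bind to the recreated PVC.

# 4. Recreate the objects (scale the workload down first).
kubectl -n "$NS" delete pvc "$PVC"
kubectl delete pv "$PV"
kubectl apply -f pv.yaml -f pvc.yaml
```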
Yeah, that's the basic idea. Currently, the external driver doesn't reconcile the volume type of already-created volumes at all, so you could do step 0 at any point during the process.
Unfortunately, I think some small downtime will be necessary unless Kubernetes itself adds a migration feature (and you already have it down to the minimal amount). Existing volumes cannot have their StorageClass and/or provisioner updated (those fields are immutable), so you will always end up having to recreate the PV/PVC to migrate.
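For completeness, the out-of-band type change (step 0) can also be done with the AWS CLI instead of the console; the volume ID below is a placeholder:

```bash
# Change the volume type out-of-band; the external provisioner won't reconcile it.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3

# Optionally watch the modification progress.
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
```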
@torredil @gtxu pvmigrate from replicatedhq has something that could help people transfer multiple PVs at once. It would need some enhancements, of course. WDYT?
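An invocation would look roughly like this (flag names are from my reading of the project's README and may have changed, so treat it as a sketch). Note that pvmigrate copies data into newly provisioned volumes in the destination StorageClass rather than re-pointing PVs at the existing EBS volumes:

```bash
# Hypothetical pvmigrate run: copy every PVC from the in-tree gp2 class to a
# CSI-backed class. StorageClass names are placeholders.
pvmigrate --source-sc gp2 --dest-sc gp3-csi
```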
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.