cluster-api-provider-openstack

remove v1alpha3/4 ?

Open jichenjc opened this issue 2 years ago • 3 comments

/kind feature

Describe the solution you'd like

We stated that v1alpha3/4 target the CAPI v1alphaX versions, and CAPI is already at beta. Should we remove the early versions from the main branch so that we can avoid maintaining unnecessarily complex code, e.g. the conversions?

Anything else you would like to add:

jichenjc avatar Aug 17 '22 00:08 jichenjc

I don't think that we should remove old apiVersions. Kubernetes records all old apiVersions in the CRDs and starts complaining if an old version is removed while it is still listed there. Removing a version requires a manual migration task.

See https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#previous-storage-versions and this CAPI Slack thread: https://kubernetes.slack.com/archives/C8TSNPY4T/p1655898088475859?thread_ts=1655898054.621999&cid=C8TSNPY4T
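To see what Kubernetes is tracking here, the stored versions of a CRD can be inspected directly. A minimal sketch against a live cluster, using CAPO's OpenStackCluster CRD as an example:

```shell
# Show which API versions have ever been used as the storage version
# for the OpenStackCluster CRD. A version cannot safely be dropped from
# the code base while it still appears in this list.
kubectl get crd openstackclusters.infrastructure.cluster.x-k8s.io \
  -o jsonpath='{.status.storedVersions}'
```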

tobiasgiese avatar Aug 17 '22 06:08 tobiasgiese

OK, I will read more in the thread you provided, thanks :)

My general understanding is that beta/stable versions need to be kept, but why do we need to keep alpha versions? Given that we might have a v1alpha7 (likely, or even 8 or 9), I think maintaining those would be a burden. We already removed v1alpha1/2, didn't we? Anyway, I will read more, thanks~

jichenjc avatar Aug 17 '22 06:08 jichenjc

I reviewed the links above, but there seems to be no clear indication that we can't remove the v1alphaX versions.

jichenjc avatar Aug 19 '22 01:08 jichenjc

@tobiasgiese is correct, this is more involved than just removing from the code base. If (when?) we remove older API versions we will need to first make sure that these API versions are no longer listed under status.storedVersions and then ensure that all users also "refresh" them so the stored version is actually updated.
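As a sketch of what that manual task could look like for one CAPO CRD (the namespace is a placeholder, and v1alpha6 is assumed to be the current storage version):

```shell
# 1) Rewrite the existing objects so they are persisted at the current
#    storage version (a read-write round trip triggers conversion).
kubectl get openstackclusters -n my-namespace -o json | kubectl replace -f -

# 2) Once no objects remain stored at the old version, drop it from
#    status.storedVersions (needs kubectl >= 1.24 for --subresource).
kubectl patch crd openstackclusters.infrastructure.cluster.x-k8s.io \
  --subresource=status --type=merge \
  -p '{"status":{"storedVersions":["v1alpha6"]}}'
```

Step 1 is what "refresh" means in practice: the API server reads each object at its stored version, converts it, and writes it back at the current storage version.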

Cert-manager has successfully managed to do this, so they can serve as a good example of how to do it. They did end up with a special CLI command for it, though, which is not so nice IMO.

Since all other providers will have the same issue at some point, I would suggest trying to solve this at the CAPI level. It would be very natural to include something like this in clusterctl. We could already remove the older APIs from the stored versions in the CRDs though if we want.

lentzi90 avatar Oct 18 '22 12:10 lentzi90

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 16 '23 13:01 k8s-triage-robot

CAPI is now starting to plan for removal of older versions: https://github.com/kubernetes-sigs/cluster-api/issues/8038 Good to follow how they plan to do it!

lentzi90 avatar Feb 01 '23 13:02 lentzi90

Yep, let's get on this!

mdbooth avatar Feb 01 '23 15:02 mdbooth

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 03 '23 15:03 k8s-triage-robot

/remove-lifecycle rotten

lentzi90 avatar Mar 06 '23 06:03 lentzi90

I think we're at the stage where we can just go ahead and do this. clusterctl now upgrades storage versions when upgrading, so this should be safe. I believe anybody running v0.7 will already have been upgraded to v1alpha6.

That said, I haven't tested this, I haven't read the code, and I'm not 100% confident of my assertions here. It would be good to gain this confidence before doing the removal.

@tobiasgiese You expressed concerns before. Any thoughts?

mdbooth avatar Apr 05 '23 15:04 mdbooth

That said, I haven't tested this, I haven't read the code, and I'm not 100% confident of my assertions here. It would be good to gain this confidence before doing the removal.

I think it's unlikely that users stay on a version that old; they should consider upgrading. Also, I'm not sure whether CAPI itself has a supported-versions policy (e.g. N-2 or N-3?).

jichenjc avatar Apr 06 '23 02:04 jichenjc