
Questions about upgrading cluster-autoscaler from 1.23 to 1.28

Open duyawen8 opened this issue 1 year ago • 10 comments

Which component are you using?: cluster-autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

Describe the solution you'd like.:

Describe any alternative solutions you've considered.:

Additional context.:

I want to upgrade CA from 1.23 to 1.28 to use the parallel drain feature. The kubernetes version is 1.21. Is there any compatibility issue?
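
For reference, a minimal sketch of how that feature is toggled, assuming the common Deployment-based install; the deployment name and namespace are illustrative, and the `--parallel-drain` flag's availability and default depend on your CA version:

```sh
# A minimal sketch, assuming the common Deployment-based install.
# Parallel drain in CA >= 1.27 is on by default and is controlled by the
# --parallel-drain flag (deprecated in later releases); the deployment
# name and namespace below are illustrative.
kubectl -n kube-system patch deployment cluster-autoscaler --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--parallel-drain=true"}]'
```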

duyawen8 avatar Jun 25 '24 01:06 duyawen8

Hi @duyawen8, there will be compatibility issues if you use k8s v1.21 with CA 1.28. For CA 1.28, you have to use k8s v1.28. See the Releases section of the README.md for the k8s version that corresponds to each CA version.
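
If you want to double-check which versions are in play, a quick sketch (the deployment name and namespace are the common defaults and may differ in your cluster):

```sh
# Compare the control plane version with the CA image tag; the deployment
# name and namespace are the common defaults and may differ in your setup.
kubectl version        # server version, e.g. v1.21.x
kubectl -n kube-system get deployment cluster-autoscaler \
  -o jsonpath='{.spec.template.spec.containers[0].image}'   # e.g. ...:v1.23.1
```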

Shubham82 avatar Jun 25 '24 08:06 Shubham82

/remove-kind feature
/kind support

Shubham82 avatar Jun 25 '24 08:06 Shubham82

/area cluster-autoscaler

adrianmoisey avatar Jun 25 '24 11:06 adrianmoisey

@duyawen8, if your concern is resolved so can we close this issue?

Shubham82 avatar Jun 28 '24 09:06 Shubham82

I know that the CA version and the kubernetes version have a one-to-one correspondence. Is there any way to use CA 1.28 for parallel eviction without upgrading the cluster? Are there definitely compatibility issues?

duyawen8 avatar Jul 01 '24 02:07 duyawen8

IMO there will be compatibility issues: for every CA release we update the corresponding upstream (k8s) dependencies, so there may be problems with mismatched and deprecated API items. I don't think it's recommended to use a CA version that is newer than your cluster version.
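
To see this concretely, you can inspect which k8s library versions a given CA release was built against; a sketch, assuming the repo's usual release-tag naming:

```sh
# Each CA release pins the k8s.io/* libraries to the matching minor version;
# the tag below follows the repo's release-tag naming.
git clone https://github.com/kubernetes/autoscaler
cd autoscaler/cluster-autoscaler
git checkout cluster-autoscaler-1.28.0
grep -E 'k8s.io/(api|apimachinery|client-go) ' go.mod
```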

Shubham82 avatar Jul 04 '24 11:07 Shubham82

cc @gjtempleton @MaciekPytel your thoughts on this?

Shubham82 avatar Jul 04 '24 11:07 Shubham82

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 02 '24 11:10 k8s-triage-robot

IMO there will be compatibility issues: for every CA release we update the corresponding upstream (k8s) dependencies, so there may be problems with mismatched and deprecated API items. I don't think it's recommended to use a CA version that is newer than your cluster version.

cc @MaciekPytel @gjtempleton @jackfrancis WDYT?

Shubham82 avatar Oct 10 '24 06:10 Shubham82

/remove-lifecycle stale

Shubham82 avatar Oct 10 '24 06:10 Shubham82

tl;dr: this particular scenario is not tested by the project, so we can't officially say either way, although the general guidance and compatibility matrix would suggest "not recommended".

If upgrading your cluster is not a practical possibility, you could stress test this in a staging environment to tease out the behaviors and determine whether any API connectivity pathologies prevent it from meeting your operational requirements.
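
A sketch of what such a staging stress test could look like, with all resource names hypothetical:

```sh
# Force scale-down conditions and watch the CA for drain/eviction or API
# errors; every resource name here is hypothetical.
kubectl -n staging scale deployment test-workload --replicas=1
kubectl -n kube-system logs deploy/cluster-autoscaler -f | grep -Ei 'drain|evict|error'
kubectl get events -A -w | grep -i scaledown   # event reasons vary by CA version
```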

I think this is about as clear a message as we can give about this particular request: not recommended, but it's true that a k8s release delta that falls outside the supported and tested compatibility matrix may still work acceptably for your cluster.

/close

jackfrancis avatar Dec 09 '24 17:12 jackfrancis

@jackfrancis: Closing this issue.

In response to this:

tl;dr: this particular scenario is not tested by the project, so we can't officially say either way, although the general guidance and compatibility matrix would suggest "not recommended".

If upgrading your cluster is not a practical possibility, you could stress test this in a staging environment to tease out the behaviors and determine whether any API connectivity pathologies prevent it from meeting your operational requirements.

I think this is about as clear a message as we can give about this particular request: not recommended, but it's true that a k8s release delta that falls outside the supported and tested compatibility matrix may still work acceptably for your cluster.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Dec 09 '24 17:12 k8s-ci-robot