Questions about upgrading cluster-autoscaler from 1.23 to 1.28
Which component are you using?: cluster-autoscaler
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
Describe the solution you'd like.:
Describe any alternative solutions you've considered.:
Additional context.:
I want to upgrade CA from 1.23 to 1.28 to use the parallel drain feature. The Kubernetes version is 1.21. Are there any compatibility issues?
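For reference, the drain parallelism I'm after is controlled by CA startup flags along these lines (flag names come from the CA FAQ; the cloud provider and values below are placeholder assumptions, so treat this as a sketch):

```sh
# Sketch of a cluster-autoscaler 1.28 invocation with drain parallelism
# tuning. Flag names come from the CA FAQ; the cloud provider and the
# values here are placeholder assumptions for illustration.
./cluster-autoscaler \
  --cloud-provider=aws \
  --max-scale-down-parallelism=10 \
  --max-drain-parallelism=5
```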
Hi @duyawen8, there will be a compatibility issue if you use k8s v1.21 with CA 1.28; CA 1.28 expects k8s v1.28. See the Releases section of the README.md for the k8s version corresponding to each CA version.
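To confirm what you're currently running, something like this works (it assumes the stock deployment name and namespace, which may differ in your setup):

```sh
# Control plane version
kubectl version
# CA version, read from the image tag of the running deployment
# (assumes the stock deployment name and namespace)
kubectl -n kube-system get deployment cluster-autoscaler \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```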
/remove-kind feature
/kind support
/area cluster-autoscaler
@duyawen8, if your concern is resolved, can we close this issue?
I know that the CA version and the Kubernetes version have a one-to-one correspondence. Is there any way to use CA 1.28 for parallel eviction without upgrading the cluster? Are there definitely compatibility issues?
IMO there will be compatibility issues: for every CA release we update the corresponding upstream Kubernetes dependencies, so there may be issues with mismatched and deprecated API items. I don't think it's recommended to run a CA version that is newer than your cluster version.
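One concrete example of the kind of skew that can bite a drain-heavy feature: the Eviction API graduated to policy/v1 in Kubernetes 1.22, so a CA built against 1.28 client libraries may issue eviction requests in a form that a 1.21 apiserver does not accept. You can at least compare what your cluster serves against what a newer CA expects:

```sh
# List the policy group versions the control plane serves. A 1.21 cluster
# still relies on policy/v1beta1 for eviction; newer clients assume the
# policy/v1 form introduced in 1.22.
kubectl api-versions | grep policy
```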
cc @gjtempleton @MaciekPytel your thoughts on this?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
> IMO there will be compatibility issues: for every CA release we update the corresponding upstream Kubernetes dependencies, so there may be issues with mismatched and deprecated API items. I don't think it's recommended to run a CA version that is newer than your cluster version.
cc @MaciekPytel @gjtempleton @jackfrancis WDYT?
/remove-lifecycle stale
tl;dr: this particular scenario is not tested by the project, so we can't officially say either way, although the general guidance and the compatibility matrix would point to "not recommended".
If upgrading your cluster is not practical, you could stress test this in a staging environment to tease out the behaviors and whether any API connectivity pathologies are acceptable given your operational requirements.
I think this is about as clear a message as we can give for this particular request: not recommended, but it's true that a k8s release delta falling outside the supported and tested compatibility matrix may still work acceptably for you.
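A minimal version of that staging smoke test might look like this (it assumes the stock deployment and container names; adapt to your manifests):

```sh
# In a STAGING cluster only: pin CA to the version under test.
kubectl -n kube-system set image deployment/cluster-autoscaler \
  cluster-autoscaler=registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
# Then exercise scale-down/drain and watch for API-level failures.
kubectl -n kube-system logs -f deployment/cluster-autoscaler \
  | grep -Ei 'error|failed|forbidden'
```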
/close
@jackfrancis: Closing this issue.
In response to this:
> tl;dr: this particular scenario is not tested by the project, so we can't officially say either way, although the general guidance and the compatibility matrix would point to "not recommended".
> If upgrading your cluster is not practical, you could stress test this in a staging environment to tease out the behaviors and whether any API connectivity pathologies are acceptable given your operational requirements.
> I think this is about as clear a message as we can give for this particular request: not recommended, but it's true that a k8s release delta falling outside the supported and tested compatibility matrix may still work acceptably for you.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.