
Cluster autoscaler helm chart for EKS 1.22


Which component are you using?: helm chart (cluster-autoscaler)

What version of the component are you using?: 1.23.0

Component version: 9.16.2

What k8s version are you using (kubectl version)?:

kubectl version Output
$ kubectl version
1.22

What environment is this in?:

AWS EKS

What did you expect to happen?: You mention in the README (https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) that the CA and the k8s versions should match, for example, CA 1.22 should be installed on k8s 1.22. But I can't find any helm chart version that matches 1.22 (https://artifacthub.io/packages/helm/cluster-autoscaler/cluster-autoscaler/9.18.0).

avnerv avatar May 04 '22 12:05 avnerv

Bumped into the same situation. I worked around it by using the latest chart (9.18.0) and image.tag="v1.22.2". No problems so far.
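
For reference, the workaround in shell looks roughly like this. It's just a sketch, not a tested recipe: the repo URL is the usual kubernetes.github.io/autoscaler one, autoDiscovery.clusterName / awsRegion are the chart's standard AWS values, and the cluster name and region below are placeholders.

```sh
# Use the newer 9.18.0 chart, but pin the cluster-autoscaler image
# to a 1.22.x release that matches the EKS 1.22 control plane.
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update

helm upgrade --install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --version 9.18.0 \
  --set image.tag=v1.22.2 \
  --set autoDiscovery.clusterName=my-eks-cluster \
  --set awsRegion=eu-west-1
```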

marpada avatar May 08 '22 11:05 marpada

Had a similar situation. I did not find a chart version with the default image.tag set to 1.22.x, but did find a cluster-autoscaler image.tag 1.22.3 for EKS 1.22.

Does the latest cluster-autoscaler chart version, e.g. 9.19.2, support lower k8s versions with image.tag=1.22.3/1.21.3, in addition to the default 1.23.1?

kumarpmd avatar Aug 01 '22 13:08 kumarpmd

This has been an issue for a long time, and it's kind of a headache to double-check that the appVersion within the chart matches the version we actually want to deploy.
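
A quick way to do that double-check, assuming the chart repo is already added locally as `autoscaler`, is to print the chart metadata before deploying; `helm show chart` dumps the Chart.yaml, including appVersion:

```sh
# Show the Chart.yaml of a specific chart release and pull out its appVersion.
helm show chart autoscaler/cluster-autoscaler --version 9.19.2 | grep appVersion
```

But that's still a manual step before every upgrade.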

The issue I guess we're all facing is that it's impossible to know beforehand whether the latest chart version contains properties/instructions that are specific to the latest cluster-autoscaler version, making it unusable for cluster-autoscaler versions n-1 / n-2.

I understand the wish to decorrelate Helm chart versions from cluster-autoscaler versions, and I guess at the end of the day it's just about documentation explaining to users the compatibility matrix between app and chart versions.

Either docs, or a different versioning scheme for the charts, where the chart's major version is bumped whenever a new flag/property is added that only applies to the latest version of the app. At least then we would know which chart version we can safely deploy with a given app version.

wdyt?

cebidhem avatar Sep 01 '22 10:09 cebidhem

I am just about to introduce helm here, and am thinking hard about a policy regarding this... And I am more or less set on keeping them always identical. That discussion convinced me: https://github.com/helm/helm/issues/8194#issuecomment-633465800

tl;dr: appVersion was introduced for "I don't care about version, just fetch me the helm chart for the application version XYZ", but it never caught on.

For me, I want to avoid confusion, and flux's helm operator only upgrades on a version change, not an appVersion change (see https://fluxcd.io/flux/components/helm/api/ ff. on reconcileStrategy). After all, IMHO the helm chart is part of the release, so it should have the very same version. An option could be to append a chart-specific revision to the version field, similar to what Deb/RPM versions do (e.g. 1.22.2-1, where the -1 is bumped for chart-only changes).

MartinEmrich avatar Oct 17 '22 07:10 MartinEmrich

> This has been an issue for a long time, and it's kind of a headache to double-check that the appVersion within the chart matches the version we actually want to deploy.
>
> The issue I guess we're all facing is that it's impossible to know beforehand whether the latest chart version contains properties/instructions that are specific to the latest cluster-autoscaler version

Yes! I've always thought this about the cluster-autoscaler and its chart. It's a pain.

I always have to go through the history of Chart.yaml, examine every change, and then copy the chart version that was used for the appVersion we want. And in this case, with 1.22, there isn't a chart version with that appVersion 😢
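
(Slightly less painful than digging through the Chart.yaml history, assuming the repo is added locally: `helm search repo --versions` lists every chart version next to its appVersion, which can then be grepped, though for 1.22 it just confirms there's nothing to find.)

```sh
# List all published chart versions with their appVersions,
# then filter for the app version we actually want.
helm search repo autoscaler/cluster-autoscaler --versions | grep '1\.22'
```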

I guess we'll copy what @marpada suggested.

max-rocket-internet avatar Oct 24 '22 09:10 max-rocket-internet

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 22 '23 10:01 k8s-triage-robot

/remove-lifecycle stale

max-rocket-internet avatar Jan 23 '23 08:01 max-rocket-internet

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 23 '23 09:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 23 '23 09:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jun 22 '23 10:06 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jun 22 '23 10:06 k8s-ci-robot