CA: all processing is blocked when fixing one nodepool's size fails
Which component are you using?: cluster-autoscaler
What version of the component are you using?:
Component version: 1.26.4
What k8s version are you using (kubectl version)?: 1.26.4
What environment is this in?: Linux amd64
What did you expect to happen?: If CA fails to fix a nodepool's size, it should not block scale-up for all other nodepools.
What happened instead?: Scale-up for all nodepools is blocked when CA fails to fix one nodepool's size.
From
https://github.com/kubernetes/autoscaler/blob/57374884244ed178c7453e46a583926698459d6b/cluster-autoscaler/core/static_autoscaler.go#L695-L721
and
https://github.com/kubernetes/autoscaler/blob/57374884244ed178c7453e46a583926698459d6b/cluster-autoscaler/core/static_autoscaler.go#L446-L450
the entire CA processing loop is blocked if fixing one nodepool's size fails.
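To make the failure mode concrete, here is a minimal, self-contained Go sketch of the pattern described above. The types and function names (NodeGroup, fixNodeGroupSizes, runOnce) are hypothetical stand-ins, not the actual static_autoscaler.go code: the first nodepool whose size cannot be fixed makes the whole iteration return early, so scale-up never runs for the healthy nodepools.

```go
package main

import "fmt"

// NodeGroup is a hypothetical stand-in for cloudprovider.NodeGroup; only the
// parts needed to illustrate the control flow are included.
type NodeGroup struct {
	ID    string
	Fails bool // simulate a cloud-provider error while fixing the size
}

func (ng *NodeGroup) DecreaseTargetSize(delta int) error {
	if ng.Fails {
		return fmt.Errorf("cloud provider error for %s", ng.ID)
	}
	return nil
}

// fixNodeGroupSizes mirrors the current pattern: the first nodepool that
// cannot be fixed makes the whole call return an error.
func fixNodeGroupSizes(groups []*NodeGroup) error {
	for _, ng := range groups {
		if err := ng.DecreaseTargetSize(-1); err != nil {
			return fmt.Errorf("failed to decrease %s: %w", ng.ID, err)
		}
	}
	return nil
}

// runOnce stands in for the main autoscaler iteration.
func runOnce(groups []*NodeGroup) error {
	if err := fixNodeGroupSizes(groups); err != nil {
		// The whole iteration aborts here, so scale-up for the healthy
		// nodepools below never runs.
		return err
	}
	fmt.Println("scale-up logic for all nodepools would run here")
	return nil
}

func main() {
	groups := []*NodeGroup{{ID: "broken-pool", Fails: true}, {ID: "healthy-pool"}}
	if err := runOnce(groups); err != nil {
		fmt.Println("RunOnce aborted:", err)
	}
}
```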
I think that when fixing a nodepool's size fails, that nodepool should be put into backoff instead of blocking the whole CA loop (see the sketch below).
I would like to create a PR to fix this issue.
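A minimal sketch of the proposed behavior, again with hypothetical names; a real fix would presumably go through the autoscaler's existing backoff handling rather than a plain map. The failing nodepool is recorded as backed off and the loop continues, so the rest of the iteration, including scale-up, still runs.

```go
package main

import (
	"fmt"
	"time"
)

// NodeGroup and the backoff map are hypothetical stand-ins, not the real
// cluster-autoscaler or clusterstate types.
type NodeGroup struct {
	ID    string
	Fails bool
}

func (ng *NodeGroup) DecreaseTargetSize(delta int) error {
	if ng.Fails {
		return fmt.Errorf("cloud provider error for %s", ng.ID)
	}
	return nil
}

// backoffUntil records, per nodepool, when it may be retried.
type backoffUntil map[string]time.Time

// fixNodeGroupSizes backs off only the nodepools that failed and keeps going,
// so one bad nodepool no longer blocks the rest of the iteration.
func fixNodeGroupSizes(groups []*NodeGroup, backoff backoffUntil, now time.Time) {
	for _, ng := range groups {
		if until, ok := backoff[ng.ID]; ok && now.Before(until) {
			continue // still backed off from an earlier failure
		}
		if err := ng.DecreaseTargetSize(-1); err != nil {
			fmt.Printf("failed to fix %s, backing it off: %v\n", ng.ID, err)
			backoff[ng.ID] = now.Add(5 * time.Minute)
			continue
		}
		fmt.Printf("fixed size of %s\n", ng.ID)
	}
}

func main() {
	groups := []*NodeGroup{{ID: "broken-pool", Fails: true}, {ID: "healthy-pool"}}
	backoff := backoffUntil{}
	fixNodeGroupSizes(groups, backoff, time.Now())
	fmt.Println("scale-up for healthy nodepools still runs in the same iteration")
}
```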
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten