
cluster autoscaler not scaling up the autoscaling group when already downscaled to 0

Open vkkumarswamy opened this issue 3 years ago • 8 comments

This happens when the autoscaling group is scaled down to 0, i.e. the desired capacity is set to 0. If I then start the cluster autoscaler and create a pod that requires a node from this autoscaling group, the scale-up somehow does not happen. I have defined node affinity toward this autoscaling group.

Below is the event log from the pod describe output:

Normal NotTriggerScaleUp 4m15s (x121 over 24m) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) didn't match Pod's node affinity/selector

But it works when I manually set the desired capacity to 1 (while the cluster autoscaler is already running), set the desired capacity back to 0, and then make a new pod deployment. It looks like the cluster autoscaler does not pick up the node details associated with the autoscaling group at startup when the desired capacity is set to 0.

vkkumarswamy avatar May 17 '22 04:05 vkkumarswamy

I'm seeing this too and am a bit puzzled as to the solution.

WebSpider avatar May 20 '22 07:05 WebSpider

Any updates on this? Or is there any workaround you would suggest?

vkkumarswamy avatar May 26 '22 08:05 vkkumarswamy

I'm experiencing the same thing. Did you ever find an answer to this?

It can scale up from 0 as expected only after I've scaled it up at least once manually while cluster autoscaler is running.

So I assume it caches node info somewhere and relates it to the ASG ("Ooooh, this node has a GPU! OK"). However, I HAVE the node-template label and resource tags I'm supposed to have for it to scale from 0 on its own. And yet I still have to scale up once manually before it can scale up from 0 itself.

ZTGallagher avatar Jul 07 '22 15:07 ZTGallagher

I am facing the same issue, and it looks like random behavior: sometimes it works, and sometimes it doesn't until I scale the node group up once manually.

rshad avatar Jul 27 '22 15:07 rshad

We are also experiencing the same issue, and I dug into it a little:

  1. We have a NodeGroup that starts with 0 nodes and a taint, and we deploy the cluster-autoscaler.
  2. We deploy a pod targeting that NodeGroup with a toleration.
  3. We found this log from cluster-autoscaler:
I0803 11:24:40.153592       1 scale_up.go:300] Pod ${pod} can't be scheduled on ${desired_ASG}, predicate checking error: node(s) didn't match Pod's node affinity/selector; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity/selector; debugInfo=

  3.1. So it did try to run the calculation against the NodeGroup that should trigger the scale-up.
  4. From the code, it seems to run CheckPredicates, and I assume the clusterSnapshot doesn't have enough info for the ASG when it starts from 0 🤔?

DesmondH0 avatar Aug 03 '22 14:08 DesmondH0

The official documentation already covers this. Scaling up from capacity 0 is not possible by default in the Cluster Autoscaler; to make it work, the documentation indicates that we need to manually add a tag, carrying the node-group label used by the GitLab Runner jobs' node selectors, to the corresponding autoscaling group. It is currently not possible via CDK to get the node group's autoscaling group, so the tag can only be added manually.

The tag for the label gitlab-runner-type/heavy is as follows:

key: k8s.io/cluster-autoscaler/node-template/label/gitlab-runner-type
value: heavy
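As a sketch, the tag key can be derived mechanically from the label. The k8s.io/cluster-autoscaler/node-template/label/ prefix is the documented convention; the helper function and AWS tag shape below are illustrative, not part of any tool:

```python
# Sketch: build the ASG tag that advertises a node label to cluster-autoscaler
# so it can plan a scale-up from 0. The prefix is the documented
# "node-template" convention; the helper itself is illustrative.

NODE_TEMPLATE_LABEL_PREFIX = "k8s.io/cluster-autoscaler/node-template/label/"

def node_template_label_tag(label_key: str, label_value: str) -> dict:
    """Return an ASG tag (AWS Key/Value shape) advertising a node label."""
    return {
        "Key": NODE_TEMPLATE_LABEL_PREFIX + label_key,
        "Value": label_value,
        # Propagation is not needed by cluster-autoscaler itself, but it
        # keeps the tag visible on launched instances too.
        "PropagateAtLaunch": True,
    }

tag = node_template_label_tag("gitlab-runner-type", "heavy")
print(tag["Key"])
print(tag["Value"])
```

The resulting dict can be passed to the ASG tagging API of your choice; the point is only that the tag key is the label key with the node-template prefix prepended.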

I tested it and it works.

rshad avatar Aug 04 '22 07:08 rshad

@rshad

I appreciate the response and recognize you're right. In my case, however, we are tagging the ASGs and they're still not coming up properly.

0/1 nodes are available: 1 Insufficient nvidia.com/gpu

The ASGs are, however, tagged with k8s.io/cluster-autoscaler/node-template/resources/nvidia.com/gpu=1

I'm not sure what, then, the problem can be.

ZTGallagher avatar Aug 04 '22 08:08 ZTGallagher

@ZTGallagher

What is the label in your case? I see that you want to use a non-label tag as a label for the node selector. As they indicate, the tag should be a label, not a resource. So your tag should be:

k8s.io/cluster-autoscaler/node-template/label/nvidia-gpu=1

And the label should be as:

nvidia-gpu: 1
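The reason the label tag matters is that a nodeSelector is matched against node labels, not node resources. A minimal sketch of that matching rule (the dicts below are illustrative):

```python
def selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """A nodeSelector matches only if every requested key/value is on the node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Labels cluster-autoscaler would infer from a node-template/label/... tag:
inferred_labels = {"nvidia-gpu": "1"}

print(selector_matches(inferred_labels, {"nvidia-gpu": "1"}))  # label tag present
print(selector_matches({}, {"nvidia-gpu": "1"}))               # only a resources tag: no labels inferred
```

With only the resources/... tag set, the autoscaler's simulated node carries the GPU capacity but no matching label, so a label-based selector can never match it.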

rshad avatar Aug 04 '22 10:08 rshad

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 02 '22 11:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Dec 02 '22 11:12 k8s-triage-robot

/remove-lifecycle rotten

WebSpider avatar Dec 03 '22 07:12 WebSpider

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 03 '23 07:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 02 '23 08:04 k8s-triage-robot

/remove-lifecycle rotten

WebSpider avatar Apr 08 '23 17:04 WebSpider

Experiencing the same issue. From the log below, I think cluster-autoscaler remembers in memory all taints on the last node of the ASG before it scales down to zero, even taints that were added automatically by another service and do not exist in the ASG's tags. So if a new pod does not have a toleration for the extra taint added by that other service, cluster-autoscaler concludes the pod cannot tolerate the node group.

I0530 10:59:16.283578       1 scale_up.go:300] Pod overprovision-gp-c-type-arm64-spot-68d5cd4ccc-2rxz2 can't be scheduled on eks-general-purpose-worker-arm64-spot-c-type-xlarge, predicate checking error: node(s) had untolerated taint {aws-node-termination-handler/rebalance-recommendation: rebalance-recommendation-event-39396261316437352d336166632d3337}; predicateName=TaintToleration

The workaround is either to manually scale the ASG up from 0 to 1 to refresh the ASG's taints in memory, or to restart the cluster-autoscaler pods to refresh everything.
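For reference, a sketch of the two workarounds expressed as shell commands (the ASG name, namespace, and deployment name below are placeholders for your own setup):

```python
# Sketch: compose the two workaround commands. The ASG name and the
# cluster-autoscaler deployment/namespace are placeholders.

def bump_asg_cmd(asg_name: str, capacity: int) -> str:
    """AWS CLI command to set an ASG's desired capacity."""
    return (
        "aws autoscaling set-desired-capacity "
        f"--auto-scaling-group-name {asg_name} --desired-capacity {capacity}"
    )

def restart_ca_cmd(namespace: str = "kube-system") -> str:
    """kubectl command to restart the cluster-autoscaler deployment."""
    return f"kubectl -n {namespace} rollout restart deployment/cluster-autoscaler"

print(bump_asg_cmd("my-nodegroup-asg", 1))  # workaround 1: scale 0 -> 1 by hand
print(restart_ca_cmd())                     # workaround 2: drop the cached state
```

Either way the effect is the same: the autoscaler's in-memory picture of the node group's taints gets rebuilt.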

dogzzdogzz avatar May 30 '23 11:05 dogzzdogzz

Any update on this? I am facing the same issue.

geosigno avatar Jul 28 '23 14:07 geosigno

Same issue here: we taint nodes when draining, before shutting them down. New nodes can't be started then because cluster-autoscaler thinks all of them have this taint.

der-eismann avatar Aug 10 '23 09:08 der-eismann

Same issue. The autoscaler doesn't work if we set the node count to 0, or after the nodes scale down to 0.

matcasx avatar Sep 04 '23 07:09 matcasx

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 27 '24 12:01 k8s-triage-robot

/remove-lifecycle stale

der-eismann avatar Jan 27 '24 12:01 der-eismann