
Feature Request: Allow individual values of scale-down-utilization-threshold for CPU and memory

Open frittentheke opened this issue 6 years ago • 35 comments

When adjusting scale-down limits, it's far less risky to go full-on Mad Max with CPU as a resource than it is with memory: when you run out of CPU, everything just gets slower; when you run out of memory, things begin to crash.

Currently the cluster-autoscaler does not allow individual values for CPU and memory; scale-down-utilization-threshold always applies to both resources, using whichever one has the higher utilization to determine whether a node can be scaled down. I suggest allowing the two resources to be configured independently, e.g. permitting a CPU headroom of only 10% while still only scaling down when more than 50% of memory is left free.

This would not be a breaking change; nobody would be forced to move away from a single value for both. I simply believe some workloads differ greatly in the headroom they require for CPU versus memory, and a more flexible configuration would make it possible to run those clusters more efficiently with regard to scaling.
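To make the difference concrete, here is a minimal Go sketch (illustrative only, with invented types and threshold values, not the autoscaler's actual code) contrasting a single shared threshold with the proposed per-resource thresholds:

```go
package main

import "fmt"

// nodeUtilization holds utilization fractions (0.0–1.0) for one node.
// The type and values are invented for illustration.
type nodeUtilization struct {
	cpu    float64
	memory float64
}

// canScaleDownSingle sketches today's behaviour: one threshold applies
// to both resources, so the resource with the HIGHER utilization decides.
func canScaleDownSingle(u nodeUtilization, threshold float64) bool {
	max := u.cpu
	if u.memory > max {
		max = u.memory
	}
	return max < threshold
}

// canScaleDownPerResource sketches the proposal: each resource gets its
// own threshold, and both must be satisfied before the node is a candidate.
func canScaleDownPerResource(u nodeUtilization, cpuThreshold, memThreshold float64) bool {
	return u.cpu < cpuThreshold && u.memory < memThreshold
}

func main() {
	// A node that is CPU-busy but memory-light.
	n := nodeUtilization{cpu: 0.7, memory: 0.3}

	// With a single shared threshold of 0.5, the 70% CPU blocks scale-down.
	fmt.Println(canScaleDownSingle(n, 0.5)) // false

	// With per-resource thresholds (only 10% CPU headroom required, but
	// 50% memory headroom required), the node is a scale-down candidate.
	fmt.Println(canScaleDownPerResource(n, 0.9, 0.5)) // true
}
```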

frittentheke avatar Oct 15 '18 09:10 frittentheke

Sounds reasonable to me, with a caveat that we already have too many parameters which are hard to tweak. @MaciekPytel, WDYT?

aleksandra-malinowska avatar Oct 15 '18 10:10 aleksandra-malinowska

Sounds reasonable to me.

If you are preparing a PR, please keep the old configuration parameter; it would serve as the default value for the CPU/memory-specific ones.
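That defaulting could be sketched as follows (a hypothetical helper, not actual autoscaler code; the per-resource parameters are assumed to be optional overrides):

```go
package main

import "fmt"

// resolveThresholds sketches the requested backward-compatible defaulting:
// the existing scale-down-utilization-threshold value seeds both resources,
// and the hypothetical per-resource overrides (nil when unset) win only
// when explicitly provided.
func resolveThresholds(legacy float64, cpuOverride, memOverride *float64) (cpu, mem float64) {
	cpu, mem = legacy, legacy
	if cpuOverride != nil {
		cpu = *cpuOverride
	}
	if memOverride != nil {
		mem = *memOverride
	}
	return cpu, mem
}

func main() {
	// Only the legacy flag set: both resources inherit 0.5.
	cpu, mem := resolveThresholds(0.5, nil, nil)
	fmt.Println(cpu, mem) // 0.5 0.5

	// A CPU-specific override set: memory keeps the legacy default.
	cpuSpecific := 0.9
	cpu, mem = resolveThresholds(0.5, &cpuSpecific, nil)
	fmt.Println(cpu, mem) // 0.9 0.5
}
```

Existing deployments that set only the old flag would see no behaviour change.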

losipiuk avatar Oct 16 '18 09:10 losipiuk

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jan 14 '19 10:01 fejta-bot

/remove-lifecycle stale

frittentheke avatar Jan 17 '19 08:01 frittentheke


@aleksandra-malinowska may I kindly ask whether you see a chance of this feature being added? If not by yourself, would you accept a PR?

frittentheke avatar Apr 19 '20 19:04 frittentheke


This is partly implemented by https://github.com/kubernetes/autoscaler/pull/3789. What will remain after it is merged is adding integration for cloud providers.

MaciekPytel avatar Jan 18 '21 10:01 MaciekPytel


The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 19 '21 05:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 18 '21 06:11 k8s-triage-robot

/remove-lifecycle rotten

frittentheke avatar Nov 18 '21 06:11 frittentheke
