
Add cluster autoscaler scale from 0 support

Open davidspek opened this issue 3 years ago • 8 comments

User Story

When using the cluster autoscaler, one of the big advantages is being able to define node groups of multiple sizes that scale automatically as needed. The big downside is that a user must always keep at least 1 node running in each group if scaling from 0 isn't supported. There is an accepted proposal that Cluster API providers can choose to implement which would enable scale-from-0 support.

https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/pull/30/files

Detailed Description

Scale-from-0 support for the cluster-api provider is not yet implemented in the cluster autoscaler; progress is being tracked in this issue, and an unmerged working version of the cluster autoscaler can be found in this comment. However, it would be good to implement the needed support for scale from 0 in cluster-api-provider-packet so that users can benefit from this feature as soon as it is released.

Along with the proposal, the changes in this PR can be used as a guide on how to implement scale from 0 support.
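
For illustration, here is a minimal sketch of what the provider-side piece could look like, assuming the contract from the proposal (the infrastructure machine template reports the expected node capacity in its status so the autoscaler can build a scheduling template for a node group that currently has zero machines). The type name, plan string, and resource values below are my own illustrative assumptions, not the actual cluster-api-provider-packet API:

```go
// Sketch only: illustrative types and values, not the real cluster-api-provider-packet API.
package v1beta1

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// PacketMachineTemplateStatus would gain a Capacity field so the cluster
// autoscaler can infer node resources for a node group scaled to zero,
// as described in the scale-from-0 proposal for Cluster API providers.
type PacketMachineTemplateStatus struct {
	// Capacity lists the resources (e.g. cpu, memory) a node created from
	// this template is expected to have.
	// +optional
	Capacity corev1.ResourceList `json:"capacity,omitempty"`
}

// exampleCapacityForPlan shows how capacity might be derived from a machine
// plan; the plan name and values below are assumptions for illustration.
func exampleCapacityForPlan(plan string) corev1.ResourceList {
	if plan == "c3.small.x86" { // hypothetical plan-to-capacity mapping
		return corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("8"),
			corev1.ResourceMemory: resource.MustParse("32Gi"),
		}
	}
	return nil
}
```

Until provider-side support like this lands, my understanding is that users can work around it by setting the autoscaler's capacity annotations (e.g. capacity.cluster-autoscaler.kubernetes.io/cpu and capacity.cluster-autoscaler.kubernetes.io/memory) directly on the MachineDeployment, though having the provider populate the status field would remove that manual step.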

/kind feature

davidspek · Apr 21 '22 18:04

@cprivitere You might also be interested in this. This, along with kube-vip and metro, is what I have on my list for the distribution I'm working on.

davidspek · Apr 21 '22 18:04

@DavidSpek I have opened a PR for the autoscaler: https://github.com/kubernetes/autoscaler/pull/4840

elmiko · Apr 29 '22 18:04

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jul 28 '22 19:07

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Aug 27 '22 20:08

FWIW, this feature has now merged in the autoscaler.

elmiko · Aug 29 '22 13:08

@elmiko Awesome, thanks for the heads up.

davidspek · Aug 30 '22 04:08

/lifecycle frozen

cprivitere · Aug 30 '22 14:08

/remove-lifecycle rotten

cprivitere · Aug 30 '22 14:08