Add cluster autoscaler scale from 0 support
User Story
When using the cluster autoscaler, one of the big advantages is being able to define node groups of multiple sizes that scale automatically as needed. The big downside is that a user must always keep at least 1 node running in each group if scaling from 0 isn't supported. There is an accepted proposal that Cluster API providers can choose to implement that would enable scale from 0 support.
https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/pull/30/files
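For context, the kubemark change linked above follows the Cluster API scale-from-zero proposal: the infrastructure machine template publishes a `status.capacity` field that the autoscaler can read when a node group has no running nodes to inspect. A minimal sketch of the equivalent type change for this provider could look like the following (the `PacketMachineTemplateStatus` name and field placement are assumptions that mirror the kubemark pattern, not an existing API in this repo):

```go
package v1beta1

import (
	corev1 "k8s.io/api/core/v1"
)

// PacketMachineTemplateStatus sketches the status subresource that the
// scale-from-zero proposal expects infrastructure machine templates to
// expose. Illustrative only; not the provider's actual API.
type PacketMachineTemplateStatus struct {
	// Capacity advertises the resources (cpu, memory, ...) that a Machine
	// created from this template would provide, letting the cluster
	// autoscaler simulate scheduling for a node group that currently has
	// zero nodes.
	// +optional
	Capacity corev1.ResourceList `json:"capacity,omitempty"`
}
```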
Detailed Description
Scale from 0 support for the cluster-api provider is not yet implemented in the cluster autoscaler; progress is being tracked in this issue, and an unmerged working version of the cluster autoscaler can be found in this comment. However, it would be good to implement the needed support for scale from 0 in cluster-api-provider-packet so that users can benefit from this feature as soon as it is released.
Along with the proposal, the changes in this PR can be used as a guide on how to implement scale from 0 support.
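As a rough illustration of what that guide translates to on the controller side, the sketch below resolves the capacity to advertise for a given machine size. This is hedged heavily: `planCapacity` and its contents are hypothetical placeholders, and a real implementation would derive the figures from Equinix Metal plan data rather than a hard-coded table.

```go
package controllers

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// planCapacity is a hypothetical lookup table from a plan name to the
// resources a device of that plan provides. A real controller would source
// this from the provider API instead of hard-coding example values.
var planCapacity = map[string]corev1.ResourceList{
	"c3.small.x86": {
		corev1.ResourceCPU:    resource.MustParse("8"),
		corev1.ResourceMemory: resource.MustParse("32Gi"),
	},
}

// capacityForPlan returns the capacity to advertise for a plan, so the
// template reconciler can copy it into status.capacity.
func capacityForPlan(plan string) (corev1.ResourceList, error) {
	capacity, ok := planCapacity[plan]
	if !ok {
		return nil, fmt.Errorf("no capacity information for plan %q", plan)
	}
	return capacity, nil
}
```

The template reconciler would then set `Status.Capacity` from this lookup whenever the machine type named in the template spec changes.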
/kind feature
@cprivitere You might also be interested in this. This, along with kube-vip and metro, is what I have on my list for the distribution I'm working on.
@DavidSpek I have opened a PR for the autoscaler: https://github.com/kubernetes/autoscaler/pull/4840
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
fwiw, this feature has been merged in the autoscaler
@elmiko Awesome, thanks for the heads up.
/lifecycle frozen
/remove-lifecycle rotten