Cluster Autoscaling for Hetzner
/kind feature
1. Describe IN DETAIL the feature/behavior/change you would like to see. According to the getting started guide (https://kops.sigs.k8s.io/getting_started/hetzner/), the cluster autoscaler is not yet available for Hetzner. I am running two clusters on Hetzner as a development environment, and we are soon going to add two more for production use. So my question is: what would be needed to add support for the autoscaler? And is this something I could help with?
As far as I can tell, the autoscaler itself already supports Hetzner, according to its Hetzner cloud provider README: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/hetzner/README.md
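For reference, the relevant parts of the upstream Hetzner example, as I read that README, look roughly like this. This is only a sketch; the image tag, secret name, and node pool values are placeholders, not anything kops produces:

```yaml
# Sketch of the upstream Hetzner cluster-autoscaler deployment (values are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - name: cluster-autoscaler
          image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.30.0  # placeholder tag
          command:
            - ./cluster-autoscaler
            - --cloud-provider=hetzner
            # min:max:server-type:location:name, one flag per node pool
            - --nodes=1:5:cpx31:fsn1:nodes-fsn1
          env:
            # Hetzner Cloud API token, read from a secret (name/key are placeholders)
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  name: hcloud
                  key: token
            # base64-encoded cloud-init that newly created servers boot with
            - name: HCLOUD_CLOUD_INIT
              valueFrom:
                secretKeyRef:
                  name: hcloud
                  key: cloud-init
```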
@lobermann It should not be that hard to get CAS to work on Hetzner, now that it supports multiple profiles. If you, or anyone else, would like to help with this, trying to install it manually and sharing the results would be much appreciated. I assume it would be a process similar to what CloudPilot AI did with Karpenter (great blog post BTW): https://www.cloudpilot.ai/en/blog/how-to-deploy-karpenter-on-k8s-with-kops/
@hakman let me give you an update on this.
I spent some time trying to get this running.
First I tried the kops cluster configuration, but the issue there is that the environment variable for the Hetzner Cloud token is not exposed correctly to the autoscaler pods. The configuration kops passes on to the autoscaler seems correct, though.
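For anyone who wants to reproduce the first attempt, enabling the kOps-managed addon looked roughly like this (a sketch; the cluster name, machine type, and instance group values are illustrative):

```yaml
# Fragment of the kops Cluster spec: enable the managed cluster-autoscaler addon.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: dev.example.com          # placeholder cluster name
spec:
  clusterAutoscaler:
    enabled: true
---
# Instance group whose minSize/maxSize the autoscaler should scale between.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes-fsn1
  labels:
    kops.k8s.io/cluster: dev.example.com
spec:
  role: Node
  machineType: cpx31             # Hetzner server type, illustrative
  minSize: 1
  maxSize: 5
  subnets:
    - fsn1
```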
Then I tried it manually, based on the Karpenter docs and the autoscaler example https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/hetzner/examples/cluster-autoscaler-run-on-master.yaml. There I ran into the issue that the new nodes do not connect to the cluster, because of an incorrect HC_CLOUD_INIT. As far as I could research, kops generates the cloud-init for each node based, among other things, on which datacenter that node is located in.
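If someone wants to dig into this further, the wiring I would expect for a manual install looks roughly like the snippet below, assuming the HCLOUD_CLOUD_INIT variable from the upstream example; the secret name is hypothetical, and the value would need to be the base64-encoded user data kops generates for the matching instance group and datacenter:

```yaml
# Sketch: feeding the node cloud-init to a manually installed autoscaler.
# Secret name/key are hypothetical; the value must match what kops would have
# generated for this instance group, otherwise new servers never join the cluster.
env:
  - name: HCLOUD_CLOUD_INIT
    valueFrom:
      secretKeyRef:
        name: hcloud-cloud-init-nodes-fsn1   # one secret per node pool / location
        key: cloud-init
```

In other words, a single static cloud-init blob is probably not enough; each node pool or location would likely need its own, which is where proper kops integration would have to come in.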
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale