
Kubernetes cluster autoscaler with Terraform

Open Brightside56 opened this issue 3 years ago • 3 comments

Hello.

k0sctl is great and I have been using it for my projects for some time with success. Now I would like to extend my clusters to support autoscaling. I would try to use something like this for Hetzner Cloud - https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/hetzner/README.md - with some k0s cloud-init customizations.

However, my cluster is bootstrapped with Terraform and I would like to keep the Terraform state accurate and up to date. Does anyone have recommendations/ideas/pointers on how to use k0sctl/k0s in conjunction with Terraform and the Kubernetes cluster autoscaler?

Brightside56 avatar Sep 23 '21 23:09 Brightside56

Cluster autoscaler can work with a k0sctl-provisioned cluster. I see two main ways of provisioning nodes:

  1. Prebuilt Packer images which will be used to spin up new workers
  2. Base images like CentOS 8 and cloud-init provisioning with k0s

I've managed to get the cluster autoscaler working with Hetzner (the process with AWS would be pretty much the same), but at the same time it's quite tricky: I still need to manage the k0s configuration separately from the k0sctl configuration. It would be great to have a way to export a k0s template for joining a worker into an existing cluster, based on the existing k0sctl configuration.

For example, something like k0sctl template -c k0sctl.yaml --worker
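For reference, the k0s configuration I have to keep in sync by hand lives under spec.k0s.config in k0sctl.yaml. A minimal sketch of that part of the file (the name, version and values here are placeholders, not from a real cluster):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster                    # placeholder name
spec:
  k0s:
    version: 1.21.3+k0s.0             # placeholder version
    config:                           # embedded k0s ClusterConfig
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: ClusterConfig
      spec:
        network:
          provider: calico            # example; whatever the cluster actually uses
```

The idea of the template subcommand would be to render the worker-relevant parts from this same file instead of duplicating them by hand.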

Brightside56 avatar Oct 09 '21 12:10 Brightside56

Currently k0sctl does not care if there are nodes in the cluster that do not exist in the config.

Could you have a k0sctl config that has the controllers and only insert the config/IP for the new worker when adding a new node?

It will make upgrades difficult though as you will have to build a full cluster config for that.

I'm not sure I understand what that k0s template would contain? Adding workers shouldn't affect k0s config?
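For what it's worth, a controllers-only k0sctl.yaml along those lines could look roughly like this (addresses, SSH details and the version are placeholders):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster            # placeholder name
spec:
  hosts:
    # Only the long-lived controllers are listed here;
    # autoscaler-created workers are never added to this file.
    - role: controller
      ssh:
        address: 10.0.0.1     # placeholder address
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: controller
      ssh:
        address: 10.0.0.2     # placeholder address
        user: root
        keyPath: ~/.ssh/id_rsa
  k0s:
    version: 1.21.3+k0s.0     # placeholder version
```

Running k0sctl apply against a file like this should leave autoscaler-created workers alone, since k0sctl ignores nodes it doesn't know about, but as said, an upgrade would only touch the hosts listed here.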

kke avatar Oct 11 '21 07:10 kke

Currently k0sctl does not care if there are nodes in the cluster that do not exist in the config.

@kke Exactly, k0sctl doesn't care about new workers, so it would be enough to somehow add the worker to the existing cluster. When the cluster runs out of capacity, the cluster autoscaler will create a cloud instance for the new worker node with cloud-init.
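As a sketch of the autoscaler side of this (based on my reading of the Hetzner provider README linked above; the environment variable names, the node-pool flag format and the ConfigMap name are assumptions to verify against that README), the deployment gets the worker cloud-init as a base64-encoded value plus a node pool definition:

```yaml
# Fragment of the cluster-autoscaler Deployment for the Hetzner provider; a sketch only.
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0   # example tag
    command:
      - ./cluster-autoscaler
      - --cloud-provider=hetzner
      - --nodes=1:10:CPX31:FSN1:pool1        # min:max:server-type:location:pool-name
    env:
      - name: HCLOUD_TOKEN                   # Hetzner API token
        valueFrom:
          secretKeyRef:
            name: hcloud
            key: token
      - name: HCLOUD_CLOUD_INIT              # base64-encoded cloud-init for new workers
        valueFrom:
          configMapKeyRef:
            name: k0s-worker-cloud-init      # hypothetical ConfigMap, kept up to date by Terraform
            key: userdata
```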

I'm not sure I understand what that k0s template would contain? Adding workers shouldn't affect k0s config?

So I need two things for this cloud-init: a join token and a k0s config which will let my freshly provisioned node join the cluster as a worker. The join token isn't a problem, I believe, but the k0s config is something I have hassle with (depending on the setup). I would like k0sctl to generate the k0s config for a worker the way it does during cluster setup, and Terraform to put this config into the cluster autoscaler ConfigMap. Then, when the cluster autoscaler spins up a fresh node, cloud-init will run something like k0s install -c xxxyyy -ip zzz and the new worker will join the cluster.
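To make the idea concrete, a minimal sketch of the cloud-init user data I have in mind (file paths and the token contents are placeholders; the join itself only needs the token, the exported config would be dropped next to it the same way):

```yaml
#cloud-config
# Sketch of worker user data; paths and contents are placeholders.
write_files:
  - path: /etc/k0s/join-token          # pre-generated worker join token
    permissions: "0600"
    content: |
      <worker join token goes here>
runcmd:
  # install the k0s binary via the official install script
  - curl -sSLf https://get.k0s.sh | sh
  # register and start the worker using the join token
  - k0s install worker --token-file /etc/k0s/join-token
  - k0s start
```

The join token itself can be created ahead of time on a controller with k0s token create --role=worker (optionally with an expiry), and Terraform could inject it into the ConfigMap/user data together with the exported config.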

It will make upgrades difficult though as you will have to build a full cluster config for that.

As of now I don't see a big issue with this. The main nodes which were set up with k0sctl will be maintained with k0sctl. Nodes provisioned by the autoscaler are ephemeral; they can easily be replaced in the same way.

Brightside56 avatar Oct 26 '21 21:10 Brightside56