
HETZNER - Custom Labels on scaled nodes

Open ThatDeveloper opened this issue 3 years ago • 9 comments

Which component are you using?: Hetzner

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.: More and more developers run increasingly complex workloads on Kubernetes, and those workloads rely on node labels (for affinity) to find a place to run. It would be great to be able to attach multiple (n) custom labels to a --nodes= entry so that they get applied on every scale-up event. The node pool could also be selected/preferred when its labels match those requested by the pod.

Describe the solution you'd like.: --nodes=1:10:CPX21:FSN1:pool1:[abc.io/services] --nodes=1:10:CPX51:FSN1:pool1:[abc.io/services,abc.io/regular] Node-pool selection would check whether [label...] is present and choose based on that; a scale-up would then apply these labels to the new node.

Describe any alternative solutions you've considered.: Working with prebuilt images, but the cluster autoscaler cannot differentiate between them.

ThatDeveloper avatar Jan 12 '22 14:01 ThatDeveloper

The documentation is not fantastic, but you can already scale up based on pool1 in your example.

The only downside: it is based on Hetzner node labels rather than Kubernetes labels, so you cannot scale up on custom Kubernetes labels.

affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: hcloud/node-group
                operator: In
                values:
                  - pool1

AzSiAz avatar Jan 12 '22 16:01 AzSiAz

Thank you for this information. But how would I do the following:

PROJECT X tells me that its components only get scheduled on nodes with specific labels. Let's call the labels L1, L2, L3.

L1, L2, and L3 should never be on the same node. Now a pod wants to be scheduled and requires the L2 label. As you mentioned above, these are not "custom Kubernetes labels".

If I don't know which label a node will have, I cannot change anything in the setup process. The Cluster Autoscaler would know, but the server itself would not.
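For illustration, a minimal sketch of that constraint, assuming the nodes carried a hypothetical Kubernetes label `abc.io/tier` (which is exactly what nothing applies to scaled-up nodes today):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: needs-l2
spec:
  nodeSelector:
    # Hypothetical label key/value; the issue is that the autoscaler
    # has no way to put this label on newly provisioned nodes.
    abc.io/tier: L2
  containers:
    - name: app
      image: nginx
```

With such a label in place, the scheduler would only place the pod on L2 nodes, and the autoscaler could scale up the matching pool.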

ThatDeveloper avatar Jan 12 '22 16:01 ThatDeveloper

Unfortunately I am not there yet. The only thing I can do is trigger scale-up/down of different node pools, and sometimes it just doesn't trigger at all.

AzSiAz avatar Jan 12 '22 17:01 AzSiAz

Well, I give up. I will just label my nodes with cloud-init, using the first part of the hostname as the value for the hcloud/node-group key.
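A minimal cloud-init sketch of that workaround (assumptions: a kubeconfig at /etc/kubernetes/kubelet.conf, hostnames like pool1-abc123; adjust both to your setup, and note that whether the kubelet credentials may label the node depends on your RBAC setup, since the NodeRestriction admission plugin limits which labels a node may set on itself):

```yaml
#cloud-config
runcmd:
  # Derive the pool name from the first hostname segment,
  # e.g. "pool1-abc123" -> "pool1", and apply it as a node label.
  - |
    POOL="$(hostname | cut -d- -f1)"
    kubectl --kubeconfig /etc/kubernetes/kubelet.conf \
      label node "$(hostname)" "hcloud/node-group=${POOL}" --overwrite
```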

AzSiAz avatar Jan 12 '22 18:01 AzSiAz

If I'm not mistaken, this is where the labels are added; the Labels field is simply left empty instead of being populated with the values: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go#L205 I don't have a dev environment for this setup at the moment, so I'm not entirely sure.

BlakeB415 avatar Jan 25 '22 00:01 BlakeB415

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 25 '22 01:04 k8s-triage-robot

/remove-lifecycle stale

I'm experiencing this as well and would love a way to attach Kubernetes node labels to provisioned nodes. I'll see if I can borrow some code from other cloud providers, since I'm not very proficient in Go.

WebSpider avatar May 19 '22 08:05 WebSpider

I need to target the load balancer at scaled nodes. Is this possible without custom-label support?

mhmnemati avatar Jul 17 '22 12:07 mhmnemati

/lifecycle stale

k8s-triage-robot avatar Oct 15 '22 13:10 k8s-triage-robot

/lifecycle rotten

k8s-triage-robot avatar Nov 14 '22 13:11 k8s-triage-robot

/remove-lifecycle rotten

WebSpider avatar Nov 14 '22 13:11 WebSpider

Also facing this issue, adding a +1. Bringing this to the attention of @apricote.

nfacha avatar Feb 02 '23 17:02 nfacha

Hi everyone,

It was unclear to me whether this issue is about labels for Hetzner Cloud servers or labels for Kubernetes Nodes. The scheduling constraint appears to relate to Node labels, while the load-balancer target request requires server labels.

Server Labels

Currently, we only set the label hcloud/node-group=foobar, which can be used to target Hetzner Cloud servers with a load balancer. However, this might not be sufficient once multiple clusters run in the same project. To improve this, the cluster-autoscaler cloud provider would need to change the label and provide a config interface for users to specify these labels.

Node Labels

Unfortunately, the cluster-autoscaler cloud provider cannot make changes to Node labels. These labels are added from other cluster components (e.g., hcloud-cloud-controller-manager) and not created by the cluster-autoscaler.

Custom Node labels can be specified when using kubeadm/kubelet by utilizing the kubelet --node-labels flag. You can achieve this by modifying the cloud-init script passed to the server. However, it appears that you cannot specify different scripts for each node group, limiting the usefulness of this option.
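A sketch of that cloud-init approach, assuming a systemd/kubeadm setup where kubelet reads extra flags from KUBELET_EXTRA_ARGS in /etc/default/kubelet (paths and wiring vary by distribution, and abc.io/services is a placeholder label):

```yaml
#cloud-config
write_files:
  # Extra kubelet flags picked up by the kubelet systemd drop-in on
  # most kubeadm-based setups; applies the label when the node joins.
  - path: /etc/default/kubelet
    content: |
      KUBELET_EXTRA_ARGS=--node-labels=abc.io/services=true
runcmd:
  - kubeadm join ...   # your usual join command
```

Since the cloud-init script is shared, this applies the same labels to every node group, which is the limitation mentioned above.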

apricote avatar Feb 03 '23 09:02 apricote

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 04 '23 10:05 k8s-triage-robot

/remove-lifecycle stale

nfacha avatar May 04 '23 10:05 nfacha

As of https://github.com/kubernetes/autoscaler/pull/6184, it will be possible to tell the Cluster Autoscaler which additional Kubernetes Node labels are added to the Nodes in the cloud-init config.
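A hedged sketch of what such a per-node-group configuration can look like (the exact schema is described in the Hetzner provider README; the field names below follow my reading of the PR and may differ, and the value is passed base64-encoded in the HCLOUD_CLUSTER_CONFIG environment variable):

```json
{
  "nodeConfigs": {
    "pool1": {
      "labels": {
        "abc.io/services": "true"
      },
      "taints": []
    }
  }
}
```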

/area provider/hetzner

apricote avatar Oct 20 '23 06:10 apricote

/lifecycle stale

k8s-triage-robot avatar Jan 30 '24 18:01 k8s-triage-robot

No one has clearly requested server labels in the last year, and there is now an option to add Node labels. I think we can consider this closed until someone comes forward with a request for server labels.

/close

apricote avatar Feb 05 '24 06:02 apricote

@apricote: Closing this issue.

In response to this:

No one has clearly requested Server Labels for the last year, and there is now an option to add Node Labels. I think we can consider this closed until someone comes with a request for the Server Labels.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 05 '24 06:02 k8s-ci-robot