HETZNER - Custom Labels on scaled nodes
Which component are you using?: Hetzner
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
More and more developers are running increasingly complex workloads on Kubernetes, and for those workloads to find a place to run, the nodes need labels on them (affinity).
It would be great to be able to add multiple (n) custom labels to the --nodes= definition that get applied on a scale-up event. The node pool could also be selected / preferred if its labels match those requested by the pod.
Describe the solution you'd like.:
--nodes=1:10:CPX21:FSN1:pool1:[abc.io/services]
--nodes=1:10:CPX51:FSN1:pool1:[abc.io/services,abc.io/regular]
Node-pool selection would check whether the requested [label...] is present and choose based on that; a scale-up would then apply these labels to the new node.
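To make the request concrete, here is a minimal, hypothetical pod snippet that would consume such a label; it assumes the proposed abc.io/services entry from the --nodes definition above would end up as a regular Kubernetes node label (the pod name, label value, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: services-worker          # hypothetical pod name
spec:
  nodeSelector:
    abc.io/services: "true"      # assumes the scaled node carries this label
  containers:
    - name: app
      image: nginx:stable        # placeholder image

With such a selector in place, the autoscaler could prefer the pool whose configured labels satisfy it, which is the selection behaviour described above.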
Describe any alternative solutions you've considered.: Working with prebuilt images, but the cluster autoscaler cannot differentiate between them.
The documentation is not fantastic, but you can already scale up based on pool1 in your example.
The only downside is that it's not based on Kubernetes labels but on Hetzner node labels, so you can't scale up on custom Kubernetes labels:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: hcloud/node-group
              operator: In
              values:
                - pool1
Thank you for this information. But how would I do the following:
PROJECT X tells me that its components only get scheduled on nodes with specific labels. Let's call the labels L1, L2, L3.
L1, L2, and L3 should never be on the same node. Now a pod wants to get scheduled and requires the L2 label. As you mentioned above, these are not "custom Kubernetes labels".
If I do not know which label a node has, I cannot change anything in the setup process. The Cluster Autoscaler would know, but not the server itself.
Unfortunately I am not there yet; the only thing I can do is trigger scale-up/down of different node pools, and sometimes it simply doesn't work.
Well, I give up; I will just label my nodes with cloud-init, using the first part of the hostname as the value and hcloud/node-group as the key.
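For reference, a minimal cloud-config sketch of that workaround, assuming a kubeadm-based node where the kubelet reads KUBELET_EXTRA_ARGS from /etc/default/kubelet and the pool name is the first segment of the hostname (both are assumptions; adjust for your distro and naming scheme):

#cloud-config
runcmd:
  # Take the first hostname segment as the pool name (e.g. "pool1-worker-abc123" -> "pool1")
  # and let the kubelet register the Node with it under the hcloud/node-group key.
  # This has to run before kubeadm join / kubelet start so the label is set at registration time.
  - 'echo "KUBELET_EXTRA_ARGS=--node-labels=hcloud/node-group=$(hostname | cut -d- -f1)" >> /etc/default/kubelet'

The same approach works for other label keys too, as long as they stay outside the kubernetes.io / k8s.io prefixes that the kubelet is not allowed to set on itself.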
If I'm not mistaken, this is where the labels are added; the Labels field was just left empty instead of passing in the values: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go#L205
I don't have a dev environment for this setup at the moment, so I'm not exactly sure.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I'm experiencing this as well and would love a way to attach Kubernetes node labels to provisioned nodes. I'll see if I can borrow some code from other cloud providers, since I'm not very proficient in Go.
I need to target the load balancer at the scaled nodes; is this possible without custom label support?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Also facing this issue, adding a +1. Bringing this to the attention of @apricote.
Hi everyone,
It was unclear to me whether this is about the labels for Hetzner Cloud Servers or the labels for Kubernetes Nodes. The scheduling constraint appears to be related to Node labels, while the load balancer target request requires Server labels.
Server Labels
Currently, we only specify the label hcloud/node-group=foobar, which can be used to target Hetzner Cloud servers with a load balancer. However, this might not be sufficient once multiple clusters are run in the same project. To improve this, the cluster-autoscaler cloud provider needs to change the label and provide a config interface for users to specify these labels.
Node Labels
Unfortunately, the cluster-autoscaler cloud provider cannot make changes to Node labels. These labels are added by other cluster components (e.g., hcloud-cloud-controller-manager) and are not created by the cluster-autoscaler.
Custom Node labels can be specified when using kubeadm/kubelet by utilizing the kubelet --node-labels flag. You can achieve this by modifying the cloud-init script passed to the server. However, it appears that you cannot specify different scripts for each node group, limiting the usefulness of this option.
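As an illustration, here is a hedged sketch of how such labels could be passed on a kubeadm-joined node, using a JoinConfiguration that the cloud-init script would render before calling kubeadm join; the endpoint, token, and label values are placeholders:

apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.0.1:6443"     # placeholder control-plane endpoint
    token: "abcdef.0123456789abcdef"       # placeholder bootstrap token
    unsafeSkipCAVerification: true         # for brevity only; pin the CA hash in real setups
nodeRegistration:
  kubeletExtraArgs:
    node-labels: "abc.io/services=true"    # custom Kubernetes Node label(s) for this node group

The limitation mentioned above still applies: with one cloud-init script shared by every node group, there is no per-group place to vary the node-labels value.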
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
As of https://github.com/kubernetes/autoscaler/pull/6184, it will be possible to specify to the Cluster Autoscaler which additional Kubernetes Node labels are added to the Nodes by the cloud-init config.
/area provider/hetzner
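If I read the provider documentation correctly, those additional labels are declared per node group via the HCLOUD_CLUSTER_CONFIG environment variable as base64-encoded JSON, roughly like the sketch below; treat the exact key names as assumptions and verify them against the current Hetzner provider README:

{
  "nodeConfigs": {
    "pool1": {
      "cloudInit": "#cloud-config\n...",
      "labels": {
        "abc.io/services": "true"
      },
      "taints": []
    }
  }
}

These labels describe what the cloud-init for that pool will apply to its Nodes, so the autoscaler's scale-up simulation can match pods that select on them.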
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
No one has clearly requested Server Labels in the last year, and there is now an option to add Node Labels. I think we can consider this closed until someone comes forward with a request for Server Labels.
/close
@apricote: Closing this issue.
In response to this:
No one has clearly requested Server Labels in the last year, and there is now an option to add Node Labels. I think we can consider this closed until someone comes forward with a request for Server Labels.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.