
Allow specifying node labels in configuration.

vs49688 opened this issue 4 years ago · 9 comments

Specifically for cases where using installFlags: ["--labels=whatever"] isn't acceptable, e.g.

Kubelets can't set the node-role.kubernetes.io/master="" label on themselves for security reasons; it has to be done via an API client (e.g. kubectl). See https://github.com/kubernetes/kubernetes/issues/84912#issuecomment-551362981

This could be added as the following:

spec:
  hosts:
  - role: controller+worker
    labels:
    - "node-role.kubernetes.io/master="

or

spec:
  hosts:
  - role: controller+worker
    labels:
    - key: node-role.kubernetes.io/master
      value: ""

vs49688 avatar Jul 28 '21 08:07 vs49688

I think it needs to know which labels were set by k0sctl, so it can remove the ones that no longer exist in k0sctl.yaml.

Maybe some k0sctl.k0sproject.io/node-labels annotation of the label keys 🤔

kke avatar Aug 26 '21 11:08 kke

So something like this?

metadata:
  annotations:
    k0sctl.k0sproject.io/node-labels: "node-role.kubernetes.io/master,label1,label2"

That's probably the nicest way to do it, at least of the options I can think of.

vs49688 avatar Aug 28 '21 13:08 vs49688

Maybe a ConfigMap

kke avatar Sep 15 '21 07:09 kke

This would be a super useful feature to have. For now I'm working around this by adding

    installFlags:
    - --labels="machine-type=train"

but this only applies to freshly provisioned hosts. It would be great if this was implemented in a way that updated labels on existing clusters, using a mechanism like the ones suggested above.

sjdrc avatar Oct 16 '22 23:10 sjdrc

+1

redzioch avatar Jan 14 '23 18:01 redzioch

+1

pinghe avatar Feb 11 '23 12:02 pinghe

It would be a bit simpler to just have something like:

spec:
  hosts:
    - role: controller+worker
      labels:
        apply:
          - node-role.kubernetes.io/control-plane=
        delete:
          - node-role.kubernetes.io/master
      # or:
      labels:
        - apply: node-role.kubernetes.io/control-plane=
        - delete: node-role.kubernetes.io/master     

Then it wouldn't need to keep track of anything. Possibly less room for error too. Same could be done for taints while at it.

kke avatar Feb 13 '23 08:02 kke

This reads slightly nicer:

spec:
  hosts:
    - role: controller+worker
      labels:
        apply:
          - node-role.kubernetes.io/control-plane=
        delete:
          - node-role.kubernetes.io/master

How do you plan on merging this with installFlags? I am just curious what the path going forward is, since it's going to be messy to support labels in multiple places.

till avatar Oct 31 '23 15:10 till

As installFlags is already conveniently named "install flags", I think anything you have there will only be used to modify k0s install flags like before. The labels section would be applied once the node is up and rechecked on every apply.

It would be possible to allow something like:

spec:
  hosts:
    - role: controller+worker
      labels:
        install:
          - node.kubernetes.io/out-of-service=NoExecute
        apply:
          - node-role.kubernetes.io/control-plane=
        delete:
          - node-role.kubernetes.io/master

The install ones would then get merged into the installFlags behind the scenes.

kke avatar Nov 01 '23 07:11 kke