k0sctl
Modifying the installFlags doesn't do anything after initial install
Hi,
just did a clean installation of an HA cluster (3 controllers, 3 nodes) using k0sctl v0.12.3. Everything went smoothly until I noticed that I had to add "--disable-components konnectivity-server" to my installFlags.
Changed from

installFlags:
  - --disable-components metrics-server

to

installFlags:
  - --disable-components metrics-server
  - --disable-components konnectivity-server
Running k0sctl apply --config k0sctl.yaml after modifying the k0sctl.yaml did nothing. Is this expected?
The cluster is still running with only a single argument instead of two:
$ ps aux | grep disable-components
root 3922 2.0 3.9 788256 81176 ? Ssl 12:37 2:10 /usr/local/bin/k0s controller --config=/etc/k0s/k0s.yaml --disable-components=metrics-server
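For completeness, the flags that k0s install baked in at install time can also be checked straight from the generated systemd unit (service name and path assumed to match the defaults shown later in this thread):

$ systemctl cat k0scontroller
# or read the generated unit file directly
$ cat /etc/systemd/system/k0scontroller.service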
And these are in the correct section of the YAML?
spec:
  hosts:
    - role: controller
      installFlags:
        - --disable-components xyz
    - role: worker
      installFlags:
        - --debug
Yep, they even work after doing a reset and apply again. Here is the full file:
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  k0s:
    version: 1.23.1+k0s.1
  hosts:
    - ssh:
        address: 172.20.4.56
        port: 22
        user: root
      role: controller
      installFlags: &installFlagsController
        # disable metrics-server since we use prometheus instead
        - --disable-components metrics-server
        # disable konnectivity-server since we don't have a LB in front of our controller
        # see: https://github.com/k0sproject/k0s/issues/1352
        - --disable-components konnectivity-server
    - ssh:
        address: 172.20.4.57
        port: 22
        user: root
      role: controller
      installFlags: *installFlagsController
    - ssh:
        address: 172.20.4.58
        port: 22
        user: root
      role: controller
      installFlags: *installFlagsController
    - ssh:
        address: 172.20.4.59
        port: 22
        user: root
      role: worker
    - ssh:
        address: 172.20.4.60
        port: 22
        user: root
      role: worker
    - ssh:
        address: 172.20.4.61
        port: 22
        user: root
      role: worker
ps aux after reset and apply:
$ ps aux | grep disable
root 2240 2.0 3.8 788232 78968 ? Ssl 14:35 2:26 /usr/local/bin/k0s controller --config=/etc/k0s/k0s.yaml --disable-components=metrics-server,konnectivity-server
Hmm, OK, that's weird 🤔
I'll investigate this tomorrow.
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: cluster
spec:
  hosts:
    - role: single
      installFlags:
        - --debug
        - --disable-components metrics-server
        - --disable-components konnectivity-server
      ssh:
        address: 127.0.0.1
        port: 9022
INFO[0006] ==> Running phase: Initialize the k0s cluster
INFO[0006] [ssh] 127.0.0.1:9022: installing k0s controller
DEBU[0006] [ssh] 127.0.0.1:9022: executing `/usr/local/bin/k0s install controller --debug --disable-components metrics-server --disable-components konnectivity-server --single --config "/etc/k0s/k0s.yaml"`
# cat /etc/systemd/system/k0scontroller.service
[Unit]
Description=k0s - Zero Friction Kubernetes
Documentation=https://docs.k0sproject.io
ConditionFileIsExecutable=/usr/local/bin/k0s
After=network-online.target
Wants=network-online.target
[Service]
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/local/bin/k0s controller --config=/etc/k0s/k0s.yaml --debug=true --disable-components="metrics-server,konnectivity-server" --single=true
RestartSec=120
Delegate=yes
KillMode=process
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
LimitNOFILE=999999
Restart=always
[Install]
WantedBy=multi-user.target
Seems to work fine for me 🤔
Hmm... I've added --debug to all controllers and --extra-kubelet-args to all workers, and nothing happened.
Controller:
$ cat /etc/systemd/system/k0scontroller.service
[Unit]
Description=k0s - Zero Friction Kubernetes
Documentation=https://docs.k0sproject.io
ConditionFileIsExecutable=/usr/local/bin/k0s
After=network-online.target
Wants=network-online.target
[Service]
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/local/bin/k0s controller --config=/etc/k0s/k0s.yaml --disable-components="metrics-server,konnectivity-server"
RestartSec=120
Delegate=yes
KillMode=process
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
LimitNOFILE=999999
Restart=always
[Install]
WantedBy=multi-user.target
Here is the full log with trace enabled:
Ah yes, are we talking about changing the install flags for an existing cluster? That doesn't happen.
Yeah, this was my initial question. I was expecting that k0sctl could change that :(
K0sctl only runs k0s install when k0s isn't already installed, so I think this would require some changes to k0s itself. I made an issue there.
https://github.com/k0sproject/k0s/issues/1458 is closed! 🥳
Presumably k0sctl needs to use k0s's new --force flag in order to close this issue?
Yes, but that's not all: k0sctl also needs to determine when a forced reinstall is actually needed and whether the k0s version being installed supports the flag.
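In the meantime, a possible manual workaround on each controller, sketched under the assumption that the installed k0s version already ships install --force and uses the default paths shown earlier in this thread:

# assumed workaround, not a k0sctl feature: re-register the k0scontroller
# service with the new flags using k0s install --force
sudo k0s stop
sudo k0s install controller --force \
  --disable-components metrics-server \
  --disable-components konnectivity-server \
  --config /etc/k0s/k0s.yaml
sudo systemctl daemon-reload
sudo k0s start

After that, ps aux | grep disable-components should show both components in the --disable-components list, the same result as the reset-and-apply output above.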