Kimmo Lehto
My guess is this is a networking issue between the VMs.

> The worker instance have no admin.conf file

It's not supposed to; it only exists on controllers.
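For reference, on a controller it should be under the k0s data dir (assuming the default `/var/lib/k0s`; the path moves if a custom data dir was set), and k0sctl can also fetch a kubeconfig for you:

```shell
# On a controller host (default data dir assumed):
sudo cat /var/lib/k0s/pki/admin.conf

# Or from your own machine, let k0sctl fetch it:
k0sctl kubeconfig > kubeconfig
```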
Could still be cleaner.
Outdated and I don't remember what it was for
And these are in the correct section of the YAML?

```yaml
spec:
  hosts:
    - role: controller
      installFlags:
        - --disable-components xyz
    - role: worker
      installFlags:
        - --debug
```
Hmm, OK, that's weird 🤔 I'll investigate this tomorrow.
```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: cluster
spec:
  hosts:
    - role: single
      installFlags:
        - --debug
        - --disable-components metrics-server
        - --disable-components konnectivity-server
      ssh:
        address: 127.0.0.1
        port: 9022
```

```
INFO[0006] ==> Running phase:...
```
Ah yes, are we talking about changing the install flags for an existing cluster? That doesn't happen.
K0sctl only runs `k0s install` when k0s isn't already installed, so I think this would require some changes to k0s. I made an issue there.
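Roughly, the `installFlags` from the config get appended to that first `k0s install <role>` run, something like this (a sketch, not the exact command line k0sctl builds):

```shell
# Sketch of what k0sctl does on a host where k0s isn't installed yet:
k0s install controller --debug --disable-components metrics-server
k0s start
```

On later runs the service already exists, so `k0s install` isn't run again, which is why changed flags don't take effect on an existing cluster.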
> [k0sproject/k0s#1458](https://github.com/k0sproject/k0s/issues/1458) is closed! 🥳
>
> Presumably `k0sctl` needs to use `k0s`'s new `--force` flag in order to close this issue?

Yes, but that's not all, it needs to...
I don't think this is caused by k0sctl. @jnummelin @s0j @soider, any ideas?