Highly available Tailscale-networked cluster fails to start properly
I want to connect to my cluster over Tailscale and use Tailscale between the nodes. I don't have a firewall enabled on the machines. With various configs I'm able to bootstrap the cluster, but usually the address is picked up from eth0 (the public IP), or a controller other than the first fails to start properly (konnectivity doesn't come ready).
I think this is related: https://github.com/k0sproject/k0sctl/issues/760
here's one of the configs I've tried:
```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
  user: admin
spec:
  hosts:
  - ssh:
      address: hetzner-tuusula-1
      user: root
    role: controller+worker
    privateInterface: tailscale0
  - ssh:
      address: hetzner-tuusula-2
      user: root
    role: controller+worker
    privateInterface: tailscale0
  - ssh:
      address: hetzner-tuusula-3
      user: root
    role: controller+worker
    privateInterface: tailscale0
  k0s:
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: ClusterConfig
      metadata:
        name: k0s
        namespace: kube-system
      spec:
        storage:
          type: kine
          kine:
            dataSource: postgres://postgres:[email protected]:5432/k0s
```
hmm: https://github.com/k0sproject/k0sctl/issues/105#issuecomment-796752590
So this setup needs a load balancer or a round-robin DNS name (https://github.com/k0sproject/k0sctl/issues/105#issuecomment-855266480)

After creating a round-robin DNS name and setting it as `externalAddress` and in `sans`, it's slightly better.
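For reference, the `externalAddress`/`sans` change described above would look roughly like this in the k0s `ClusterConfig`. This is a sketch: `k0s-api.example.ts.net` is a hypothetical round-robin name standing in for whatever DNS name resolves to all three controllers.

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  api:
    # hypothetical round-robin name resolving to all controller Tailscale IPs
    externalAddress: k0s-api.example.ts.net
    sans:
      - k0s-api.example.ts.net
      - hetzner-tuusula-1
      - hetzner-tuusula-2
      - hetzner-tuusula-3
```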
So with hetzner loadbalancer this works, but that's not what I initially wanted to do with fully private tailscale based cluster
I don't see how a Hetzner load balancer is conceptually much different.
Can you maybe attach logs? E.g. when you said it was slightly better, what else was happening in the logs, or what was missing?
I think with Tailscale alone you may need split DNS: run your own DNS server that resolves one round-robin name to multiple nodes, since each node gets its own distinct name in MagicDNS but you need a single name for the API.
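One way to sketch that split-DNS idea: run a small CoreDNS instance inside the tailnet serving a private zone whose one name returns the Tailscale IPs of all controllers, then point Tailscale's split DNS for that domain at it. Everything here is an assumption for illustration: the zone `ts.internal`, the name `k0s-api`, and the `100.64.x.x` addresses are placeholders, not values from this thread.

```
# Corefile sketch (hypothetical zone and placeholder Tailscale IPs)
ts.internal {
    hosts {
        100.64.0.1 k0s-api.ts.internal
        100.64.0.2 k0s-api.ts.internal
        100.64.0.3 k0s-api.ts.internal
        fallthrough
    }
}
```

You would then restrict the `ts.internal` domain to this nameserver in the Tailscale admin console (DNS → split DNS), so only tailnet clients resolve it.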
I think you could use CPLB as the LB in this case
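For context, k0s's control plane load balancing (CPLB) is configured under `spec.network.controlPlaneLoadBalancing`. A minimal sketch based on the k0s docs, with a placeholder VIP and password:

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    controlPlaneLoadBalancing:
      enabled: true
      type: Keepalived
      keepalived:
        vrrpInstances:
        - virtualIPs:
          - 192.168.10.100/24   # placeholder VIP; must live on a shared network segment
          authPass: "change-me"
```

One caveat: Keepalived's VRRP expects the controllers to share an L2 segment for the virtual IP, which a tailnet by itself doesn't provide, so this fits best when the nodes also share a LAN.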
I followed this a while ago: https://github.com/zombiezen/tailscale-lb