terraform-provider-talos

talos_cluster_health fails, while talosctl health is fine

Open stelb opened this issue 11 months ago • 1 comment

Hi,

  • I have 2 interfaces attached: an external and an internal network.
  • I provide the internal control plane IPs.
  • The endpoint is one of the external control plane IPs (roughly as sketched below).
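
Roughly, the health check is wired up like this (a minimal sketch; resource names are shortened, attribute names are the ones from the provider docs, and the IPs are the ones that appear in the errors below):

    data "talos_cluster_health" "this" {
      client_configuration = talos_machine_secrets.this.client_configuration

      # internal addresses of the control plane and worker nodes
      control_plane_nodes = ["10.1.0.2", "10.1.0.6", "10.1.0.7"]
      worker_nodes        = ["10.1.0.3"]

      # external address of one control plane node, used as the endpoint
      endpoints = ["xx.13.164.153"]
    }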

First problem:

│ waiting for etcd members to be control plane nodes: etcd member ips ["10.1.0.6" "XX.75.176.68" "10.1.0.2"] are not subset of control plane node ips ["10.1.0.2" "10.1.0.6" "10.1.0.7"]

To get past this I set advertisedSubnets to the internal CIDR.
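
That change is applied as a machine config patch; a minimal sketch, assuming the internal range is 10.1.0.0/24 and a talos_machine_configuration_apply resource per control plane node (resource names and the per-node wiring are placeholders):

    resource "talos_machine_configuration_apply" "controlplane" {
      client_configuration        = talos_machine_secrets.this.client_configuration
      machine_configuration_input = data.talos_machine_configuration.controlplane.machine_configuration
      node                        = "10.1.0.2" # repeated for each internal control plane IP

      config_patches = [
        yamlencode({
          cluster = {
            etcd = {
              # advertise etcd only on the internal subnet (10.1.0.0/24 is assumed here)
              advertisedSubnets = ["10.1.0.0/24"]
            }
          }
        })
      ]
    }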

Now etcd is OK, but there is an unexpected k8s node (I reduced the cluster to fewer nodes here):

│ waiting for all k8s nodes to report: can't find expected node with IPs ["10.1.0.3"]
│ waiting for all k8s nodes to report: unexpected nodes with IPs ["XX.75.176.68"]
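
The address it complains about appears to be the one the node registers with Kubernetes, which can be checked with, for example:

kubectl get nodes -o wide

which lists each node's INTERNAL-IP and EXTERNAL-IP.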

But when I check this with talosctl:

talosctl -n 10.1.0.3 -e xx.13.164.153 health

discovered nodes: ["10.1.0.3" "xx.75.176.68"]
waiting for etcd to be healthy: OK
waiting for etcd members to be consistent across nodes: OK
waiting for etcd members to be control plane nodes: OK
waiting for apid to be ready: OK
waiting for all nodes memory sizes: OK
waiting for all nodes disk sizes: OK
waiting for kubelet to be healthy: OK
waiting for all nodes to finish boot sequence: OK
waiting for all k8s nodes to report: OK
waiting for all k8s nodes to report ready: OK
waiting for all control plane static pods to be running: OK
waiting for all control plane components to be ready: OK
waiting for kube-proxy to report ready: SKIP
waiting for coredns to report ready: OK
waiting for all k8s nodes to report schedulable: OK

Or with the public control plane IP:

talosctl -n xx.13.164.153 -e xx.13.164.153 health

discovered nodes: ["10.1.0.3" "xx.75.176.68"]
waiting for etcd to be healthy: OK
waiting for etcd members to be consistent across nodes: OK
waiting for etcd members to be control plane nodes: OK
waiting for apid to be ready: OK
waiting for all nodes memory sizes: OK
waiting for all nodes disk sizes: OK
waiting for kubelet to be healthy: OK
waiting for all nodes to finish boot sequence: OK
waiting for all k8s nodes to report: OK
waiting for all k8s nodes to report ready: OK
waiting for all control plane static pods to be running: OK
waiting for all control plane components to be ready: OK
waiting for kube-proxy to report ready: SKIP
waiting for coredns to report ready: OK
waiting for all k8s nodes to report schedulable: OK
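
As far as I can tell, talosctl health invoked like this discovers the members itself, while the provider checks against the exact node lists it is given; the closer CLI equivalent would be something like (flags as listed by talosctl health --help):

talosctl -n 10.1.0.3 -e xx.13.164.153 health --control-plane-nodes 10.1.0.2,10.1.0.6,10.1.0.7 --worker-nodes 10.1.0.3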

So what is the problem?

stelb avatar Mar 15 '24 20:03 stelb

I have the same issue when using a VIP. It works with talosctl health ...

machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: true
        vip:
          ip: 10.0.2.160

JonasKop avatar Sep 16 '24 19:09 JonasKop

Almost the same issue here. kubelet.nodeIP.validSubnets is set to the internal IPs and advertisedSubnets is set to the internal IPs as well, but still

the terraform data source does not find the cluster healthy, while the command talosctl health does.

The error is

 unexpected nodes with IP 

followed by the list of the private IPs of the worker nodes.
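
Roughly the relevant part of what is set, sketched as the config_patches of a talos_machine_configuration_apply resource (10.0.0.0/8 is a placeholder for the internal range here):

      config_patches = [
        yamlencode({
          machine = {
            kubelet = {
              nodeIP = {
                # only register node addresses from the internal range
                validSubnets = ["10.0.0.0/8"]
              }
            }
          }
          cluster = {
            etcd = {
              # advertise etcd only on the internal range
              advertisedSubnets = ["10.0.0.0/8"]
            }
          }
        })
      ]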

spastorclovr avatar Oct 08 '24 10:10 spastorclovr