sealos add BUG
Sealos Version
v4.3.7
How to reproduce the bug?
CentOS 7.9.
There is already a cluster with one master. Executing sealos add --nodes 10.34.30.105 fails.
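A minimal reproduction sketch, assuming the default cluster name and that SSH access from the master to the new node is already set up (the node IP matches the report above):

# on the existing master
sealos add --nodes 10.34.30.105
# the join phase on the new node then hangs while contacting the API server VIP and eventually fails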
Error log:
error Applied to cluster error: failed to join node 10.34.30.105:22 run command kubeadm join --config=/root/.sealos/default/etc/kubeadm-join-node.yaml -v 6 on 10.34.30.105:22, output: I0408 22:39:02.168392 19590 join.go:413] [preflight] found NodeName empty; using OS hostname as NodeName
I0408 22:39:02.168534 19590 joinconfiguration.go:76] loading configuration from "/root/.sealos/default/etc/kubeadm-join-node.yaml"
[preflight] Running pre-flight checks
I0408 22:39:02.171629 19590 preflight.go:92] [preflight] Running general checks
I0408 22:39:02.171732 19590 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf
I0408 22:39:02.171771 19590 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0408 22:39:02.171793 19590 checks.go:107] validating the container runtime
I0408 22:39:02.383862 19590 checks.go:373] validating the presence of executable crictl
I0408 22:39:02.383951 19590 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0408 22:39:02.384024 19590 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0408 22:39:02.384080 19590 checks.go:654] validating whether swap is enabled or not
I0408 22:39:02.384156 19590 checks.go:373] validating the presence of executable conntrack
I0408 22:39:02.384507 19590 checks.go:373] validating the presence of executable ip
I0408 22:39:02.384535 19590 checks.go:373] validating the presence of executable iptables
I0408 22:39:02.384562 19590 checks.go:373] validating the presence of executable mount
I0408 22:39:02.384591 19590 checks.go:373] validating the presence of executable nsenter
I0408 22:39:02.384618 19590 checks.go:373] validating the presence of executable ebtables
I0408 22:39:02.384642 19590 checks.go:373] validating the presence of executable ethtool
I0408 22:39:02.384666 19590 checks.go:373] validating the presence of executable socat
I0408 22:39:02.384695 19590 checks.go:373] validating the presence of executable tc
I0408 22:39:02.384719 19590 checks.go:373] validating the presence of executable touch
I0408 22:39:02.384753 19590 checks.go:521] running all checks
I0408 22:39:02.395688 19590 checks.go:404] checking whether the given node name is valid and reachable using net.LookupHost
I0408 22:39:02.395973 19590 checks.go:620] validating kubelet version
I0408 22:39:02.484583 19590 checks.go:133] validating if the "kubelet" service is enabled and active
I0408 22:39:02.498249 19590 checks.go:206] validating availability of port 10250
I0408 22:39:02.498536 19590 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt
I0408 22:39:02.498598 19590 checks.go:433] validating if the connectivity type is via proxy or direct
I0408 22:39:02.498698 19590 join.go:530] [preflight] Discovering cluster-info
I0408 22:39:02.498804 19590 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "10.103.97.2:6443"
I0408 22:39:12.500422 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10000 milliseconds
I0408 22:39:12.500558 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:39:28.409789 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10001 milliseconds
I0408 22:39:28.409882 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:39:44.824894 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10002 milliseconds
I0408 22:39:44.824977 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:40:00.828503 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10003 milliseconds
I0408 22:40:00.828622 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:40:16.493777 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10004 milliseconds
I0408 22:40:16.493873 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:40:32.131737 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10000 milliseconds
I0408 22:40:32.131826 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:40:48.163425 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10000 milliseconds
I0408 22:40:48.163514 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:41:03.263541 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10000 milliseconds
I0408 22:41:03.263826 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:41:18.500639 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10001 milliseconds
I0408 22:41:18.500746 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:41:33.648298 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10001 milliseconds
I0408 22:41:33.648374 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:41:49.102285 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10001 milliseconds
I0408 22:41:49.102358 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:42:04.880095 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10002 milliseconds
I0408 22:42:04.880167 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:42:21.107848 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10003 milliseconds
I0408 22:42:21.107959 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:42:36.437655 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10004 milliseconds
I0408 22:42:36.437746 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:42:52.017955 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10004 milliseconds
I0408 22:42:52.018092 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:43:07.496988 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10000 milliseconds
I0408 22:43:07.497144 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:43:23.202044 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10001 milliseconds
I0408 22:43:23.202218 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:43:38.628838 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10001 milliseconds
I0408 22:43:38.628955 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:43:54.071631 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10000 milliseconds
I0408 22:43:54.071745 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0408 22:44:10.095556 19590 round_trippers.go:553] GET https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 10002 milliseconds
I0408 22:44:10.095674 19590 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
couldn't validate the identity of the API Server
k8s.io/kubernetes/cmd/kubeadm/app/discovery.For
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/discovery/discovery.go:45
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*joinData).TLSBootstrapCfg
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:531
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*joinData).InitCfg
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:541
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runPreflight
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/preflight.go:97
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:178
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:255
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1581
error execution phase preflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:178
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:255
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1581
, error: Process exited with status 1,
What is the expected behavior?
No response
What do you see instead?
No response
Operating environment
- Sealos version: v4.3.7
- Docker version:
- Kubernetes version: v1.23.15
- Operating system: CentOS 7.9
- Runtime environment:
- Cluster size:
- Additional information:
Additional information
No response
This error appears to be caused by the network: the node times out when accessing the api-server. Please check that the node can reach the master VIP (https://10.103.97.2:6443) directly.
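A quick way to verify this from the node being added; a hedged sketch using the VIP and port from the log above (the ipvsadm check is only meaningful if lvscare/ipvs is supposed to be serving the VIP):

# does the VIP answer at all? -k because the cluster CA is not trusted from a plain shell
curl -k --connect-timeout 5 https://10.103.97.2:6443/healthz
# are there ipvs rules forwarding the VIP to a real master?
ipvsadm -Ln | grep -A3 10.103.97.2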
Error: failed to join node 172.18.8.252:22 run command kubeadm join --config=/root/.sealos/default/etc/kubeadm-join-node.yaml -v 0 on 172.18.8.252:22, output: [preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Hostname]: hostname "centos2" could not be reached
[WARNING Hostname]: hostname "centos2": lookup centos2 on 221.6.4.66:53: read udp 172.18.8.252:18650->221.6.4.66:53: i/o timeout
error execution phase preflight: couldn't validate the identity of the API Server: Get "https://10.103.97.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": x509: certificate is valid for 172.18.8.247, 172.18.8.248, 172.18.8.249, 127.0.0.1, 10.96.0.1, not 10.103.97.2
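The x509 mismatch above can be confirmed independently by checking which SANs the api-server certificate actually carries; a hedged sketch, run against one of the real master IPs listed in the error:

# dump the Subject Alternative Names of the serving certificate on a real master
openssl s_client -connect 172.18.8.247:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 "Subject Alternative Name"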
A multi-master HA cluster uses 10.103.97.2 as the VIP by default and relies on lvscare for load balancing; each node joins lvscare by starting a static pod.
However, when a node is added, that static pod is not started, so the node cannot reach 10.103.97.2 through the ipvs rules lvscare would normally install, and the connection times out with the error above. Hope this bug gets some attention! Thanks
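As a temporary workaround until the static pod is started on added nodes, the ipvs rules can be created by hand; this is only a sketch, assuming the ipvsadm package is installed on the new node, using one of the master IPs from the report above (substitute your own real master address), and assuming masquerade mode, which may differ from what lvscare actually configures:

# create the ipvs virtual service for the VIP and point it at a real api-server
ipvsadm -A -t 10.103.97.2:6443 -s rr
ipvsadm -a -t 10.103.97.2:6443 -r 172.18.8.247:6443 -m
# then re-run the join; the proper fix is for sealos add to start the lvscare static pod itself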
This issue has been automatically closed because we haven't heard back for more than 60 days, please reopen this issue if necessary.