Cluster installation fails with KubeKey 3.1.9
What version of KubeKey has the issue?
3.1.9
What is your os environment?
ubuntu22.04
KubeKey config file
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: dual-node-cluster
spec:
  hosts:
  - {name: master, address: 10.12.115.34, internalAddress: 10.12.115.34, user: root, sshKey: "/root/.ssh/id_rsa"} # use the SSH key path
  - {name: worker, address: 10.12.114.139, internalAddress: 10.12.114.139, user: root, sshKey: "/root/.ssh/id_rsa"}
  roleGroups:
    etcd: [master]
    control-plane: [master]
    worker: [worker]
  kubernetes:
    version: v1.23.0
    clusterName: mada
    autoRenewCerts: true
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
A clear and concise description of what happened.
15:13:40 CST [JoinNodesModule] Join worker node
15:13:40 CST stdout: [worker]
[discovery.bootstrapToken.token: Invalid value: "": the bootstrap token is invalid, discovery.tlsBootstrapToken: Invalid value: "": the bootstrap token is invalid]
To see the stack trace of this error execute with --v=5 or higher
15:13:40 CST stdout: [worker]
[preflight] Running pre-flight checks
W0521 15:13:40.696722 36053 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0521 15:13:40.698379 36053 cleanupnode.go:109] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file.
15:13:40 CST message: [worker] join node failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm join --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
[discovery.bootstrapToken.token: Invalid value: "": the bootstrap token is invalid, discovery.tlsBootstrapToken: Invalid value: "": the bootstrap token is invalid]
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 3
15:13:40 CST retry: [worker]
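As the reset output above notes, kubeadm leaves the CNI configuration, iptables/IPVS state, and kubeconfig cleanup to the operator. A minimal sketch of those manual steps, wrapped in a hypothetical `cleanup_node` helper; the optional prefix argument lets you dry-run it against a scratch directory instead of `/`, and a real run needs root:

```shell
# Sketch of the manual cleanup that `kubeadm reset` asks for.
# cleanup_node is a hypothetical helper, not part of kubeadm or kk.
cleanup_node() {
    root="${1:-}"   # optional scratch-directory prefix for dry runs
    # kubeadm reset leaves /etc/cni/net.d behind
    rm -rf "${root}/etc/cni/net.d"
    # ...and the user's kubeconfig
    rm -f "${root}${HOME}/.kube/config"
    # only touch real firewall state when run against the real root
    if [ -z "$root" ]; then
        iptables -F; iptables -t nat -F; iptables -t mangle -F; iptables -X
        if command -v ipvsadm >/dev/null 2>&1; then ipvsadm --clear; fi
    fi
}
```

Run it with no argument (as root) on the worker before retrying the join, or with a prefix to rehearse what it would delete.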
Relevant log output
Additional information
No response
It seems that there was a bootstrapToken issue when adding the node.
Perhaps you can uninstall the cluster with kk delete cluster -f xxx.yaml, then change the Kubernetes version in the configuration file to v1.23.17 and try reinstalling.
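For context on the error itself: the empty token ("") in the join failure can never pass validation, because kubeadm bootstrap tokens must match the documented format `[a-z0-9]{6}.[a-z0-9]{16}`. A quick sketch of that check (`is_valid_token` is a hypothetical helper; the pattern is kubeadm's documented token format, and the sample token below is made up):

```shell
# Check a string against the kubeadm bootstrap-token format:
# a 6-character token ID, a dot, then a 16-character secret,
# both drawn from [a-z0-9].
is_valid_token() {
    printf '%s\n' "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}
```

On a healthy install, `kubeadm token list` on the control plane shows tokens in exactly this shape; the log here shows KubeKey handed kubeadm an empty string instead.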
Same problem here with Kubernetes 1.26.15. When adding a node it keeps reporting:
token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "k7f0jb", will try again
On the master node, kubectl get configmap cluster-info --namespace=kube-public -o yaml returns:
apiVersion: v1
data:
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJS2VNZ0tXWVp1WEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBMU16QXdOalUzTVRoYUZ3MHpOVEExTWpnd056QXlNVGhhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURsN2dqN0p1U1FYdHVVbkRwMzlBVHlSL2tlblVMYUU1ejI4dkpCTkppQWRFVjYrdDQxa0I4eHJOdkgKZkNhNUVjV3YyREFKb1pvSEVQcUhwUkhqRHh1RG9QallzYUVZVjIzTzdhaU1EeVptR0gwK2E2RGtFZzg3Q1g0RApNSzRlNTNJVGJQSGpMRUhmOVgrMXorbmxaQysxK1djMnJ6ekdMQWNJdHFlR1ZCVEFLVTZKQU1YYVlDQ25UakZVClBibG5IYlRMQUo3VDEzcmdUUVc5MllKZHZHa0w0NGkydVRlWVJGd3BhYVJIcVFsSjhjeFg2b3hqRm5qVHM1dlUKcTdLK01Gd0JYajUvVUtDbDlTZG05M1dtSXNDbmJzVjJoZFE1TEhCdm96VU1oejJjZjduTVNVaG9KTmxWVFRUcwpNdFBSZmNiWU5pOUx0aHlFUUpaNEtGTms2d0s5QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSdERCclFBMWR5MzBMK29Ba0tjQVJwbE5LWkZUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQlhvSFhXVmV4aAp1UUhSck5odWdCMEIvWjdMT2FMZUFVdHI3SEt0ZDU1SmVDMkxyZVRzYllza1J2TEJDNEZ1NTZvS3A4TG0xQ3lpClV6blZmUnd0RjY3dnpHL0tnVkliVHJlY25IRkZIditVOUtDeU83aVdwY2VlNEgyMUFyVERkNEtqcS9QbFNwS0YKc1JCcU5JT0pWeXZDU1JmcFFVT3cxNjB4Z1E0eHczd2dVR0poVG1rMWppNHRyMTRCV2Z0OHNCQ0xxMkJXYkNJMAo5ajQrZEkxRFhWbks0cnVHQ0VTNjNwVXBzbWFJeTkyeWFwSmhuOTRuQ2czMXkyTHJmbzlRV3hxUnFTMDIwbExjCnBLU0trMERVQTY3VTZ0eFNLL0plSGFwcmZoVUhGMGpVcHlaQzAybDFWSHNWbGl6WmlBUmxseEJIQzEzcnJNT1IKU0ZJb1Z2WVJ3RklUCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        server: https://lb.kubesphere.local:6443
      name: ""
    contexts: null
    current-context: ""
    kind: Config
    preferences: {}
    users: null
There is no jws-kubeconfig entry in the ConfigMap. With single-stack IPv4 the node can be added normally; with dual-stack IPv4/IPv6 it fails.
Confirmed this is unrelated to the Kubernetes version; tested 1.26.15, 1.27.16, 1.28.15, 1.29.15, 1.30.12, and 1.31.8, and they all behave the same.
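For anyone trying to reproduce this, a dual-stack variant differs from the single-stack config at the top of this issue only in the network CIDR fields; KubeKey accepts comma-separated IPv4,IPv6 pairs there. A rough sketch (the IPv6 ranges below are placeholders, not taken from the original report):

```yaml
  network:
    plugin: calico
    # comma-separated <IPv4>,<IPv6> pairs enable dual-stack
    kubePodsCIDR: 10.233.64.0/18,fd85:ee78:d8a6:8607::1:0/112
    kubeServiceCIDR: 10.233.0.0/18,fd85:ee78:d8a6:8607::1000/116
```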
Same issue in kk 3.1.7, 3.1.8, and 3.1.9 when installing k8s 1.29, 1.31, 1.32, and 1.33.
W0612 09:55:10.368866 6017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[discovery.bootstrapToken.token: Invalid value: "": the bootstrap token is invalid, discovery.tlsBootstrapToken: Invalid value: "": the bootstrap token is invalid]
To see the stack trace of this error execute with --v=5 or higher