1.28.0, 1.28.1, 1.28.2, 1.29.0 not found
### What version of KubeKey has the issue?
v3.0.13
### What is your OS environment?
Debian 12
### KubeKey config file
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.101.148, internalAddress: 192.168.101.148, user: root, password: "123"}
  # - {name: worker, address: 192.168.101.145, internalAddress: 192.168.101.145, user: root, password: "123"}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - master
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.29.0
    clusterName: cluster.local
    masqueradeAll: false
    maxPods: 110
    nodeCidrMaskSize: 24
    proxyMode: ipvs
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubeadm
  network:
    plugin: calico
    calico:
      ipipMode: Never
      vxlanMode: Never
      vethMTU: 1440
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  registry:
    registryMirrors: []
  addons: []
```
### A clear and concise description of what happened.
1.28.0, 1.28.1, 1.28.2, 1.29.0 not found
### Relevant log output
```shell
root@debian:~# kk create cluster -f config-sample.yaml
_ __ _ _ __
| | / / | | | | / /
| |/ / _ _| |__ ___| |/ / ___ _ _
| \| | | | '_ \ / _ \ \ / _ \ | | |
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
__/ |
|___/
15:43:17 MSK [GreetingsModule] Greetings
15:43:17 MSK message: [master]
Greetings, KubeKey!
15:43:17 MSK success: [master]
15:43:17 MSK [NodePreCheckModule] A pre-check on nodes
15:43:18 MSK success: [master]
15:43:18 MSK [ConfirmModule] Display confirmation form
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master | y | y | y | y | y | | | y | | | y | | | | MSK 15:43:18 |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: y
15:43:19 MSK success: [LocalHost]
15:43:19 MSK [NodeBinariesModule] Download installation binaries
15:43:19 MSK message: [localhost]
downloading amd64 kubeadm v1.29.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 46.0M 100 46.0M 0 0 8498k 0 0:00:05 0:00:05 --:--:-- 9547k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 46.0M 100 46.0M 0 0 7686k 0 0:00:06 0:00:06 --:--:-- 9019k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 46.0M 100 46.0M 0 0 6974k 0 0:00:06 0:00:06 --:--:-- 8047k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 46.0M 100 46.0M 0 0 7817k 0 0:00:06 0:00:06 --:--:-- 9180k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 46.0M 100 46.0M 0 0 7472k 0 0:00:06 0:00:06 --:--:-- 8597k
15:43:54 MSK message: [LocalHost]
Failed to download kubeadm binary: curl -L -o /root/kubekey/kube/v1.29.0/amd64/kubeadm https://storage.googleapis.com/kubernetes-release/release/v1.29.0/bin/linux/amd64/kubeadm error: No SHA256 found for kubeadm. v1.29.0 is not supported.
15:43:54 MSK failed: [LocalHost]
error: Pipeline[CreateClusterPipeline] execute failed: Module[NodeBinariesModule] exec failed:
failed: [LocalHost] [DownloadBinaries] exec failed after 1 retries: Failed to download kubeadm binary: curl -L -o /root/kubekey/kube/v1.29.0/amd64/kubeadm https://storage.googleapis.com/kubernetes-release/release/v1.29.0/bin/linux/amd64/kubeadm error: No SHA256 found for kubeadm. v1.29.0 is not supported.
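# Note (sketch, not from the original log): kk validates every download against
# a checksum table compiled into the binary, so "No SHA256 found ... is not
# supported" usually means this kk release predates the requested Kubernetes
# version. The versions a given kk build knows about can be listed with:
#   kk version --show-supported-k8s
# Upgrading to a newer kk release adds newer Kubernetes versions to that table.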
# v1.27.8 is also reported as not found; the run below uses v1.27.2 instead:
root@debian:~# kk create cluster -f config-sample.yaml
_ __ _ _ __
| | / / | | | | / /
| |/ / _ _| |__ ___| |/ / ___ _ _
| \| | | | '_ \ / _ \ \ / _ \ | | |
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
__/ |
|___/
17:03:29 MSK [GreetingsModule] Greetings
17:03:29 MSK message: [master]
Greetings, KubeKey!
17:03:29 MSK success: [master]
17:03:29 MSK [NodePreCheckModule] A pre-check on nodes
17:03:29 MSK success: [master]
17:03:29 MSK [ConfirmModule] Display confirmation form
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master | y | y | y | y | y | | | y | | | y | | | | MSK 17:03:29 |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: y
17:03:31 MSK success: [LocalHost]
17:03:31 MSK [NodeBinariesModule] Download installation binaries
17:03:31 MSK message: [localhost]
downloading amd64 kubeadm v1.27.2 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 45.9M 100 45.9M 0 0 7711k 0 0:00:06 0:00:06 --:--:-- 9006k
17:03:38 MSK message: [localhost]
downloading amd64 kubelet v1.27.2 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 101M 100 101M 0 0 8657k 0 0:00:11 0:00:11 --:--:-- 10.9M
17:03:52 MSK message: [localhost]
downloading amd64 kubectl v1.27.2 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 46.9M 100 46.9M 0 0 7008k 0 0:00:06 0:00:06 --:--:-- 7976k
17:03:59 MSK message: [localhost]
downloading amd64 helm v3.9.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13.3M 100 13.3M 0 0 1678k 0 0:00:08 0:00:08 --:--:-- 1628k
17:04:08 MSK message: [localhost]
downloading amd64 kubecni v1.2.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 38.6M 100 38.6M 0 0 5272k 0 0:00:07 0:00:07 --:--:-- 8738k
17:04:16 MSK message: [localhost]
downloading amd64 crictl v1.24.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 13.8M 100 13.8M 0 0 3700k 0 0:00:03 0:00:03 --:--:-- 8841k
17:04:20 MSK message: [localhost]
downloading amd64 etcd v3.4.13 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 16.5M 100 16.5M 0 0 4571k 0 0:00:03 0:00:03 --:--:-- 6167k
17:04:24 MSK message: [localhost]
downloading amd64 containerd 1.6.4 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 42.3M 100 42.3M 0 0 6093k 0 0:00:07 0:00:07 --:--:-- 6675k
17:04:32 MSK message: [localhost]
downloading amd64 runc v1.1.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 9194k 100 9194k 0 0 3694k 0 0:00:02 0:00:02 --:--:-- 7306k
17:04:35 MSK message: [localhost]
downloading amd64 calicoctl v3.26.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 62.7M 100 62.7M 0 0 9538k 0 0:00:06 0:00:06 --:--:-- 11.2M
17:04:43 MSK success: [LocalHost]
17:04:43 MSK [ConfigureOSModule] Get OS release
17:04:43 MSK success: [master]
17:04:43 MSK [ConfigureOSModule] Prepare to init OS
17:04:44 MSK success: [master]
17:04:44 MSK [ConfigureOSModule] Generate init os script
17:04:44 MSK success: [master]
17:04:44 MSK [ConfigureOSModule] Exec init os script
17:04:45 MSK stdout: [master]
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
17:04:45 MSK success: [master]
17:04:45 MSK [ConfigureOSModule] configure the ntp server for each node
17:04:45 MSK skipped: [master]
17:04:45 MSK [KubernetesStatusModule] Get kubernetes cluster status
17:04:45 MSK success: [master]
17:04:45 MSK [InstallContainerModule] Sync containerd binaries
17:04:47 MSK success: [master]
17:04:47 MSK [InstallContainerModule] Sync crictl binaries
17:04:48 MSK success: [master]
17:04:48 MSK [InstallContainerModule] Generate containerd service
17:04:48 MSK success: [master]
17:04:48 MSK [InstallContainerModule] Generate containerd config
17:04:48 MSK success: [master]
17:04:48 MSK [InstallContainerModule] Generate crictl config
17:04:48 MSK success: [master]
17:04:48 MSK [InstallContainerModule] Enable containerd
17:04:49 MSK success: [master]
17:04:49 MSK [PullModule] Start to pull images on all nodes
17:04:49 MSK message: [master]
downloading image: kubesphere/etcd:v3.4.13
17:05:04 MSK message: [master]
downloading image: kubesphere/pause:3.8
17:05:09 MSK message: [master]
downloading image: kubesphere/kube-apiserver:v1.27.2
17:05:22 MSK message: [master]
downloading image: kubesphere/kube-controller-manager:v1.27.2
17:05:33 MSK message: [master]
downloading image: kubesphere/kube-scheduler:v1.27.2
17:05:41 MSK message: [master]
downloading image: kubesphere/kube-proxy:v1.27.2
17:05:51 MSK message: [master]
downloading image: coredns/coredns:1.9.3
17:06:01 MSK message: [master]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
17:06:16 MSK message: [master]
downloading image: calico/kube-controllers:v3.26.1
17:06:29 MSK message: [master]
downloading image: calico/cni:v3.26.1
17:06:49 MSK message: [master]
downloading image: calico/node:v3.26.1
17:07:09 MSK message: [master]
downloading image: calico/pod2daemon-flexvol:v3.26.1
17:07:19 MSK message: [master]
downloading image: library/haproxy:2.3
17:07:32 MSK success: [master]
17:07:32 MSK [InstallKubeBinariesModule] Synchronize kubernetes binaries
17:07:49 MSK success: [master]
17:07:49 MSK [InstallKubeBinariesModule] Change kubelet mode
17:07:49 MSK success: [master]
17:07:49 MSK [InstallKubeBinariesModule] Generate kubelet service
17:07:50 MSK success: [master]
17:07:50 MSK [InstallKubeBinariesModule] Enable kubelet service
17:07:50 MSK success: [master]
17:07:50 MSK [InstallKubeBinariesModule] Generate kubelet env
17:07:50 MSK success: [master]
17:07:50 MSK [InitKubernetesModule] Generate kubeadm config
17:07:51 MSK success: [master]
17:07:51 MSK [InitKubernetesModule] Generate audit policy
17:07:51 MSK skipped: [master]
17:07:51 MSK [InitKubernetesModule] Generate audit webhook
17:07:51 MSK skipped: [master]
17:07:51 MSK [InitKubernetesModule] Init cluster using kubeadm
17:07:51 MSK stdout: [master]
your configuration file uses an old API spec: "kubeadm.k8s.io/v1beta2". Please use kubeadm v1.22 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
To see the stack trace of this error execute with --v=5 or higher
17:07:51 MSK stdout: [master]
[preflight] Running pre-flight checks
W1216 17:07:51.188830 2884 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1216 17:07:51.198551 2884 cleanupnode.go:134] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
17:07:51 MSK message: [master]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
your configuration file uses an old API spec: "kubeadm.k8s.io/v1beta2". Please use kubeadm v1.22 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
17:07:51 MSK retry: [master]
17:07:56 MSK stdout: [master]
your configuration file uses an old API spec: "kubeadm.k8s.io/v1beta2". Please use kubeadm v1.22 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
To see the stack trace of this error execute with --v=5 or higher
17:07:56 MSK stdout: [master]
[preflight] Running pre-flight checks
W1216 17:07:56.344533 2899 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1216 17:07:56.351650 2899 cleanupnode.go:134] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
17:07:56 MSK message: [master]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
your configuration file uses an old API spec: "kubeadm.k8s.io/v1beta2". Please use kubeadm v1.22 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
17:07:56 MSK retry: [master]
17:08:01 MSK stdout: [master]
your configuration file uses an old API spec: "kubeadm.k8s.io/v1beta2". Please use kubeadm v1.22 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
To see the stack trace of this error execute with --v=5 or higher
17:08:01 MSK stdout: [master]
[preflight] Running pre-flight checks
W1216 17:08:01.495692 2915 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1216 17:08:01.504846 2915 cleanupnode.go:134] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
17:08:01 MSK message: [master]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
your configuration file uses an old API spec: "kubeadm.k8s.io/v1beta2". Please use kubeadm v1.22 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
17:08:01 MSK failed: [master]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:
failed: [master] [KubeadmInit] exec failed after 3 retries: init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
your configuration file uses an old API spec: "kubeadm.k8s.io/v1beta2". Please use kubeadm v1.22 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
failed: [LocalHost] [DownloadBinaries] exec failed after 1 retries: Failed to download kubeadm binary: curl -L -o /root/kubekey/kube/v1.28.2/amd64/kubeadm https://kubernetes-release.pek3b.qingstor.com/release/v1.28.2/bin/linux/amd64/kubeadm error: No SHA256 found for kubeadm. v1.28.2 is not supported.
```
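The second failure is unrelated to downloads: the `/etc/kubernetes/kubeadm-config.yaml` that KubeKey generates uses the `kubeadm.k8s.io/v1beta2` API, which kubeadm v1.27.2 no longer accepts. Per the error message itself, the migration has to be run with a kubeadm release that still understands `v1beta2` (e.g. v1.22). A sketch of that step, using the config path taken from the log above:

```shell
# Sketch only: migrate the KubeKey-generated kubeadm config to the current
# API version, using an older kubeadm binary that can still read v1beta2.
kubeadm config migrate \
  --old-config /etc/kubernetes/kubeadm-config.yaml \
  --new-config /etc/kubernetes/kubeadm-config-new.yaml
```

In practice, a KubeKey release that targets these Kubernetes versions generates the config with the newer `v1beta3` API in the first place, so upgrading kk avoids the migration entirely.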