Offline deployment of v1.28.8: cluster initialization fails with an error
Which version of KubeKey has the issue?
kk version: &version.Info{Major:"3", Minor:"1", GitVersion:"v3.1.5", GitCommit:"8347277057bf9f84e89fec174019a675d582b23b", GitTreeState:"clean", BuildDate:"2024-08-15T11:55:51Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
What is your OS environment?
CentOS 7.9
KubeKey config file
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master01, address: 10.0.0.23, internalAddress: 10.0.0.23, user: root, privateKeyPath: "/root/.ssh/id_rsa"}
  - {name: k8s-node01, address: 10.0.0.25, internalAddress: 10.0.0.25, user: root, privateKeyPath: "/root/.ssh/id_rsa"}
  - {name: k8s-node02, address: 10.0.0.46, internalAddress: 10.0.0.46, user: root, privateKeyPath: "/root/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - k8s-master01
    master:
    - k8s-master01
    worker:
    - k8s-master01
    - k8s-node01
    - k8s-node02
  controlPlaneEndpoint:
    # internalLoadbalancer: kube-vip  # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    domain: api.k8s.inner.lb
    address: ""
    # port: 6443
  kubernetes:
    version: v1.28.8
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    registryMirrors: []
    insecureRegistries: ["customer.registry.com", "10.0.0.47"]
    privateRegistry: "customer.registry.com"
    auths:
      "customer.registry.com":
        username: admin
        password: BNv6F5TrG0WWBFjcySHLA7bH
        plainHTTP: true
  addons: []
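For context, a cluster is normally created from this config together with the offline artifact by a command along the following lines; the config and artifact file names here are assumptions and were not given in the report:

# Assumed invocation (adjust file names); --with-packages installs OS packages from the ISO referenced in the manifest.
./kk create cluster -f config-sample.yaml -a kubekey-artifact.tar.gz --with-packages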
A clear and concise description of what happened.
The offline artifact was built from the following manifest.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath:
        url:
  - arch: amd64
    type: linux
    id: ubuntu
    version: "22.04"
    osImage: Ubuntu 22.04 LTS
    repository:
      iso:
        localPath:
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.28.8
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    calicoctl:
      version: v3.27.3
    containerRuntimes:
    - type: containerd
      version: 1.7.13
    crictl:
      version: v1.29.0
  images:
  # k8s images
  - sobot-private-cloud.tencentcloudcr.com/koordinator-sh/koord-descheduler:v1.5.0
  - sobot-private-cloud.tencentcloudcr.com/koordinator-sh/koord-manager:v1.5.0
  - sobot-private-cloud.tencentcloudcr.com/koordinator-sh/koordlet:v1.5.0
  - sobot-private-cloud.tencentcloudcr.com/calico/cni:v3.27.3
  - sobot-private-cloud.tencentcloudcr.com/coredns/coredns:1.9.3
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/k8s-dns-node-cache:1.22.20
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-apiserver:v1.28.8
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-controller-manager:v1.28.8
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-controllers:v3.27.3
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-proxy:v1.28.8
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-scheduler:v1.28.8
  - sobot-private-cloud.tencentcloudcr.com/calico/node:v3.27.3
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/pause:3.9
  - sobot-private-cloud.tencentcloudcr.com/library/haproxy:2.3
  - sobot-private-cloud.tencentcloudcr.com/calico/pod2daemon-flexvol:v3.26.1
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/provisioner-localpv:3.3.0
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/linux-utils:3.3.0
  - sobot-private-cloud.tencentcloudcr.com/kubesphere/kubectl:v1.22.0
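For reference, an artifact is exported from a manifest like this with roughly the following command; the file names are assumptions:

# Assumed invocation, run on a machine with internet access.
./kk artifact export -m manifest-sample.yaml -o kubekey-artifact.tar.gz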
After the artifact was built, I deployed against the local Harbor registry. The error output was:

15:43:53 CST success: [LocalHost]
15:43:53 CST [UnArchiveArtifactModule] Create the KubeKey artifact Md5 file
15:43:54 CST success: [LocalHost]
15:43:54 CST [NodeBinariesModule] Download installation binaries
15:43:54 CST message: [localhost] downloading amd64 kubeadm v1.28.8 ...
15:43:54 CST message: [localhost] kubeadm exists
15:43:54 CST message: [localhost] downloading amd64 kubelet v1.28.8 ...
15:43:55 CST message: [localhost] kubelet exists
15:43:55 CST message: [localhost] downloading amd64 kubectl v1.28.8 ...
15:43:55 CST message: [localhost] kubectl exists
15:43:55 CST message: [localhost] downloading amd64 helm v3.14.3 ...
15:43:55 CST message: [localhost] helm exists
15:43:55 CST message: [localhost] downloading amd64 kubecni v1.2.0 ...
15:43:55 CST message: [localhost] kubecni exists
15:43:55 CST message: [localhost] downloading amd64 crictl v1.29.0 ...
15:43:55 CST message: [localhost] crictl exists
15:43:55 CST message: [localhost] downloading amd64 etcd v3.5.13 ...
15:43:55 CST message: [localhost] etcd exists
15:43:55 CST message: [localhost] downloading amd64 containerd 1.7.13 ...
15:43:55 CST message: [localhost] containerd exists
15:43:55 CST message: [localhost] downloading amd64 runc v1.1.12 ...
15:43:55 CST message: [localhost] runc exists
15:43:55 CST message: [localhost] downloading amd64 calicoctl v3.27.3 ...
15:43:55 CST message: [localhost] calicoctl exists
15:43:55 CST success: [LocalHost]
15:43:55 CST [ConfigureOSModule] Get OS release
15:43:56 CST success: [k8s-node02]
15:43:56 CST success: [k8s-node01]
15:43:56 CST success: [k8s-master01]
15:43:56 CST [ConfigureOSModule] Prepare to init OS
15:43:56 CST success: [k8s-node02]
15:43:56 CST success: [k8s-node01]
15:43:56 CST success: [k8s-master01]
15:43:56 CST [ConfigureOSModule] Generate init os script
15:43:56 CST success: [k8s-node02]
15:43:56 CST success: [k8s-master01]
15:43:56 CST success: [k8s-node01]
15:43:56 CST [ConfigureOSModule] Exec init os script
15:43:57 CST stdout: [k8s-node02] setenforce: SELinux is disabled Disabled net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.all.rp_filter = 0 net.ipv4.ip_forward = 1 net.ipv4.tcp_max_tw_buckets = 1048576 net.core.somaxconn = 32768 net.ipv4.tcp_slow_start_after_idle = 0 kernel.sysrq = 1 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 kernel.printk = 5 net.ipv4.ip_local_reserved_ports = 30000-32767 net.core.netdev_max_backlog = 65535 net.core.rmem_max = 33554432 net.core.wmem_max = 33554432 net.ipv4.tcp_max_syn_backlog = 1048576 net.ipv4.neigh.default.gc_thresh1 = 512 net.ipv4.neigh.default.gc_thresh2 = 2048 net.ipv4.neigh.default.gc_thresh3 = 4096 net.ipv4.tcp_retries2 = 15 net.ipv4.tcp_max_orphans = 65535 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp_keepalive_intvl = 30 net.ipv4.tcp_keepalive_probes = 10 net.ipv4.udp_rmem_min = 131072 net.ipv4.udp_wmem_min = 131072 net.ipv4.conf.all.arp_accept = 1 net.ipv4.conf.default.arp_accept = 1 net.ipv4.conf.all.arp_ignore = 1 net.ipv4.conf.default.arp_ignore = 1 vm.max_map_count = 262144 vm.swappiness = 0 vm.overcommit_memory = 0 fs.inotify.max_user_instances = 524288 fs.inotify.max_user_watches = 524288 fs.pipe-max-size = 4194304 fs.aio-max-nr = 262144 kernel.pid_max = 65535 kernel.watchdog_thresh = 5 kernel.hung_task_timeout_secs = 5 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 net.ipv6.conf.all.forwarding = 1 sysctl: setting key
"net.ipv4.tcp_keepalive_probes": Invalid argument net.ipv4.tcp_keepalive_probes = 10net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-arptables = 1 net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 15:43:57 CST stdout: [k8s-node01] setenforce: SELinux is disabled Disabled net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.all.rp_filter = 0 net.ipv4.ip_forward = 1 net.ipv4.tcp_max_tw_buckets = 1048576 net.core.somaxconn = 32768 net.ipv4.tcp_slow_start_after_idle = 0 kernel.sysrq = 1 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 kernel.printk = 5 net.ipv4.ip_local_reserved_ports = 30000-32767 net.core.netdev_max_backlog = 65535 net.core.rmem_max = 33554432 net.core.wmem_max = 33554432 net.ipv4.tcp_max_syn_backlog = 1048576 net.ipv4.neigh.default.gc_thresh1 = 512 net.ipv4.neigh.default.gc_thresh2 = 2048 net.ipv4.neigh.default.gc_thresh3 = 4096 net.ipv4.tcp_retries2 = 15 net.ipv4.tcp_max_orphans = 65535 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp_keepalive_intvl = 30 net.ipv4.tcp_keepalive_probes = 10 net.ipv4.udp_rmem_min = 131072 net.ipv4.udp_wmem_min = 131072 net.ipv4.conf.all.arp_accept = 1 net.ipv4.conf.default.arp_accept = 1 net.ipv4.conf.all.arp_ignore = 1 net.ipv4.conf.default.arp_ignore = 1 vm.max_map_count = 262144 vm.swappiness = 0 vm.overcommit_memory = 0 fs.inotify.max_user_instances = 524288 fs.inotify.max_user_watches = 524288 fs.pipe-max-size = 4194304 fs.aio-max-nr = 262144 kernel.pid_max = 65535 kernel.watchdog_thresh = 5 kernel.hung_task_timeout_secs = 5 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 net.ipv6.conf.all.forwarding = 1 sysctl: setting key "net.ipv4.tcp_keepalive_probes": Invalid argument net.ipv4.tcp_keepalive_probes = 10net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-arptables = 1 net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 15:44:04 CST stdout: [k8s-master01] setenforce: SELinux is disabled Disabled net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.all.rp_filter = 0 net.ipv4.ip_forward = 1 net.ipv4.tcp_max_tw_buckets = 1048576 net.core.somaxconn = 32768 net.ipv4.tcp_slow_start_after_idle = 0 kernel.sysrq = 1 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 kernel.printk = 5 net.ipv4.ip_local_reserved_ports = 30000-32767 net.core.netdev_max_backlog = 65535 net.core.rmem_max = 33554432 net.core.wmem_max = 33554432 net.ipv4.tcp_max_syn_backlog = 1048576 net.ipv4.neigh.default.gc_thresh1 = 512 net.ipv4.neigh.default.gc_thresh2 = 2048 net.ipv4.neigh.default.gc_thresh3 = 4096 net.ipv4.tcp_retries2 = 15 net.ipv4.tcp_max_orphans = 65535 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp_keepalive_intvl = 30 net.ipv4.tcp_keepalive_probes = 10 net.ipv4.udp_rmem_min = 131072 net.ipv4.udp_wmem_min = 131072 net.ipv4.conf.all.arp_accept = 1 net.ipv4.conf.default.arp_accept = 1 net.ipv4.conf.all.arp_ignore = 1 net.ipv4.conf.default.arp_ignore = 1 vm.max_map_count = 262144 vm.swappiness = 0 vm.overcommit_memory = 0 fs.inotify.max_user_instances = 524288 fs.inotify.max_user_watches = 524288 fs.pipe-max-size = 4194304 fs.aio-max-nr = 262144 kernel.pid_max = 65535 kernel.watchdog_thresh = 5 kernel.hung_task_timeout_secs = 5 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 net.ipv6.conf.all.forwarding = 1 sysctl: setting key "net.ipv4.tcp_keepalive_probes": Invalid argument 
net.ipv4.tcp_keepalive_probes = 10net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-arptables = 1 net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 15:44:04 CST success: [k8s-node02] 15:44:04 CST success: [k8s-node01] 15:44:04 CST success: [k8s-master01] 15:44:04 CST [ConfigureOSModule] configure the ntp server for each node 15:44:04 CST skipped: [k8s-node02] 15:44:04 CST skipped: [k8s-node01] 15:44:04 CST skipped: [k8s-master01] 15:44:04 CST [KubernetesStatusModule] Get kubernetes cluster status 15:44:04 CST success: [k8s-master01] 15:44:04 CST [InstallContainerModule] Sync containerd binaries 15:44:04 CST skipped: [k8s-master01] 15:44:04 CST skipped: [k8s-node02] 15:44:04 CST skipped: [k8s-node01] 15:44:04 CST [InstallContainerModule] Generate containerd service 15:44:04 CST skipped: [k8s-node02] 15:44:04 CST skipped: [k8s-master01] 15:44:04 CST skipped: [k8s-node01] 15:44:04 CST [InstallContainerModule] Generate containerd config 15:44:04 CST skipped: [k8s-node02] 15:44:04 CST skipped: [k8s-node01] 15:44:04 CST skipped: [k8s-master01] 15:44:04 CST [InstallContainerModule] Enable containerd 15:44:04 CST skipped: [k8s-node02] 15:44:04 CST skipped: [k8s-node01] 15:44:04 CST skipped: [k8s-master01] 15:44:04 CST [InstallContainerModule] Sync crictl binaries 15:44:04 CST skipped: [k8s-node02] 15:44:04 CST skipped: [k8s-master01] 15:44:04 CST skipped: [k8s-node01] 15:44:04 CST [InstallContainerModule] Generate crictl config 15:44:04 CST success: [k8s-node02] 15:44:04 CST success: [k8s-master01] 15:44:04 CST success: [k8s-node01] 15:44:04 CST [CopyImagesToRegistryModule] Copy images to a private registry from an artifact OCI Path 15:44:04 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/koordinator-sh/koord-descheduler:v1.5.0-amd64 15:44:04 CST Destination: docker://customer.registry.com/koordinator-sh/koord-descheduler:v1.5.0-amd64 Getting image source signatures Copying blob 286c61c9a31a skipped: already exists Copying blob 2bdf44d7aa71 skipped: already exists Copying blob 452e9eed7ecf skipped: already exists Copying blob 0f8b424aa0b9 skipped: already exists Copying blob d557676654e5 skipped: already exists Copying blob c8022d07192e skipped: already exists Copying blob d858cbc252ad skipped: already exists Copying blob 1069fc2daed1 skipped: already exists Copying blob b40161cd83fc skipped: already exists Copying blob 3f4e2c586348 skipped: already exists Copying blob 80a8c047508a skipped: already exists Copying blob e43f5d512bec skipped: already exists Copying config 12905dd9c0 done Writing manifest to image destination Storing signatures 15:44:04 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/koordinator-sh/koord-manager:v1.5.0-amd64 15:44:04 CST Destination: docker://customer.registry.com/koordinator-sh/koord-manager:v1.5.0-amd64 Getting image source signatures Copying blob 286c61c9a31a skipped: already exists Copying blob 2bdf44d7aa71 skipped: already exists Copying blob 452e9eed7ecf skipped: already exists Copying blob 0f8b424aa0b9 skipped: already exists Copying blob d557676654e5 skipped: already exists Copying blob c8022d07192e skipped: already exists Copying blob d858cbc252ad skipped: already exists Copying blob 1069fc2daed1 skipped: already exists Copying blob b40161cd83fc skipped: already exists Copying blob 3f4e2c586348 skipped: already exists Copying blob 80a8c047508a skipped: already exists Copying blob bf24dc6f739d skipped: already exists Copying config 87d9ac0254 done Writing manifest to 
image destination Storing signatures 15:44:05 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/koordinator-sh/koordlet:v1.5.0-amd64 15:44:05 CST Destination: docker://customer.registry.com/koordinator-sh/koordlet:v1.5.0-amd64 Getting image source signatures Copying blob 43f89b94cd7d skipped: already exists Copying blob 5e3b7ee77381 skipped: already exists Copying blob 5bd037f007fd skipped: already exists Copying blob 4cda774ad2ec skipped: already exists Copying blob 775f22adee62 skipped: already exists Copying blob 8722b13b9dd0 skipped: already exists Copying blob 6a28040c9466 skipped: already exists Copying blob 92f9ed391b36 skipped: already exists Copying config 0e5b147aaa done Writing manifest to image destination Storing signatures 15:44:05 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/calico/cni:v3.27.3-amd64 15:44:05 CST Destination: docker://customer.registry.com/calico/cni:v3.27.3-amd64 Getting image source signatures Copying blob 7f6b369675cd skipped: already exists Copying blob 2467ebffbc61 skipped: already exists Copying blob 9e67ece2e21a skipped: already exists Copying blob 2dc9da424f21 skipped: already exists Copying blob a3bc6886b0cc skipped: already exists Copying blob a70831ca8562 skipped: already exists Copying blob 77a0dd1eba72 skipped: already exists Copying blob f0a908dd4912 skipped: already exists Copying blob 38313ede1f1b skipped: already exists Copying blob 780d72147b30 skipped: already exists Copying blob f4bd7abfee54 skipped: already exists Copying blob 4f4fb700ef54 skipped: already exists Copying config eb9a13bb78 done Writing manifest to image destination Storing signatures 15:44:05 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/coredns/coredns:1.9.3-amd64 15:44:05 CST Destination: docker://customer.registry.com/coredns/coredns:1.9.3-amd64 Getting image source signatures Copying blob d92bdee79785 skipped: already exists Copying blob f2401d57212f skipped: already exists Copying config a2fe663586 done Writing manifest to image destination Storing signatures 15:44:05 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/k8s-dns-node-cache:1.22.20-amd64 15:44:05 CST Destination: docker://customer.registry.com/kubesphere/k8s-dns-node-cache:1.22.20-amd64 Getting image source signatures Copying blob d6460eb5ced5 skipped: already exists Copying blob 9088d8860fc5 skipped: already exists Copying blob cf529ee3fa7a skipped: already exists Copying blob 87c9185cc69e skipped: already exists Copying blob d312b7a6b045 skipped: already exists Copying config 0d5dc31130 done Writing manifest to image destination Storing signatures 15:44:06 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-apiserver:v1.28.8-amd64 15:44:06 CST Destination: docker://customer.registry.com/kubesphere/kube-apiserver:v1.28.8-amd64 Getting image source signatures Copying blob fc6336cdd860 skipped: already exists Copying blob 960043b8858c skipped: already exists Copying blob 5c984a731132 skipped: already exists Copying blob eebb06941f3e skipped: already exists Copying blob 02cd68c0cbf6 skipped: already exists Copying blob d3c894b5b2b0 skipped: already exists Copying blob b40161cd83fc skipped: already exists Copying blob 46ba3f23f1d3 skipped: already exists Copying blob 4fa131a1b726 skipped: already exists Copying blob 5e400a958615 skipped: already exists Copying blob 94e53edc7745 skipped: already exists Copying config a009bd11f7 done 
Writing manifest to image destination Storing signatures 15:44:06 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-controller-manager:v1.28.8-amd64 15:44:06 CST Destination: docker://customer.registry.com/kubesphere/kube-controller-manager:v1.28.8-amd64 Getting image source signatures Copying blob fc6336cdd860 skipped: already exists Copying blob 960043b8858c skipped: already exists Copying blob 5c984a731132 skipped: already exists Copying blob eebb06941f3e skipped: already exists Copying blob 02cd68c0cbf6 skipped: already exists Copying blob d3c894b5b2b0 skipped: already exists Copying blob b40161cd83fc skipped: already exists Copying blob 46ba3f23f1d3 skipped: already exists Copying blob 4fa131a1b726 skipped: already exists Copying blob 5e400a958615 skipped: already exists Copying blob 0251913befbc skipped: already exists Copying config ed6dfa47b8 done Writing manifest to image destination Storing signatures 15:44:06 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-controllers:v3.27.3-amd64 15:44:06 CST Destination: docker://customer.registry.com/kubesphere/kube-controllers:v3.27.3-amd64 Getting image source signatures Copying blob 541dccf77e44 skipped: already exists Copying blob 4faccd180c6e skipped: already exists Copying blob dbdaf440d7d2 skipped: already exists Copying blob 1c22ae41ba0b skipped: already exists Copying blob b18148039777 skipped: already exists Copying blob 6725ee4cc5b4 skipped: already exists Copying blob 709996667c9b skipped: already exists Copying blob 9f68ab4f9609 skipped: already exists Copying blob 5d8c0e6d8287 skipped: already exists Copying blob 2421731ce672 skipped: already exists Copying blob 10df919b4f05 skipped: already exists Copying blob 258f5f7faba4 skipped: already exists Copying blob 9ddd038d83c5 skipped: already exists Copying blob d59d71dc3c12 skipped: already exists Copying config ed2d35e60b done Writing manifest to image destination Storing signatures 15:44:06 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-proxy:v1.28.8-amd64 15:44:06 CST Destination: docker://customer.registry.com/kubesphere/kube-proxy:v1.28.8-amd64 Getting image source signatures Copying blob 7b8d7b893f0b skipped: already exists Copying blob e343076964aa skipped: already exists Copying config 76efc62fe6 done Writing manifest to image destination Storing signatures 15:44:07 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/kube-scheduler:v1.28.8-amd64 15:44:07 CST Destination: docker://customer.registry.com/kubesphere/kube-scheduler:v1.28.8-amd64 Getting image source signatures Copying blob fc6336cdd860 skipped: already exists Copying blob 960043b8858c skipped: already exists Copying blob 5c984a731132 skipped: already exists Copying blob eebb06941f3e skipped: already exists Copying blob 02cd68c0cbf6 skipped: already exists Copying blob d3c894b5b2b0 skipped: already exists Copying blob b40161cd83fc skipped: already exists Copying blob 46ba3f23f1d3 skipped: already exists Copying blob 4fa131a1b726 skipped: already exists Copying blob 5e400a958615 skipped: already exists Copying blob 3bd5e25420dc skipped: already exists Copying config beb90fa554 done Writing manifest to image destination Storing signatures 15:44:07 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/calico/node:v3.27.3-amd64 15:44:07 CST Destination: docker://customer.registry.com/calico/node:v3.27.3-amd64 
Getting image source signatures Copying blob 2e45eae287af skipped: already exists Copying blob 502ef6e95862 skipped: already exists Copying blob 327c4b1f0b0a skipped: already exists Copying config 5440f1e5cc done Writing manifest to image destination Storing signatures 15:44:07 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/pause:3.9-amd64 15:44:07 CST Destination: docker://customer.registry.com/kubesphere/pause:3.9-amd64 Getting image source signatures Copying blob 61fec91190a0 skipped: already exists Copying config ada54d1fe6 done Writing manifest to image destination Storing signatures 15:44:07 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/library/haproxy:2.3-amd64 15:44:07 CST Destination: docker://customer.registry.com/library/haproxy:2.3-amd64 Getting image source signatures Copying blob 42c077c10790 skipped: already exists Copying blob 7ea83783973b skipped: already exists Copying blob 9849c7201598 skipped: already exists Copying blob 21beaf372245 skipped: already exists Copying blob d70ea2130bb6 skipped: already exists Copying config ed76d1ad45 done Writing manifest to image destination Storing signatures 15:44:07 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/calico/pod2daemon-flexvol:v3.26.1-amd64 15:44:07 CST Destination: docker://customer.registry.com/calico/pod2daemon-flexvol:v3.26.1-amd64 Getting image source signatures Copying blob bcbe902cc8f2 skipped: already exists Copying blob ee3291750abd skipped: already exists Copying blob 81be78a427ba skipped: already exists Copying blob 0585e205710b skipped: already exists Copying blob a43b09eb580d skipped: already exists Copying blob 8c7ace431c28 skipped: already exists Copying blob 8498b7b5c88b skipped: already exists Copying blob 2336f9a44c47 skipped: already exists Copying blob da4861763bf9 skipped: already exists Copying blob 45a8e1c1136a skipped: already exists Copying blob 25b629e1d006 skipped: already exists Copying blob 2f9f24e877dd skipped: already exists Copying blob a914731ab875 skipped: already exists Copying blob 69c97127315a skipped: already exists Copying blob 6af08340e5fe skipped: already exists Copying blob 980e0c741f2c skipped: already exists Copying blob 7ce226536430 skipped: already exists Copying blob 5a75e16b555b skipped: already exists Copying config 25ff9b57c1 done Writing manifest to image destination Storing signatures 15:44:08 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/provisioner-localpv:3.3.0-amd64 15:44:08 CST Destination: docker://customer.registry.com/kubesphere/provisioner-localpv:3.3.0-amd64 Getting image source signatures Copying blob 1b7ca6aea1dd skipped: already exists Copying blob dc334afc6648 skipped: already exists Copying blob 0781fd48f154 skipped: already exists Copying config 8ab9fdaac5 done Writing manifest to image destination Storing signatures 15:44:08 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/linux-utils:3.3.0-amd64 15:44:08 CST Destination: docker://customer.registry.com/kubesphere/linux-utils:3.3.0-amd64 Getting image source signatures Copying blob 8663204ce13b skipped: already exists Copying blob 13a99462d545 skipped: already exists Copying config 86697973ca done Writing manifest to image destination Storing signatures 15:44:08 CST Source: oci:/data/kk/kubekey/images:sobot-private-cloud.tencentcloudcr.com/kubesphere/kubectl:v1.22.0-amd64 15:44:08 CST Destination: 
docker://customer.registry.com/kubesphere/kubectl:v1.22.0-amd64 Getting image source signatures Copying blob 4e9f2cdf4387 skipped: already exists Copying blob 09b414607f74 skipped: already exists Copying blob b8c417bda645 skipped: already exists Copying config 30e85dae7e done Writing manifest to image destination Storing signatures 15:44:08 CST success: [LocalHost] 15:44:08 CST [CopyImagesToRegistryModule] Push multi-arch manifest to private registry 15:44:08 CST Push multi-arch manifest list: customer.registry.com/calico/node:v3.27.3 INFO[0024] Retrieving digests of member images 15:44:08 CST Digest: sha256:a39a2a2e20a96231358fca4e68a3fd03ba021ff2a40c79b0d38848dbec02574a Length: 392 15:44:08 CST Push multi-arch manifest list: customer.registry.com/kubesphere/linux-utils:3.3.0 INFO[0024] Retrieving digests of member images 15:44:08 CST Digest: sha256:7b32a4317ef5f0399e5a11dc63faaf1769fe2d0fe8f1666091a1b11847e8bd34 Length: 392 15:44:08 CST Push multi-arch manifest list: customer.registry.com/kubesphere/kube-scheduler:v1.28.8 INFO[0024] Retrieving digests of member images 15:44:08 CST Digest: sha256:be60497368efd159339c0c0f16505e329e874cbb8002e129922dece61d37a03a Length: 393 15:44:08 CST Push multi-arch manifest list: customer.registry.com/calico/pod2daemon-flexvol:v3.26.1 INFO[0024] Retrieving digests of member images 15:44:08 CST Digest: sha256:e943320ffe4e5a299f9b818fae31bac158fbe4084125df97434d6954b1803e15 Length: 393 15:44:08 CST Push multi-arch manifest list: customer.registry.com/koordinator-sh/koord-manager:v1.5.0 INFO[0025] Retrieving digests of member images 15:44:08 CST Digest: sha256:9ec77d69f6c0596cb92c8d3aaa33d524527b74191f4e7e86d2aea2844ee39fd2 Length: 393 15:44:08 CST Push multi-arch manifest list: customer.registry.com/kubesphere/kube-controller-manager:v1.28.8 INFO[0025] Retrieving digests of member images 15:44:08 CST Digest: sha256:dcb9724f279896c3896a4ab4538991864420dc7c4630a3b432d6a87ad09c5cd3 Length: 393 15:44:08 CST Push multi-arch manifest list: customer.registry.com/kubesphere/kube-controllers:v3.27.3 INFO[0025] Retrieving digests of member images 15:44:08 CST Digest: sha256:70bfc9dcf0296a14ae87035b8f80911970cb0990c4bb832fc4cf99937284c477 Length: 393 15:44:08 CST Push multi-arch manifest list: customer.registry.com/library/haproxy:2.3 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:1f2e507c23e7018d8c11299d08ad61c92acc803ce29f5b723be74b9dde3a5e53 Length: 393 15:44:09 CST Push multi-arch manifest list: customer.registry.com/koordinator-sh/koord-descheduler:v1.5.0 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:fa750021b6114e8e2ac28ef7d0664d29f459739d799bb2920322ab7181ba070b Length: 393 15:44:09 CST Push multi-arch manifest list: customer.registry.com/calico/cni:v3.27.3 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:b8b8db087f6ffd6993e64f733706ada233d6f6388b258d4f0d9976e5c4072c35 Length: 393 15:44:09 CST Push multi-arch manifest list: customer.registry.com/coredns/coredns:1.9.3 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:67c5f759dc05ed4408037dcb3dfc4d16b6b8de5bc1e7a9880ccfd3156737a422 Length: 392 15:44:09 CST Push multi-arch manifest list: customer.registry.com/kubesphere/k8s-dns-node-cache:1.22.20 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:d24c38635e172fc9432f56a23ad8c42ff1d12387a379654af1c5ddba1d0f2497 Length: 393 15:44:09 CST Push multi-arch manifest list: customer.registry.com/kubesphere/kube-apiserver:v1.28.8 INFO[0025] 
Retrieving digests of member images 15:44:09 CST Digest: sha256:5c53442d7bfce382b65d5d67a549daf82b84a91cac476064a426c5057e3b8328 Length: 393 15:44:09 CST Push multi-arch manifest list: customer.registry.com/kubesphere/kube-proxy:v1.28.8 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:8bb9e2c995a42d77e40d8d705b9efb7f7b9940280461994602d93775e697ef46 Length: 392 15:44:09 CST Push multi-arch manifest list: customer.registry.com/kubesphere/pause:3.9 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:7315ae9eecc0fce77d454f77cf85e4437836b77542ea1f38de59ac71df32869d Length: 392 15:44:09 CST Push multi-arch manifest list: customer.registry.com/kubesphere/provisioner-localpv:3.3.0 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:4c4be87f4d1db406c142b162f0141a91d2f9e193f83231a1f40f4e64de852f14 Length: 392 15:44:09 CST Push multi-arch manifest list: customer.registry.com/koordinator-sh/koordlet:v1.5.0 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:8201ef650844446474f91a80d5fddcdeb2f93edce45041be1f5089c951b1b855 Length: 393 15:44:09 CST Push multi-arch manifest list: customer.registry.com/kubesphere/kubectl:v1.22.0 INFO[0025] Retrieving digests of member images 15:44:09 CST Digest: sha256:ba0e4cac5e7d566e34526ad5c33235f2c758dbf33f274bfed680c1f6e19a59ad Length: 392 15:44:09 CST success: [LocalHost] 15:44:09 CST [PullModule] Start to pull images on all nodes 15:44:09 CST message: [k8s-node02] downloading image: customer.registry.com/kubesphere/pause:3.9 15:44:09 CST message: [k8s-master01] downloading image: customer.registry.com/kubesphere/pause:3.9 15:44:09 CST message: [k8s-node01] downloading image: customer.registry.com/kubesphere/pause:3.9 15:44:09 CST message: [k8s-node02] downloading image: customer.registry.com/kubesphere/kube-proxy:v1.28.8 15:44:09 CST message: [k8s-node01] downloading image: customer.registry.com/kubesphere/kube-proxy:v1.28.8 15:44:09 CST message: [k8s-master01] downloading image: customer.registry.com/kubesphere/kube-apiserver:v1.28.8 15:44:09 CST message: [k8s-node02] downloading image: customer.registry.com/coredns/coredns:1.9.3 15:44:09 CST message: [k8s-node01] downloading image: customer.registry.com/coredns/coredns:1.9.3 15:44:09 CST message: [k8s-master01] downloading image: customer.registry.com/kubesphere/kube-controller-manager:v1.28.8 15:44:09 CST message: [k8s-node02] downloading image: customer.registry.com/kubesphere/k8s-dns-node-cache:1.22.20 15:44:09 CST message: [k8s-node01] downloading image: customer.registry.com/kubesphere/k8s-dns-node-cache:1.22.20 15:44:09 CST message: [k8s-master01] downloading image: customer.registry.com/kubesphere/kube-scheduler:v1.28.8 15:44:09 CST message: [k8s-node02] downloading image: customer.registry.com/calico/kube-controllers:v3.27.3 15:44:09 CST message: [k8s-node01] downloading image: customer.registry.com/calico/kube-controllers:v3.27.3 15:44:09 CST message: [k8s-master01] downloading image: customer.registry.com/kubesphere/kube-proxy:v1.28.8 15:44:09 CST message: [k8s-node02] downloading image: customer.registry.com/calico/cni:v3.27.3 15:44:09 CST message: [k8s-node01] downloading image: customer.registry.com/calico/cni:v3.27.3 15:44:09 CST message: [k8s-master01] downloading image: customer.registry.com/coredns/coredns:1.9.3 15:44:09 CST message: [k8s-node02] downloading image: customer.registry.com/calico/node:v3.27.3 15:44:10 CST message: [k8s-node01] downloading image: customer.registry.com/calico/node:v3.27.3 
15:44:10 CST message: [k8s-node02] downloading image: customer.registry.com/calico/pod2daemon-flexvol:v3.27.3 15:44:10 CST message: [k8s-master01] downloading image: customer.registry.com/kubesphere/k8s-dns-node-cache:1.22.20 15:44:10 CST message: [k8s-node01] downloading image: customer.registry.com/calico/pod2daemon-flexvol:v3.27.3 15:44:10 CST message: [k8s-master01] downloading image: customer.registry.com/calico/kube-controllers:v3.27.3 15:44:10 CST message: [k8s-master01] downloading image: customer.registry.com/calico/cni:v3.27.3 15:44:10 CST message: [k8s-master01] downloading image: customer.registry.com/calico/node:v3.27.3 15:44:10 CST message: [k8s-master01] downloading image: customer.registry.com/calico/pod2daemon-flexvol:v3.27.3 15:44:10 CST success: [k8s-node02] 15:44:10 CST success: [k8s-node01] 15:44:10 CST success: [k8s-master01] 15:44:10 CST [ETCDPreCheckModule] Get etcd status 15:44:10 CST success: [k8s-master01] 15:44:10 CST [CertsModule] Fetch etcd certs 15:44:10 CST success: [k8s-master01] 15:44:10 CST [CertsModule] Generate etcd Certs [certs] Generating "ca" certificate and key [certs] admin-k8s-master01 serving cert is signed for DNS names [api.k8s.inner.lb etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local k8s-master01 k8s-node01 k8s-node02 localhost] and IPs [127.0.0.1 ::1 10.0.0.23 10.0.0.25 10.0.0.46] [certs] member-k8s-master01 serving cert is signed for DNS names [api.k8s.inner.lb etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local k8s-master01 k8s-node01 k8s-node02 localhost] and IPs [127.0.0.1 ::1 10.0.0.23 10.0.0.25 10.0.0.46] [certs] node-k8s-master01 serving cert is signed for DNS names [api.k8s.inner.lb etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local k8s-master01 k8s-node01 k8s-node02 localhost] and IPs [127.0.0.1 ::1 10.0.0.23 10.0.0.25 10.0.0.46] 15:44:10 CST success: [LocalHost] 15:44:10 CST [CertsModule] Synchronize certs file 15:44:11 CST success: [k8s-master01] 15:44:11 CST [CertsModule] Synchronize certs file to master 15:44:11 CST skipped: [k8s-master01] 15:44:11 CST [InstallETCDBinaryModule] Install etcd using binary 15:44:12 CST success: [k8s-master01] 15:44:12 CST [InstallETCDBinaryModule] Generate etcd service 15:44:12 CST success: [k8s-master01] 15:44:12 CST [InstallETCDBinaryModule] Generate access address 15:44:12 CST success: [k8s-master01] 15:44:12 CST [ETCDConfigureModule] Health check on exist etcd 15:44:12 CST skipped: [k8s-master01] 15:44:12 CST [ETCDConfigureModule] Generate etcd.env config on new etcd 15:44:12 CST success: [k8s-master01] 15:44:12 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd 15:44:12 CST success: [k8s-master01] 15:44:12 CST [ETCDConfigureModule] Restart etcd 15:44:17 CST stdout: [k8s-master01] Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service. 
15:44:17 CST success: [k8s-master01] 15:44:17 CST [ETCDConfigureModule] Health check on all etcd 15:44:17 CST success: [k8s-master01] 15:44:17 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd 15:44:18 CST success: [k8s-master01] 15:44:18 CST [ETCDConfigureModule] Health check on all etcd 15:44:18 CST success: [k8s-master01] 15:44:18 CST [ETCDBackupModule] Backup etcd data regularly 15:44:18 CST success: [k8s-master01] 15:44:18 CST [ETCDBackupModule] Generate backup ETCD service 15:44:18 CST success: [k8s-master01] 15:44:18 CST [ETCDBackupModule] Generate backup ETCD timer 15:44:18 CST success: [k8s-master01] 15:44:18 CST [ETCDBackupModule] Enable backup etcd service 15:44:18 CST success: [k8s-master01] 15:44:18 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries 15:44:28 CST success: [k8s-master01] 15:44:28 CST success: [k8s-node02] 15:44:28 CST success: [k8s-node01] 15:44:28 CST [InstallKubeBinariesModule] Change kubelet mode 15:44:28 CST success: [k8s-node02] 15:44:28 CST success: [k8s-node01] 15:44:28 CST success: [k8s-master01] 15:44:28 CST [InstallKubeBinariesModule] Generate kubelet service 15:44:28 CST success: [k8s-node02] 15:44:28 CST success: [k8s-node01] 15:44:28 CST success: [k8s-master01] 15:44:28 CST [InstallKubeBinariesModule] Enable kubelet service 15:44:29 CST success: [k8s-master01] 15:44:29 CST success: [k8s-node02] 15:44:29 CST success: [k8s-node01] 15:44:29 CST [InstallKubeBinariesModule] Generate kubelet env 15:44:29 CST success: [k8s-node02] 15:44:29 CST success: [k8s-node01] 15:44:29 CST success: [k8s-master01] 15:44:29 CST [InitKubernetesModule] Generate kubeadm config 15:44:29 CST success: [k8s-master01] 15:44:29 CST [InitKubernetesModule] Generate audit policy 15:44:29 CST skipped: [k8s-master01] 15:44:29 CST [InitKubernetesModule] Generate audit webhook 15:44:29 CST skipped: [k8s-master01] 15:44:29 CST [InitKubernetesModule] Init cluster using kubeadm 15:48:56 CST stdout: [k8s-master01] W0328 15:44:29.470836 30675 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] [init] Using Kubernetes version: v1.28.8 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [api.k8s.inner.lb k8s-master01 k8s-master01.cluster.local k8s-node01 k8s-node01.cluster.local k8s-node02 k8s-node02.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.233.0.1 10.0.0.23 127.0.0.1 10.0.0.25 10.0.0.46] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client certificate 
generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
15:48:57 CST stdout: [k8s-master01]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0328 15:48:56.906010 32590 reset.go:120] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://api.k8s.inner.lb:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
[preflight] Running pre-flight checks
W0328 15:48:56.906135 32590 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file. 15:48:57 CST message: [k8s-master01] init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" W0328 15:44:29.470836 30675 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] [init] Using Kubernetes version: v1.28.8 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [api.k8s.inner.lb k8s-master01 k8s-master01.cluster.local k8s-node01 k8s-node01.cluster.local k8s-node02 k8s-node02.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.233.0.1 10.0.0.23 127.0.0.1 10.0.0.25 10.0.0.46] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 15:48:57 CST retry: [k8s-master01] 15:53:04 CST stdout: [k8s-master01] W0328 15:49:02.536635 32905 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] [init] Using Kubernetes version: v1.28.8 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [api.k8s.inner.lb k8s-master01 k8s-master01.cluster.local k8s-node01 k8s-node01.cluster.local k8s-node02 k8s-node02.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.233.0.1 10.0.0.23 127.0.0.1 10.0.0.25 10.0.0.46] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher 15:53:05 CST stdout: [k8s-master01] [reset] Reading configuration from the cluster... [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' W0328 15:53:04.593004 34794 reset.go:120] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://api.k8s.inner.lb:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") [preflight] Running pre-flight checks W0328 15:53:04.593145 34794 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory [reset] Deleted contents of the etcd data directory: /var/lib/etcd [reset] Stopping the kubelet service [reset] Unmounting mounted directories in "/var/lib/kubelet" [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki] [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file. 15:53:05 CST message: [k8s-master01] init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" W0328 15:49:02.536635 32905 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10] [init] Using Kubernetes version: v1.28.8 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [api.k8s.inner.lb k8s-master01 k8s-master01.cluster.local k8s-node01 k8s-node01.cluster.local k8s-node02 k8s-node02.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.233.0.1 10.0.0.23 127.0.0.1 10.0.0.25 10.0.0.46] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] External etcd mode: Skipping etcd/ca certificate authority generation [certs] External etcd mode: Skipping etcd/server certificate generation [certs] External etcd mode: Skipping etcd/peer certificate generation [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
15:53:05 CST retry: [k8s-master01]
15:53:25 CST stdout: [k8s-master01]
W0328 15:53:10.260740 35126 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.28.8
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://10.0.0.23:2379/version": dial tcp 10.0.0.23:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
15:53:25 CST stdout: [k8s-master01]
[preflight] Running pre-flight checks
W0328 15:53:25.389737 35259 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
15:53:25 CST message: [k8s-master01]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W0328 15:53:10.260740 35126 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.28.8
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://10.0.0.23:2379/version": dial tcp 10.0.0.23:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
15:53:25 CST failed: [k8s-master01]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:
failed: [k8s-master01] [KubeadmInit] exec failed after 3 retries: init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W0328 15:53:10.260740 35126 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.28.8
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://10.0.0.23:2379/version": dial tcp 10.0.0.23:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
From the log, the error is that it cannot connect to etcd. Check the etcd logs.
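For reference, a minimal sketch of that check on the etcd node, assuming the KubeKey-managed etcd systemd service and the /etc/ssl/etcd/ssl certificate layout referenced later in this thread (the exact certificate file names are per-node and may differ):
# Check the etcd service and its recent logs on k8s-master01
systemctl status etcd
journalctl -u etcd --no-pager | tail -n 50
# Hypothetical cert names following KubeKey's admin-<hostname>.pem convention; adjust to what is actually under /etc/ssl/etcd/ssl
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.23:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-k8s-master01.pem \
  --key=/etc/ssl/etcd/ssl/admin-k8s-master01-key.pem \
  endpoint health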
I reinstalled once more; this time it looks like a certificate problem:
Mar 28 18:16:31 VM-0-23-centos kubelet: W0328 18:16:31.008336 46469 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://api.k8s.inner.lb:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master01&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Mar 28 18:16:31 VM-0-23-centos kubelet: E0328 18:16:31.008385 46469 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api.k8s.inner.lb:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master01&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Mar 28 18:16:31 VM-0-23-centos kubelet: W0328 18:16:31.146749 46469 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://api.k8s.inner.lb:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Mar 28 18:16:31 VM-0-23-centos kubelet: E0328 18:16:31.146775 46469 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api.k8s.inner.lb:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Mar 28 18:16:31 VM-0-23-centos etcd: {"level":"fatal","ts":"2025-03-28T18:16:31.207624+0800","caller":"etcdserver/server.go:898","msg":"failed to purge wal file","error":"open /var/lib/etcd/member/wal: no such file or directory","stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).purgeFile\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:898\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).GoAttach.func1\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:2803"}
Mar 28 18:16:31 VM-0-23-centos systemd: etcd.service: main process exited, code=exited, status=1/FAILURE
Mar 28 18:16:31 VM-0-23-centos systemd: Unit etcd.service entered failed state.
Mar 28 18:16:31 VM-0-23-centos systemd: etcd.service failed
{"level":"fatal","ts":"2025-03-28T18:16:31.207624+0800","caller":"etcdserver/server.go:898","msg":"failed to purge wal file","error":"open /var/lib/etcd/member/wal: no such file or directory","stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).purgeFile\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:898\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).GoAttach.func1\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:280
Would the etcd data directory be deleted while the cluster is still initializing? This message suggests the directory is gone or corrupted. I was watching in parallel during the installation: etcd started up normally, but when kubeadm init ran it got stuck, and then etcd crashed.
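As a side note on the "certificate signed by unknown authority" messages above: they usually mean the kubelet is still trusting a CA left over from a previous install. A minimal sketch of a check, assuming the standard kubeadm paths that appear in the logs (/etc/kubernetes/pki and /etc/kubernetes/kubelet.conf):
# Fingerprint the CA that the new control-plane certificates were signed with
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -enddate -fingerprint -sha256
# Fingerprint the CA embedded in the kubelet's kubeconfig
grep 'certificate-authority-data' /etc/kubernetes/kubelet.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -fingerprint -sha256
# If the two fingerprints differ, stale PKI was reused; remove /etc/kubernetes/pki and $HOME/.kube before the next attempt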
When you reinstalled, did you clean up the leftover directories? Try cleaning up with kk delete cluster.
My problem looks similar to https://github.com/kubesphere/kubekey/issues/2165. And I have already cleaned up: I ran kk delete cluster -f config.yaml and also batch-executed the following:
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C
rm -rvf $HOME/.kube
rm -rvf ~/.kube/
rm -rvf /etc/kubernetes/
rm -rvf /etc/systemd/system/kubelet.service.d
rm -rvf /etc/systemd/system/kubelet.service
rm -rvf /usr/local/bin/kube*
rm -rvf /etc/cni
rm -rvf /opt/cni
rm -rvf /var/lib/etcd
rm -rvf /var/etcd
rm -rvf /etc/ssl/etcd/ssl
rm -rvf /etc/etcd.env
rm -rvf /etc/systemd/system/etcd.service
rm -rvf /var/lib/kubelet
It does look like kk has this bug: with containerManager: containerd, an offline package built for kubernetes version v1.28.8 and deployed offline hits this problem. It has never happened with online deployment. An offline package built for v1.23.17 with containerManager: docker really has no such problem.
If Harbor is also deployed on a node in the cluster, check the containerd configuration (/etc/containerd/config.toml) on that node before creating the cluster. If the node where Harbor is installed does not have a containerd configuration file, you can create one by referring to the following example.
mkdir -p /etc/containerd
# Create the containerd configuration file (adjust sandbox_image and the registry entries under [plugins."io.containerd.grpc.v1.cri".registry.configs] to match your actual environment)
cat <<EOF > /etc/containerd/config.toml
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"

[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[ttrpc]
  address = ""
  uid = 0
  gid = 0

[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "dockerhub.kubekey.local/kse/kubesphere/pause:3.6"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io/"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."dockerhub.kubekey.local".auth]
          username = "admin"
          password = "harbor2345"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."dockerhub.kubekey.local".tls]
          ca_file = ""
          cert_file = ""
          key_file = ""
          insecure_skip_verify = true
EOF
# restart containerd
systemctl restart containerd
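After restarting containerd, one quick way to confirm the registry auth and TLS settings are being picked up is to pull the sandbox image through the CRI plugin. A hedged example using the registry and image from the sample config above (substitute your actual private registry, e.g. customer.registry.com, and image):
# Pull through containerd's CRI plugin so the [plugins."io.containerd.grpc.v1.cri".registry] settings are exercised
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull dockerhub.kubekey.local/kse/kubesphere/pause:3.6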