All-in-One Installation of Kubernetes and KubeSphere on Linux Does Not Work
I have followed the article, but KubeSphere does not load. I am installing on Ubuntu 20, 22, and Debian 12.
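For context, the all-in-one steps from the article were, as far as I remember, roughly the following. I am quoting from memory, so treat the download line and the --with-kubesphere argument as a sketch of the guide rather than the exact commands I ran:

curl -sfL https://get-kk.kubesphere.io | sh -
chmod +x kk
./kk create cluster --with-kubesphere

The full output that KubeKey produced is below.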
[KubeKey ASCII art banner]
15:01:22 UTC [GreetingsModule] Greetings
15:01:23 UTC message: [svrksmaster]
Greetings, KubeKey!
15:01:23 UTC success: [svrksmaster]
15:01:23 UTC [NodePreCheckModule] A pre-check on nodes
15:01:23 UTC success: [svrksmaster]
15:01:23 UTC [ConfirmModule] Display confirmation form
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| svrksmaster | y    | y    | y       | y        | y     | y     |         | y         |        |        |            |            |             |                  | UTC 15:01:23 |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment. Before installation, ensure that your machines meet all requirements specified at https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
15:01:26 UTC success: [LocalHost]
15:01:26 UTC [NodeBinariesModule] Download installation binaries
15:01:26 UTC message: [localhost] downloading amd64 kubeadm v1.23.10 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 43.1M  100 43.1M    0     0  18.9M      0  0:00:02  0:00:02 --:--:-- 18.9M
15:01:29 UTC message: [localhost] downloading amd64 kubelet v1.23.10 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  118M  100  118M    0     0  42.2M      0  0:00:02  0:00:02 --:--:-- 42.2M
15:01:32 UTC message: [localhost] downloading amd64 kubectl v1.23.10 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.4M  100 44.4M    0     0   9.8M      0  0:00:04  0:00:04 --:--:-- 10.8M
15:01:37 UTC message: [localhost] downloading amd64 helm v3.9.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13.3M  100 13.3M    0     0  4115k      0  0:00:03  0:00:03 --:--:-- 4115k
15:01:41 UTC message: [localhost] downloading amd64 kubecni v0.9.1 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 37.9M  100 37.9M    0     0  26.3M      0  0:00:01  0:00:01 --:--:-- 55.7M
15:01:43 UTC message: [localhost] downloading amd64 crictl v1.24.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 13.8M  100 13.8M    0     0  5723k      0  0:00:02  0:00:02 --:--:-- 8146k
15:01:45 UTC message: [localhost] downloading amd64 etcd v3.4.13 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 16.5M  100 16.5M    0     0  6328k      0  0:00:02  0:00:02 --:--:-- 8518k
15:01:48 UTC message: [localhost] downloading amd64 docker 20.10.8 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 58.1M  100 58.1M    0     0  43.5M      0  0:00:01  0:00:01 --:--:-- 43.5M
15:01:50 UTC success: [LocalHost]
15:01:50 UTC [ConfigureOSModule] Get OS release
15:01:50 UTC success: [svrksmaster]
15:01:50 UTC [ConfigureOSModule] Prepare to init OS
15:01:51 UTC success: [svrksmaster]
15:01:51 UTC [ConfigureOSModule] Generate init os script
15:01:51 UTC success: [svrksmaster]
15:01:51 UTC [ConfigureOSModule] Exec init os script
15:01:52 UTC stdout: [svrksmaster]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
15:01:52 UTC success: [svrksmaster]
15:01:52 UTC [ConfigureOSModule] configure the ntp server for each node
15:01:52 UTC skipped: [svrksmaster]
15:01:52 UTC [KubernetesStatusModule] Get kubernetes cluster status
15:01:52 UTC success: [svrksmaster]
15:01:52 UTC [InstallContainerModule] Sync docker binaries
15:01:56 UTC success: [svrksmaster]
15:01:56 UTC [InstallContainerModule] Generate docker service
15:01:56 UTC success: [svrksmaster]
15:01:56 UTC [InstallContainerModule] Generate docker config
15:01:56 UTC success: [svrksmaster]
15:01:56 UTC [InstallContainerModule] Enable docker
15:01:58 UTC success: [svrksmaster]
15:01:58 UTC [InstallContainerModule] Add auths to container runtime
15:01:58 UTC skipped: [svrksmaster]
15:01:58 UTC [PullModule] Start to pull images on all nodes
15:01:58 UTC message: [svrksmaster] downloading image: kubesphere/pause:3.6
15:02:01 UTC message: [svrksmaster] downloading image: kubesphere/kube-apiserver:v1.23.10
15:02:08 UTC message: [svrksmaster] downloading image: kubesphere/kube-controller-manager:v1.23.10
15:02:13 UTC message: [svrksmaster] downloading image: kubesphere/kube-scheduler:v1.23.10
15:02:17 UTC message: [svrksmaster] downloading image: kubesphere/kube-proxy:v1.23.10
15:02:22 UTC message: [svrksmaster] downloading image: coredns/coredns:1.8.6
15:02:26 UTC message: [svrksmaster] downloading image: kubesphere/k8s-dns-node-cache:1.15.12
15:02:31 UTC message: [svrksmaster] downloading image: calico/kube-controllers:v3.23.2
15:02:37 UTC message: [svrksmaster] downloading image: calico/cni:v3.23.2
15:02:47 UTC message: [svrksmaster] downloading image: calico/node:v3.23.2
15:02:55 UTC message: [svrksmaster] downloading image: calico/pod2daemon-flexvol:v3.23.2
15:02:59 UTC success: [svrksmaster]
15:02:59 UTC [ETCDPreCheckModule] Get etcd status
15:02:59 UTC success: [svrksmaster]
15:02:59 UTC [CertsModule] Fetch etcd certs
15:02:59 UTC success: [svrksmaster]
15:02:59 UTC [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-svrksmaster serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost svrksmaster] and IPs [127.0.0.1 ::1 192.168.10.148]
[certs] member-svrksmaster serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost svrksmaster] and IPs [127.0.0.1 ::1 192.168.10.148]
[certs] node-svrksmaster serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost svrksmaster] and IPs [127.0.0.1 ::1 192.168.10.148]
15:03:00 UTC success: [LocalHost]
15:03:00 UTC [CertsModule] Synchronize certs file
15:03:00 UTC success: [svrksmaster]
15:03:00 UTC [CertsModule] Synchronize certs file to master
15:03:00 UTC skipped: [svrksmaster]
15:03:00 UTC [InstallETCDBinaryModule] Install etcd using binary
15:03:01 UTC success: [svrksmaster]
15:03:01 UTC [InstallETCDBinaryModule] Generate etcd service
15:03:01 UTC success: [svrksmaster]
15:03:01 UTC [InstallETCDBinaryModule] Generate access address
15:03:01 UTC success: [svrksmaster]
15:03:01 UTC [ETCDConfigureModule] Health check on exist etcd
15:03:01 UTC skipped: [svrksmaster]
15:03:01 UTC [ETCDConfigureModule] Generate etcd.env config on new etcd
15:03:01 UTC success: [svrksmaster]
15:03:01 UTC [ETCDConfigureModule] Refresh etcd.env config on all etcd
15:03:01 UTC success: [svrksmaster]
15:03:01 UTC [ETCDConfigureModule] Restart etcd
15:03:02 UTC stdout: [svrksmaster]
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
15:03:02 UTC success: [svrksmaster]
15:03:02 UTC [ETCDConfigureModule] Health check on all etcd
15:03:02 UTC success: [svrksmaster]
15:03:02 UTC [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
15:03:03 UTC success: [svrksmaster]
15:03:03 UTC [ETCDConfigureModule] Health check on all etcd
15:03:03 UTC success: [svrksmaster]
15:03:03 UTC [ETCDBackupModule] Backup etcd data regularly
15:03:03 UTC success: [svrksmaster]
15:03:03 UTC [ETCDBackupModule] Generate backup ETCD service
15:03:03 UTC success: [svrksmaster]
15:03:03 UTC [ETCDBackupModule] Generate backup ETCD timer
15:03:03 UTC success: [svrksmaster]
15:03:03 UTC [ETCDBackupModule] Enable backup etcd service
15:03:03 UTC success: [svrksmaster]
15:03:03 UTC [InstallKubeBinariesModule] Synchronize kubernetes binaries
15:03:09 UTC success: [svrksmaster]
15:03:09 UTC [InstallKubeBinariesModule] Synchronize kubelet
15:03:09 UTC success: [svrksmaster]
15:03:09 UTC [InstallKubeBinariesModule] Generate kubelet service
15:03:09 UTC success: [svrksmaster]
15:03:09 UTC [InstallKubeBinariesModule] Enable kubelet service
15:03:10 UTC success: [svrksmaster]
15:03:10 UTC [InstallKubeBinariesModule] Generate kubelet env
15:03:10 UTC success: [svrksmaster]
15:03:10 UTC [InitKubernetesModule] Generate kubeadm config
15:03:10 UTC success: [svrksmaster]
15:03:10 UTC [InitKubernetesModule] Init cluster using kubeadm
15:03:23 UTC stdout: [svrksmaster]
W0419 15:03:10.317656 24544 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.23.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost svrksmaster svrksmaster.cluster.local] and IPs [10.233.0.1 192.168.10.148 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.005574 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node svrksmaster as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node svrksmaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: keoas4.prltazgtc7mjdc95
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token keoas4.prltazgtc7mjdc95 \
        --discovery-token-ca-cert-hash sha256:fcac088370e9053426cfe72dd3abc71d82de8502b716cf0dd1f25ab7ff4df766 \
        --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token keoas4.prltazgtc7mjdc95 \
        --discovery-token-ca-cert-hash sha256:fcac088370e9053426cfe72dd3abc71d82de8502b716cf0dd1f25ab7ff4df766
15:03:23 UTC success: [svrksmaster]
15:03:23 UTC [InitKubernetesModule] Copy admin.conf to ~/.kube/config
15:03:23 UTC success: [svrksmaster]
15:03:23 UTC [InitKubernetesModule] Remove master taint
15:03:24 UTC stdout: [svrksmaster]
node/svrksmaster untainted
15:03:24 UTC stdout: [svrksmaster]
error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found
15:03:24 UTC [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes svrksmaster node-role.kubernetes.io/control-plane=:NoSchedule-"
error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found: Process exited with status 1
15:03:24 UTC success: [svrksmaster]
15:03:24 UTC [InitKubernetesModule] Add worker label
15:03:24 UTC stdout: [svrksmaster]
node/svrksmaster labeled
15:03:24 UTC success: [svrksmaster]
15:03:24 UTC [ClusterDNSModule] Generate coredns service
15:03:24 UTC success: [svrksmaster]
15:03:24 UTC [ClusterDNSModule] Override coredns service
15:03:24 UTC stdout: [svrksmaster]
service "kube-dns" deleted
15:03:25 UTC stdout: [svrksmaster]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
15:03:25 UTC success: [svrksmaster]
15:03:25 UTC [ClusterDNSModule] Generate nodelocaldns
15:03:25 UTC success: [svrksmaster]
15:03:25 UTC [ClusterDNSModule] Deploy nodelocaldns
15:03:25 UTC stdout: [svrksmaster]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
15:03:25 UTC success: [svrksmaster]
15:03:25 UTC [ClusterDNSModule] Generate nodelocaldns configmap
15:03:26 UTC success: [svrksmaster]
15:03:26 UTC [ClusterDNSModule] Apply nodelocaldns configmap
15:03:26 UTC stdout: [svrksmaster]
configmap/nodelocaldns created
15:03:26 UTC success: [svrksmaster]
15:03:26 UTC [KubernetesStatusModule] Get kubernetes cluster status
15:03:26 UTC stdout: [svrksmaster]
v1.23.10
15:03:26 UTC stdout: [svrksmaster]
svrksmaster v1.23.10 [map[address:192.168.10.148 type:InternalIP] map[address:svrksmaster type:Hostname]]
15:03:27 UTC stdout: [svrksmaster]
I0419 15:03:27.187406 25796 version.go:255] remote version is much newer: v1.30.0; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
be5add979d6cac4f67cb8c8c007364a02fcd0bc44e8b4a923c90033fc785e1fe
15:03:27 UTC stdout: [svrksmaster]
secret/kubeadm-certs patched
15:03:27 UTC stdout: [svrksmaster]
secret/kubeadm-certs patched
15:03:27 UTC stdout: [svrksmaster]
secret/kubeadm-certs patched
15:03:27 UTC stdout: [svrksmaster]
ke4uy9.qso71iojeo3r8xkp
15:03:27 UTC success: [svrksmaster]
15:03:27 UTC [JoinNodesModule] Generate kubeadm config
15:03:27 UTC skipped: [svrksmaster]
15:03:27 UTC [JoinNodesModule] Join control-plane node
15:03:27 UTC skipped: [svrksmaster]
15:03:27 UTC [JoinNodesModule] Join worker node
15:03:27 UTC skipped: [svrksmaster]
15:03:27 UTC [JoinNodesModule] Copy admin.conf to ~/.kube/config
15:03:27 UTC skipped: [svrksmaster]
15:03:27 UTC [JoinNodesModule] Remove master taint
15:03:27 UTC skipped: [svrksmaster]
15:03:27 UTC [JoinNodesModule] Add worker label to master
15:03:27 UTC skipped: [svrksmaster]
15:03:27 UTC [JoinNodesModule] Synchronize kube config to worker
15:03:27 UTC skipped: [svrksmaster]
15:03:27 UTC [JoinNodesModule] Add worker label to worker
15:03:27 UTC skipped: [svrksmaster]
15:03:27 UTC [DeployNetworkPluginModule] Generate calico
15:03:28 UTC success: [svrksmaster]
15:03:28 UTC [DeployNetworkPluginModule] Deploy calico
15:03:28 UTC stdout: [svrksmaster]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
15:03:28 UTC success: [svrksmaster]
15:03:28 UTC [ConfigureKubernetesModule] Configure kubernetes
15:03:28 UTC success: [svrksmaster]
15:03:28 UTC [ChownModule] Chown user $HOME/.kube dir
15:03:28 UTC success: [svrksmaster]
15:03:28 UTC [AutoRenewCertsModule] Generate k8s certs renew script
15:03:28 UTC success: [svrksmaster]
15:03:28 UTC [AutoRenewCertsModule] Generate k8s certs renew service
15:03:28 UTC success: [svrksmaster]
15:03:28 UTC [AutoRenewCertsModule] Generate k8s certs renew timer
15:03:29 UTC success: [svrksmaster]
15:03:29 UTC [AutoRenewCertsModule] Enable k8s certs renew service
15:03:29 UTC success: [svrksmaster]
15:03:29 UTC [SaveKubeConfigModule] Save kube config as a configmap
15:03:29 UTC success: [LocalHost]
15:03:29 UTC [AddonsModule] Install addons
15:03:29 UTC success: [LocalHost]
15:03:29 UTC Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.
Please check the result using the command:
kubectl get pod -A
kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-84897d7cdf-sw2gv   1/1     Running   0          98s
kube-system   calico-node-n4pnx                          1/1     Running   0          98s
kube-system   coredns-b7c47bcdc-2tnch                    1/1     Running   0          98s
kube-system   coredns-b7c47bcdc-559kv                    1/1     Running   0          98s
kube-system   kube-apiserver-svrksmaster                 1/1     Running   0          113s
kube-system   kube-controller-manager-svrksmaster        1/1     Running   0          110s
kube-system   kube-proxy-jssw6                           1/1     Running   0          98s
kube-system   kube-scheduler-svrksmaster                 1/1     Running   0          110s
kube-system   nodelocaldns-6qs4n                         1/1     Running   0          98s
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
        template was:
                {.items[0].metadata.name}
        object given to jsonpath engine was:
                map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}
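If I read that jsonpath error correctly, the selector 'app in (ks-install, ks-installer)' matched no pods at all, and the kubectl get pod -A output above likewise shows only kube-system pods, so nothing KubeSphere-related seems to have been deployed. The checks below are my own guesses (not from the article) for confirming whether anything was created in the kubesphere-system namespace:

kubectl get ns kubesphere-system
kubectl get pods -n kubesphere-system

Did I miss a step that actually deploys the ks-installer, or is there something else I need to run after the cluster comes up?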