kubekey
Unable to connect to the server: x509: certificate signed by unknown authority

What version of KubeKey has the issue?
0.0", GitCommit:"ff9d30b7", GitTreeState:"", GoVersion:"go1.17.7"}

Describe the bug
Can't create cluster: Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
What is your OS environment?
Ubuntu 20.04
KubeKey config file

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master-1, address: 192.168.1.30, internalAddress: 192.168.1.30, port: 22, user: bbg, password: "123"}
  - {name: k8s-master-2, address: 192.168.1.31, internalAddress: 192.168.1.31, port: 22, user: bbg, password: "123"}
  - {name: k8s-worker-1, address: 192.168.1.40, internalAddress: 192.168.1.40, port: 22, user: bbg, password: "123"}
  - {name: k8s-worker-2, address: 192.168.1.41, internalAddress: 192.168.1.41, port: 22, user: bbg, password: "123"}
  roleGroups:
    etcd:
    - k8s-master-1
    - k8s-master-2
    control-plane:
    - k8s-master-1
    - k8s-master-2
    worker:
    - k8s-worker-1
    - k8s-worker-2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: 192.168.1.23
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    plainHTTP: false
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
```
What happened
Hello, I am new here. I am trying to create a cluster with KubeSphere, using 6 VirtualBox VMs: 2 for the load balancer, 2 for masters, and 2 for workers. I have installed Docker 20.10.17 and kubeadm=1.22.0-00, kubelet=1.22.0-00, kubectl=1.22.0-00.
Relevant log output

```
20:28:05 WIB [KubernetesStatusModule] Get kubernetes cluster status
20:28:05 WIB stdout: [k8s-master-1]
v1.21.5
20:28:06 WIB stdout: [k8s-master-1]
k8s-master-1 v1.21.5 [map[address:192.168.1.30 type:InternalIP] map[address:k8s-master-1 type:Hostname]]
k8s-master-2 v1.21.5 [map[address:192.168.1.31 type:InternalIP] map[address:k8s-master-2 type:Hostname]]
20:28:10 WIB stdout: [k8s-master-1]
I0623 13:28:08.849947 13944 version.go:254] remote version is much newer: v1.24.2; falling back to: stable-1.21
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
716d20d0222b9b85132d7f8b1c430d7f06e0e458f0b015e252e4033643cd90d7
20:28:10 WIB stdout: [k8s-master-1]
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
20:28:10 WIB message: [k8s-master-1]
patch kubeadm secret failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl patch -n kube-system secret kubeadm-certs -p '{\"data\": {\"external-etcd-ca.crt\": \"\"}}'"
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"): Process exited with status 1
```
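The error means the CA that kubectl trusts (from its kubeconfig) did not sign the certificate the API server presents, which typically happens when certificates from an earlier install are still around. A minimal, self-contained sketch of that failure mode, using only openssl with illustrative file names (no cluster required):

```shell
# A client that trusts one CA rejects a server certificate signed by a
# different CA -- the same situation kubectl reports as "certificate
# signed by unknown authority" when a stale cluster CA is still in play.
set -e
work=$(mktemp -d) && cd "$work"

# Two independent CAs, both with CN=kubernetes (the name matches, the keys do not).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout old-ca.key -out old-ca.crt -days 1 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout new-ca.key -out new-ca.crt -days 1 2>/dev/null

# An apiserver-style certificate signed by the *new* CA.
openssl req -newkey rsa:2048 -nodes -subj "/CN=kube-apiserver" \
  -keyout server.key -out server.csr 2>/dev/null
openssl x509 -req -in server.csr -CA new-ca.crt -CAkey new-ca.key \
  -CAcreateserial -out server.crt -days 1 2>/dev/null

ok=$(openssl verify -CAfile new-ca.crt server.crt)        # the matching CA accepts it
echo "$ok"
openssl verify -CAfile old-ca.crt server.crt 2>&1 || true # the stale CA rejects it
```

The first verify prints `server.crt: OK`; the second fails, mirroring the kubectl error above.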
Additional information
No response
Hi @adityabbg
It looks like your environment had k8s installed before. Try `./kk delete cluster -f config.yaml` to clean the environment, and check that there is no `/etc/kubernetes` directory left in your filesystem.
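A quick way to check each node for leftovers after the delete; the directory list below is the usual kubeadm/etcd/kubelet state locations (an assumption, adjust to your layout), and anything reported "stale" should be removed before re-creating the cluster:

```shell
# Report which of the usual Kubernetes state directories still exist.
for d in /etc/kubernetes /var/lib/etcd /var/lib/kubelet; do
  if [ -e "$d" ]; then
    echo "stale: $d"
  else
    echo "clean: $d"
  fi
done
```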
I have some errors again.

```
20:34:51 WIB success: [k8s-master-1]
20:34:51 WIB success: [k8s-master-2]
20:34:51 WIB success: [k8s-worker-1]
20:34:51 WIB success: [k8s-worker-2]
20:34:51 WIB [ConfigureOSModule] configure the ntp server for each node
20:34:51 WIB skipped: [k8s-worker-2]
20:34:51 WIB skipped: [k8s-master-2]
20:34:51 WIB skipped: [k8s-worker-1]
20:34:51 WIB skipped: [k8s-master-1]
20:34:51 WIB [KubernetesStatusModule] Get kubernetes cluster status
20:34:51 WIB stdout: [k8s-master-2]
v1.21.5
20:35:20 WIB stdout: [k8s-master-2]
Error from server: etcdserver: request timed out
20:35:20 WIB message: [k8s-master-2]
get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
Error from server: etcdserver: request timed out: Process exited with status 1
20:35:20 WIB retry: [k8s-master-2]
20:35:25 WIB stdout: [k8s-master-2]
v1.21.5
20:35:50 WIB stdout: [k8s-master-2]
Error from server: etcdserver: request timed out
20:35:50 WIB message: [k8s-master-2]
get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
Error from server: etcdserver: request timed out: Process exited with status 1
20:35:50 WIB retry: [k8s-master-2]
20:35:55 WIB stdout: [k8s-master-2]
v1.21.5
20:36:36 WIB stdout: [k8s-master-2]
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)
20:36:36 WIB message: [k8s-master-2]
get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes): Process exited with status 1
20:36:36 WIB success: [k8s-master-1]
20:36:36 WIB failed: [k8s-master-2]
error: Pipeline[CreateClusterPipeline] execute failed: Module[KubernetesStatusModule] exec failed: failed: [k8s-master-2] [GetClusterStatus] exec failed after 3 retires: get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses" Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes): Process exited with status 1
```
@adityabbg Your config pins Kubernetes v1.21.5, but the nodes have kubeadm/kubelet/kubectl 1.22.0 installed. Use the latest 1.21 packages and retry the install.
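The skew can be spotted by comparing minor versions. A pure-shell sketch; the two version strings are hardcoded from this issue for illustration (in practice substitute the actual node version, e.g. the output of `kubeadm version -o short`):

```shell
# Compare the minor version of the config's kubernetes.version against the
# version installed on the nodes; kubeadm is only supported within one
# minor version of the cluster it manages.
cfg="v1.21.5"    # from the KubeKey config
node="v1.22.0"   # installed kubeadm/kubelet/kubectl

cfg_minor=${cfg#v*.};   cfg_minor=${cfg_minor%%.*}    # -> 21
node_minor=${node#v*.}; node_minor=${node_minor%%.*}  # -> 22

if [ "$cfg_minor" != "$node_minor" ]; then
  echo "version skew: config wants 1.$cfg_minor, nodes run 1.$node_minor"
fi
```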