
node flannel init fails with CrashLoopBackOff ...

Open gspgsp opened this issue 3 years ago • 9 comments

How do I solve this? Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-g66xp": dial tcp 10.96.0.1:443: connect: no route to host

Note: the master and the node are not in the same network (they have different internal IPs).

Some output:

[root@k8smaster local]# kubectl get nodes
NAME             STATUS   ROLES    AGE    VERSION
k8smaster        Ready    master   6d6h   v1.18.6
vm-12-9-centos   Ready    <none>   6d5h   v1.18.6


[root@k8smaster local]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                READY   STATUS             RESTARTS   AGE    IP           NODE             NOMINATED NODE   READINESS GATES
kube-system   coredns-7ff77c879f-cpnj9            1/1     Running            0          6d6h   10.244.0.2   k8smaster        <none>           <none>
kube-system   coredns-7ff77c879f-fd4g8            1/1     Running            0          6d6h   10.244.0.3   k8smaster        <none>           <none>
kube-system   etcd-k8smaster                      1/1     Running            0          6d6h   172.17.0.7   k8smaster        <none>           <none>
kube-system   kube-apiserver-k8smaster            1/1     Running            0          6d6h   172.17.0.7   k8smaster        <none>           <none>
kube-system   kube-controller-manager-k8smaster   1/1     Running            0          6d6h   172.17.0.7   k8smaster        <none>           <none>
kube-system   kube-flannel-ds-4djp5               1/1     Running            0          6d6h   172.17.0.7   k8smaster        <none>           <none>
kube-system   kube-flannel-ds-g66xp               0/1     CrashLoopBackOff   6          6d5h   10.0.12.9    vm-12-9-centos   <none>           <none>
kube-system   kube-proxy-nn7bg                    1/1     Running            0          6d6h   172.17.0.7   k8smaster        <none>           <none>
kube-system   kube-proxy-sgmz2                    1/1     Running            0          6d5h   10.0.12.9    vm-12-9-centos   <none>           <none>
kube-system   kube-scheduler-k8smaster            1/1     Running            0          6d6h   172.17.0.7   k8smaster        <none>           <none>

[root@k8smaster local]# kubectl logs -n kube-system  kube-flannel-ds-g66xp
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I0710 14:45:33.794442       1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0710 14:45:33.794539       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
E0710 14:45:36.801296       1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-g66xp': Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-g66xp": dial tcp 10.96.0.1:443: connect: no route to host

[root@k8smaster local]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}    

gspgsp avatar Jul 10 '22 15:07 gspgsp

Hi, can you try to curl 10.96.0.1:443? If not, can you show the output of

iptables -t nat -S
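
A sketch of what to run on the failing node itself (vm-12-9-centos), since kube-proxy programs the 10.96.0.1 DNAT per node; -k is there only because we care about TCP reachability, not the certificate:

# on vm-12-9-centos
curl -k --connect-timeout 5 https://10.96.0.1:443/version
# and the real apiserver endpoint behind the service (see kubectl get endpoints kubernetes)
curl -k --connect-timeout 5 https://172.17.0.7:6443/version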

Pseudow avatar Jul 10 '22 20:07 Pseudow

Hi, can you try to curl 10.96.0.1:443? If not, can you show the output of

iptables -t nat -S

Thank you!

The curl result is:

[root@k8smaster ~]# curl https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-g66xp
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.
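
(Note that the certificate error above means the TCP connection from the master actually succeeded; only TLS verification failed. A cleaner check from the master, assuming the standard kubeadm CA location, would be:)

curl --cacert /etc/kubernetes/pki/ca.crt https://10.96.0.1:443/version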

The master iptables output is:

[root@k8smaster ~]# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-DROP
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-6E7XQMQ4RAYOWTTM
-N KUBE-SEP-H5RP2HMEF6RPTEP4
-N KUBE-SEP-IT2ZTR26TO4XFPTO
-N KUBE-SEP-N4G2XR5TDX7PQE7P
-N KUBE-SEP-YIL6JZP7A3QYXJU2
-N KUBE-SEP-ZP3FB6NMPNCO4VBJ
-N KUBE-SEP-ZXMNUKOKXUTL2MK2
-N KUBE-SERVICES
-N KUBE-SVC-ERIFXISQEP7F7OF4
-N KUBE-SVC-JD5MR3NA4I4DYORP
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-TCOU7JCQXEZGVUNU
-N cali-OUTPUT
-N cali-POSTROUTING
-N cali-PREROUTING
-N cali-fip-dnat
-N cali-fip-snat
-N cali-nat-outgoing
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -d 10.0.12.9/32 -j DNAT --to-destination 124.221.202.241
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SEP-H5RP2HMEF6RPTEP4 -s 172.17.0.7/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-H5RP2HMEF6RPTEP4 -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 172.17.0.7:6443
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-N4G2XR5TDX7PQE7P -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-N4G2XR5TDX7PQE7P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.2:9153
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.3:9153
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-ZXMNUKOKXUTL2MK2
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N4G2XR5TDX7PQE7P
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-ZP3FB6NMPNCO4VBJ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-H5RP2HMEF6RPTEP4
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-6E7XQMQ4RAYOWTTM

The node iptables output is:

[root@VM-12-9-centos ~]# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-DROP
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-6E7XQMQ4RAYOWTTM
-N KUBE-SEP-H5RP2HMEF6RPTEP4
-N KUBE-SEP-IT2ZTR26TO4XFPTO
-N KUBE-SEP-N4G2XR5TDX7PQE7P
-N KUBE-SEP-YIL6JZP7A3QYXJU2
-N KUBE-SEP-ZP3FB6NMPNCO4VBJ
-N KUBE-SEP-ZXMNUKOKXUTL2MK2
-N KUBE-SERVICES
-N KUBE-SVC-ERIFXISQEP7F7OF4
-N KUBE-SVC-JD5MR3NA4I4DYORP
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-TCOU7JCQXEZGVUNU
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -d 172.17.0.7/32 -j DNAT --to-destination 101.35.171.107
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SEP-H5RP2HMEF6RPTEP4 -s 172.17.0.7/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-H5RP2HMEF6RPTEP4 -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 172.17.0.7:6443
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-N4G2XR5TDX7PQE7P -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-N4G2XR5TDX7PQE7P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.2:9153
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.3:9153
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-ZXMNUKOKXUTL2MK2
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N4G2XR5TDX7PQE7P
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-ZP3FB6NMPNCO4VBJ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-H5RP2HMEF6RPTEP4
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-6E7XQMQ4RAYOWTTM

gspgsp avatar Jul 12 '22 08:07 gspgsp

Are you using a firewall that blocks port 443? I see some cali-* chains on the master; was Calico configured as the CNI before?
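
A quick sketch for checking both: whether anything still jumps into the leftover cali-* chains, and whether any filter rule drops or rejects traffic:

iptables -t nat -S | grep -- '-j cali-'
iptables -S | grep -E 'DROP|REJECT'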

rbrtbnfgl avatar Jul 12 '22 08:07 rbrtbnfgl

Are you using a firewall that blocks port 443? I see some cali-* chains on the master; was Calico configured as the CNI before?

The firewall is disabled:

[root@k8smaster ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Yes, I remember that the first time I set up the k8s environment, I used Calico:
2469  2022-07-02 23:57:22 kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

but it always said:
networkPlugin cni failed to set up pod \"k8s-demo-674b8f6dd5-xpg78_default\" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"

Then I ran these commands, just wanting to delete Calico:
2491  2022-07-03 00:06:26 kubectl delete -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

kubeadm reset 
...
2492  2022-07-03 00:06:42 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
...

My ip addr output is:
[root@k8smaster ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:c6:f4:ab brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.7/20 brd 172.17.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fec6:f4ab/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ad:c9:c1:ec brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:adff:fec9:c1ec/64 scope link 
       valid_lft forever preferred_lft forever
2355: veth05055e4@if2354: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether c2:4f:8e:8a:47:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::c04f:8eff:fe8a:47f3/64 scope link 
       valid_lft forever preferred_lft forever
2359: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 82:b5:4b:5c:85:d2 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::80b5:4bff:fe5c:85d2/64 scope link 
       valid_lft forever preferred_lft forever
2360: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:f6:48:10:0e:b5 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::cf6:48ff:fe10:eb5/64 scope link 
       valid_lft forever preferred_lft forever
2365: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.16.128/32 brd 192.168.16.128 scope global tunl0
       valid_lft forever preferred_lft forever
2370: vethd8b8ba02@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 6a:6a:9c:4c:f7:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::686a:9cff:fe4c:f7a9/64 scope link 
       valid_lft forever preferred_lft forever
2371: veth5893b3c2@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 7a:5b:07:21:b9:2c brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::785b:7ff:fe21:b92c/64 scope link 
       valid_lft forever preferred_lft forever

What should I do next? Waiting...
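
(The tunl0 device and the cali-* nat chains above are leftovers from the earlier Calico install. A cautious cleanup sketch for the master, only after confirming nothing else uses them:)

# list the orphaned Calico chains, then flush and delete them
iptables -t nat -S | awk '/^-N cali-/{print $2}'
for c in $(iptables -t nat -S | awk '/^-N cali-/{print $2}'); do iptables -t nat -F "$c"; iptables -t nat -X "$c"; done
# tunl0 comes from the ipip kernel module Calico loaded; unloading it removes the device
modprobe -r ipip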

gspgsp avatar Jul 13 '22 03:07 gspgsp

Are you using kubeadm to configure the cluster? How did you run the init? Did you specify --pod-network-cidr and --service-cidr? Is --pod-network-cidr the same as the one configured in net-conf.json?
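
A sketch for comparing the two (the ConfigMap names assume a stock kubeadm install and the stock kube-flannel.yml):

# pod CIDR kubeadm was initialized with
kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i podSubnet
# pod CIDR flannel was deployed with
kubectl -n kube-system get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'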

rbrtbnfgl avatar Jul 13 '22 09:07 rbrtbnfgl

Are you using kubeadm to configure the cluster? How did you run the init? Did you specify --pod-network-cidr and --service-cidr? Is --pod-network-cidr the same as the one configured in net-conf.json?

I used kubeadm to configure the cluster.

The init command:
kubeadm init --kubernetes-version=1.18.6 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16 \
--upload-certs | tee kubeadm-init.log


--pod-network-cidr is the same as the one configured in kube-flannel.yml.
I don't know how to set --service-cidr; the documentation says it will use a default value if it is not set.
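
(If it helps, the value the apiserver actually ended up with can be read from its static pod manifest on the master; the path below is the standard kubeadm location:)

grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml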

gspgsp avatar Jul 13 '22 09:07 gspgsp

Yes, the service CIDR should be 10.96.0.0, but I don't think the issue is related to the service IPs. In your pod list I see that some pods have an IP on the 10.244.0.0 network and others on the 10.0.12.0 network. The master has the right config; the agent should have 10.244.1.0 as the network for its pods, and I don't know why it gets the wrong network. Could you check the files in /etc/cni/net.d on both nodes?
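
A quick way to see which pod CIDR each node was actually assigned (a sketch; with this setup the master should show 10.244.0.0/24 and the worker 10.244.1.0/24):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'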

rbrtbnfgl avatar Jul 13 '22 10:07 rbrtbnfgl

Yes, the service CIDR should be 10.96.0.0, but I don't think the issue is related to the service IPs. In your pod list I see that some pods have an IP on the 10.244.0.0 network and others on the 10.0.12.0 network. The master has the right config; the agent should have 10.244.1.0 as the network for its pods, and I don't know why it gets the wrong network. Could you check the files in /etc/cni/net.d on both nodes?

OK, I set --service-cidr=10.96.0.0/16:

kubeadm init --kubernetes-version=1.18.6 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 \
--upload-certs | tee kubeadm-init.log

but the result is:

[root@k8smaster .kube]# kubectl logs -n kube-system kube-flannel-ds-fdntg
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I0713 10:33:00.700378       1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0713 10:33:00.700452       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
E0713 10:33:03.797340       1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-fdntg': Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-fdntg": dial tcp 10.96.0.1:443: connect: no route to host

It still doesn't seem to work.

[root@k8smaster .kube]# cd /etc/cni/net.d
[root@k8smaster net.d]# ll
total 4
-rw-r--r-- 1 root root 292 Jul 13 18:31 10-flannel.conflist

[root@VM-12-9-centos cni]# cd /etc/cni/net.d
[root@VM-12-9-centos net.d]# ll
total 4
-rw-r--r-- 1 root root 292 Jul 13 18:31 10-flannel.conflist

Both nodes have the 10-flannel.conflist file.

The content is:

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

There is no difference between them.

gspgsp avatar Jul 13 '22 10:07 gspgsp

Hi. Could you share the ip addr and ip route output of the node that isn't working? Maybe there is a conflict between its IP address and the pod CIDR.
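
For completeness, a sketch of what to collect on vm-12-9-centos:

ip addr show
ip route show
# where packets to the service VIP and to the apiserver endpoint would actually be routed
ip route get 10.96.0.1
ip route get 172.17.0.7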

rbrtbnfgl avatar Jul 14 '22 07:07 rbrtbnfgl

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Jan 25 '23 20:01 stale[bot]