
podCIDR is not respected: invalid IP allocated to a pod on the worker node

Open avoidik opened this issue 3 years ago • 8 comments

Expected Behavior

flannel should allocate pod IP addresses from the correct CIDR range (the node's podCIDR) on the worker node

Current Behavior

flannel has been allocating pod IP addresses from the wrong CIDR range on the worker node.

There are two pods running. curllatest runs on the worker node, which has the following podCIDR:

$ kubectl describe node k8s-worker | grep CIDR
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24

while the master node uses the following podCIDR:

$ kubectl describe node k8s-master | grep CIDR
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24

Checking the allocated IPs:

$ kubectl get pods -A -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
default       curllatest                           1/1     Running   0          5m42s   10.244.0.2       k8s-worker   <none>           <none>
kube-system   coredns-74ff55c5b-snpb5              1/1     Running   0          25m     10.244.0.32      k8s-master   <none>           <none>

The curllatest pod is using IP 10.244.0.2, which belongs to the master node's range (10.244.0.0/24), even though the pod is running on the worker node.
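The mismatch can be checked mechanically. A minimal sketch using the addresses reported above (a plain string-prefix match suffices here only because both podCIDRs are /24s):

```shell
POD_IP=10.244.0.2       # IP assigned to the curllatest pod
WORKER_PREFIX=10.244.1. # worker podCIDR 10.244.1.0/24, written as a /24 prefix
case "$POD_IP" in
  "$WORKER_PREFIX"*) echo "IP is inside the worker podCIDR" ;;
  *)                 echo "IP is OUTSIDE the worker podCIDR" ;;
esac
# → IP is OUTSIDE the worker podCIDR
```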

Possible Solution

N/A

Steps to Reproduce (for bugs)

Nothing special: a standard deployment using kubeadm init / kubeadm join with the default --pod-network-cidr and --service-cidr parameters; the kube-flannel-ds DaemonSet runs with the --iface=eth1 flag.

Context

  • communication between a host and a pod on another node is broken
  • pod-to-pod communication between pods on different nodes is broken

Your Environment

  • Flannel version: v0.14.0-rc1
  • Backend used (e.g. vxlan or udp): vxlan
  • Etcd version: 3.4.13-0
  • Kubernetes version (if used): 1.20.6
  • Operating System and version: Ubuntu 20.04.1 LTS
  • Link to your project (optional):

I'm using containerd v1.5.1 and cri-containerd-cni v1.5.1, installed per https://github.com/containerd/containerd/blob/master/docs/cri/installation.md

Master node interfaces
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:14:86:db brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 83220sec preferred_lft 83220sec
    inet6 fe80::a00:27ff:fe14:86db/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:b0:71:c7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.101/24 brd 192.168.50.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb0:71c7/64 scope link
       valid_lft forever preferred_lft forever
36: cni0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 66:63:e6:2d:e8:da brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/16 brd 10.244.255.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::6463:e6ff:fe2d:e8da/64 scope link
       valid_lft forever preferred_lft forever
38: vethfa76d9cf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
    link/ether 92:6f:a9:24:4e:ae brd ff:ff:ff:ff:ff:ff link-netns cni-ee285212-858d-6aa2-2c54-2bea2bf5dab4
    inet6 fe80::906f:a9ff:fe24:4eae/64 scope link
       valid_lft forever preferred_lft forever
39: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 8a:5a:74:99:af:7d brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 brd 10.244.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::885a:74ff:fe99:af7d/64 scope link
       valid_lft forever preferred_lft forever
Master node iptables
$ iptables-save
# Generated by iptables-save v1.8.4 on Fri May 14 22:28:49 2021
*mangle
:PREROUTING ACCEPT [475556:103133992]
:INPUT ACCEPT [475550:103133226]
:FORWARD ACCEPT [6:766]
:OUTPUT ACCEPT [474094:83442896]
:POSTROUTING ACCEPT [474100:83443662]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Fri May 14 22:28:49 2021
# Generated by iptables-save v1.8.4 on Fri May 14 22:28:49 2021
*filter
:INPUT ACCEPT [296597:48965834]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [296917:50414907]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Fri May 14 22:28:49 2021
# Generated by iptables-save v1.8.4 on Fri May 14 22:28:49 2021
*nat
:PREROUTING ACCEPT [29:2215]
:INPUT ACCEPT [29:2215]
:OUTPUT ACCEPT [3292:197753]
:POSTROUTING ACCEPT [3292:197753]
:CNI-251bf99ad7a72a0483055bfb - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-4LVENGRQPIZXECUW - [0:0]
:KUBE-SEP-MMQX5WT32FKRJIXH - [0:0]
:KUBE-SEP-PD7ZQ5AD4WY5POF5 - [0:0]
:KUBE-SEP-QBKQZR66AFJUHFSU - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.244.0.32/32 -m comment --comment "name: \"containerd-net\" id: \"b2ad4054262129c9ceeb2e6ae722da44b782b60db461b760e5b5324aec1af256\"" -j CNI-251bf99ad7a72a0483055bfb
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE --random-fully
-A CNI-251bf99ad7a72a0483055bfb -d 10.244.0.0/16 -m comment --comment "name: \"containerd-net\" id: \"b2ad4054262129c9ceeb2e6ae722da44b782b60db461b760e5b5324aec1af256\"" -j ACCEPT
-A CNI-251bf99ad7a72a0483055bfb ! -d 224.0.0.0/4 -m comment --comment "name: \"containerd-net\" id: \"b2ad4054262129c9ceeb2e6ae722da44b782b60db461b760e5b5324aec1af256\"" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-4LVENGRQPIZXECUW -s 192.168.50.101/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-4LVENGRQPIZXECUW -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.50.101:6443
-A KUBE-SEP-MMQX5WT32FKRJIXH -s 10.244.0.32/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-MMQX5WT32FKRJIXH -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.32:9153
-A KUBE-SEP-PD7ZQ5AD4WY5POF5 -s 10.244.0.32/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-PD7ZQ5AD4WY5POF5 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.32:53
-A KUBE-SEP-QBKQZR66AFJUHFSU -s 10.244.0.32/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-QBKQZR66AFJUHFSU -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.32:53
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-QBKQZR66AFJUHFSU
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-MMQX5WT32FKRJIXH
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-4LVENGRQPIZXECUW
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-PD7ZQ5AD4WY5POF5
COMMIT
# Completed on Fri May 14 22:28:49 2021
Worker node interfaces
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:14:86:db brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 84547sec preferred_lft 84547sec
    inet6 fe80::a00:27ff:fe14:86db/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:e6:bf:a1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.102/24 brd 192.168.50.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fee6:bfa1/64 scope link
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 16:0e:6f:d4:f2:9a brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 brd 10.244.1.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::140e:6fff:fed4:f29a/64 scope link
       valid_lft forever preferred_lft forever
7: cni0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 92:b1:95:9c:13:81 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/16 brd 10.244.255.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::90b1:95ff:fe9c:1381/64 scope link
       valid_lft forever preferred_lft forever
8: vethafaaa495@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
    link/ether 06:f7:b5:93:a2:5f brd ff:ff:ff:ff:ff:ff link-netns cni-9182760d-5803-784d-d716-5cd4c6810200
    inet6 fe80::4f7:b5ff:fe93:a25f/64 scope link
       valid_lft forever preferred_lft forever
Worker node iptables
$ sudo iptables-save
# Generated by iptables-save v1.8.4 on Fri May 14 22:31:32 2021
*mangle
:PREROUTING ACCEPT [8669:26128075]
:INPUT ACCEPT [8669:26128075]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [7110:609025]
:POSTROUTING ACCEPT [7110:609025]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Fri May 14 22:31:32 2021
# Generated by iptables-save v1.8.4 on Fri May 14 22:31:32 2021
*filter
:INPUT ACCEPT [5383:4641784]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [4725:484469]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Fri May 14 22:31:32 2021
# Generated by iptables-save v1.8.4 on Fri May 14 22:31:32 2021
*nat
:PREROUTING ACCEPT [4:562]
:INPUT ACCEPT [4:562]
:OUTPUT ACCEPT [2:100]
:POSTROUTING ACCEPT [2:100]
:CNI-c4efec0f8668ea8406c49342 - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-4LVENGRQPIZXECUW - [0:0]
:KUBE-SEP-MMQX5WT32FKRJIXH - [0:0]
:KUBE-SEP-PD7ZQ5AD4WY5POF5 - [0:0]
:KUBE-SEP-QBKQZR66AFJUHFSU - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.1.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE --random-fully
-A POSTROUTING -s 10.244.0.2/32 -m comment --comment "name: \"containerd-net\" id: \"1976b956900d26271af75e10b50990c651340b3e503506c4455435c64704988d\"" -j CNI-c4efec0f8668ea8406c49342
-A CNI-c4efec0f8668ea8406c49342 -d 10.244.0.0/16 -m comment --comment "name: \"containerd-net\" id: \"1976b956900d26271af75e10b50990c651340b3e503506c4455435c64704988d\"" -j ACCEPT
-A CNI-c4efec0f8668ea8406c49342 ! -d 224.0.0.0/4 -m comment --comment "name: \"containerd-net\" id: \"1976b956900d26271af75e10b50990c651340b3e503506c4455435c64704988d\"" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-4LVENGRQPIZXECUW -s 192.168.50.101/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-4LVENGRQPIZXECUW -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.50.101:6443
-A KUBE-SEP-MMQX5WT32FKRJIXH -s 10.244.0.32/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-MMQX5WT32FKRJIXH -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.32:9153
-A KUBE-SEP-PD7ZQ5AD4WY5POF5 -s 10.244.0.32/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-PD7ZQ5AD4WY5POF5 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.32:53
-A KUBE-SEP-QBKQZR66AFJUHFSU -s 10.244.0.32/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-QBKQZR66AFJUHFSU -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.32:53
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-QBKQZR66AFJUHFSU
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-MMQX5WT32FKRJIXH
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-4LVENGRQPIZXECUW
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-PD7ZQ5AD4WY5POF5
COMMIT
# Completed on Fri May 14 22:31:32 2021
Master node subnet.env
$ kubectl exec -it -n kube-system kube-flannel-ds-wcpk5 -- cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Worker node subnet.env
$ kubectl exec -it -n kube-system kube-flannel-ds-fx5s9 -- cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
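As an aside, subnet.env is a plain shell-style environment file, so the per-node subnet can be inspected by sourcing it. A minimal sketch using the worker node's values from above (written to a temporary file here instead of reading /run/flannel/subnet.env):

```shell
# recreate the worker's subnet.env in a temp file (values copied from above)
TMP=$(mktemp)
cat > "$TMP" <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
. "$TMP"                       # source it like a shell env file
echo "node subnet: $FLANNEL_SUBNET"
# → node subnet: 10.244.1.1/24
rm -f "$TMP"
```

Note that the worker's FLANNEL_SUBNET is correct here (10.244.1.1/24), which points at the CNI configuration on disk, not flannel's subnet lease, as the problem.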

avoidik avatar May 14 '21 22:05 avoidik

Sometimes conflicting (duplicate) IPs are even possible:

NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
default       curllatest                           1/1     Running   0          25s   10.244.0.2       k8s-worker   <none>           <none>
kube-system   coredns-74ff55c5b-7hbd2              1/1     Running   0          41m   10.244.0.2       k8s-master   <none>           <none>

avoidik avatar May 15 '21 11:05 avoidik

I was using the containerd CRI; the solution is to delete containerd's default CNI configuration file:

rm -f /etc/cni/net.d/10-containerd-net.conflist

or avoid extracting it in the first place:

tar --no-overwrite-dir -C / -xzf cri-containerd-cni-1.5.1-linux-amd64.tar.gz --exclude='etc/cni/net.d/10-containerd-net.conflist'
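A likely reason the containerd file takes precedence (assuming flannel writes its config as 10-flannel.conflist, which is the usual name): container runtimes pick the lexicographically first configuration file in /etc/cni/net.d, and the containerd default sorts first:

```shell
# which of the two config filenames sorts first? the runtime uses that one
printf '%s\n' 10-flannel.conflist 10-containerd-net.conflist | sort | head -n 1
# → 10-containerd-net.conflist  ('c' sorts before 'f')
```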

avoidik avatar May 16 '21 13:05 avoidik

Same here with CRI-O, Flannel and host-local IPAM plugin.

Is there any suggestion for this?

oglok avatar Jun 22 '21 10:06 oglok

@oglok you can find the solution in my previous comment

avoidik avatar Jun 22 '21 11:06 avoidik

@avoidik that's a workaround specific to containerd. I'm using CRI-O.

oglok avatar Jun 22 '21 12:06 oglok

@oglok you may try troubleshooting it the same way: check the /etc/cni/net.d directory for conflicting configuration files and keep only the one you need
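A sketch of that cleanup, run against a throwaway directory here rather than the real /etc/cni/net.d (the containerd filename comes from this thread; 10-flannel.conflist is assumed to be the flannel config name):

```shell
CNI_DIR=$(mktemp -d)   # stand-in for /etc/cni/net.d
touch "$CNI_DIR/10-containerd-net.conflist" "$CNI_DIR/10-flannel.conflist"
# keep only the flannel config, mimicking the workaround above
find "$CNI_DIR" -name '*.conflist' ! -name '10-flannel.conflist' -delete
ls "$CNI_DIR"
# → 10-flannel.conflist
rm -rf "$CNI_DIR"
```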

avoidik avatar Jun 22 '21 12:06 avoidik

Using a more recent CNI spec (0.4.0) seems to overcome the existing incompatibility.

oglok avatar Jun 22 '21 14:06 oglok

Using a more recent CNI spec (0.4.0) seems to overcome the existing incompatibility.

Run these commands and paste the results:

ls -l /var/lib/cni/
ls -l /var/lib/cni/networks/
ls -l /var/lib/cni/flannel/

zhangguanzhang avatar Jul 07 '21 08:07 zhangguanzhang

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Jan 25 '23 21:01 stale[bot]