Node stops resolving hostnames of services on another node
What you expected to happen?
Things seemed to be running fine. We have a two-server cluster with a database and an API service on the main node (the control-plane node), and the other server runs clients that routinely connect to and query the API.
This had been running fine for a while until suddenly I started seeing the client node fail to resolve the hostname of the API server (running on the main node).
What happened?
- At 2021-10-13 02:16:54,611 UTC I see my clients fail to resolve the API server's hostname
- The weave-net pod shows the following leading up to that moment:
Note: `assuredashboard` is the name of the main node; `node01` is the name of the node running the clients that connect to the main node.
{"log":"INFO: 2021/10/13 02:11:29.329896 Removed unreachable peer 02:c9:5e:30:ee:ba(assuredashboard)\n","stream":"stderr","time":"2021-10-13T02:11:32.245686897Z"}
{"log":"INFO: 2021/10/13 02:11:29.346403 -\u003e[192.168.60.59:6783] attempting connection\n","stream":"stderr","time":"2021-10-13T02:11:32.24569708Z"}
{"log":"INFO: 2021/10/13 02:11:33.013107 -\u003e[192.168.60.59:6783|02:c9:5e:30:ee:ba(assuredashboard)]: connection ready; using protocol version 2\n","stream":"stderr","time":"2021-10-13T02:11:33.854380002Z"}
{"log":"INFO: 2021/10/13 02:11:33.646350 overlay_switch -\u003e[02:c9:5e:30:ee:ba(assuredashboard)] using fastdp\n","stream":"stderr","time":"2021-10-13T02:11:34.444979006Z"}
{"log":"INFO: 2021/10/13 02:11:33.712870 -\u003e[192.168.60.59:6783|02:c9:5e:30:ee:ba(assuredashboard)]: connection added (new peer)\n","stream":"stderr","time":"2021-10-13T02:11:34.445025253Z"}
{"log":"INFO: 2021/10/13 02:11:35.316801 -\u003e[192.168.60.59:6783|02:c9:5e:30:ee:ba(assuredashboard)]: connection fully established\n","stream":"stderr","time":"2021-10-13T02:11:36.051679584Z"}
{"log":"INFO: 2021/10/13 02:11:35.317621 sleeve -\u003e[192.168.60.59:6783|02:c9:5e:30:ee:ba(assuredashboard)]: Effective MTU verified at 1438\n","stream":"stderr","time":"2021-10-13T02:11:36.351582929Z"}
{"log":"INFO: 2021/10/13 02:13:37.259786 -\u003e[192.168.60.59:36789] connection accepted\n","stream":"stderr","time":"2021-10-13T02:13:42.527787968Z"}
{"log":"INFO: 2021/10/13 02:13:48.548401 -\u003e[192.168.60.59:6783|02:c9:5e:30:ee:ba(assuredashboard)]: connection shutting down due to error: write tcp 192.168.60.134:35603-\u003e192.168.60.59:6783: write: connection reset by peer\n","stream":"stderr","time":"2021-10-13T02:13:49.130382471Z"}
{"log":"INFO: 2021/10/13 02:13:48.575037 -\u003e[192.168.60.59:45071] connection accepted\n","stream":"stderr","time":"2021-10-13T02:13:50.103546907Z"}
{"log":"INFO: 2021/10/13 02:13:48.653067 overlay_switch -\u003e[02:c9:5e:30:ee:ba(assuredashboard)] using sleeve\n","stream":"stderr","time":"2021-10-13T02:13:51.8798397Z"}
{"log":"INFO: 2021/10/13 02:13:48.656492 overlay_switch -\u003e[02:c9:5e:30:ee:ba(assuredashboard)] sleeve timed out waiting for UDP heartbeat\n","stream":"stderr","time":"2021-10-13T02:13:52.342380187Z"}
{"log":"INFO: 2021/10/13 02:13:48.660150 -\u003e[192.168.60.59:36789] connection shutting down due to error during handshake: write tcp 192.168.60.134:6783-\u003e192.168.60.59:36789: write: connection reset by peer\n","stream":"stderr","time":"2021-10-13T02:13:52.349468213Z"}
{"log":"INFO: 2021/10/13 02:13:48.783214 -\u003e[192.168.60.59:45071|02:c9:5e:30:ee:ba(assuredashboard)]: connection ready; using protocol version 2\n","stream":"stderr","time":"2021-10-13T02:13:52.974616011Z"}
{"log":"INFO: 2021/10/13 02:13:49.596969 -\u003e[192.168.60.59:6783|02:c9:5e:30:ee:ba(assuredashboard)]: connection deleted\n","stream":"stderr","time":"2021-10-13T02:13:57.980024623Z"}
{"log":"INFO: 2021/10/13 02:13:49.630541 overlay_switch -\u003e[02:c9:5e:30:ee:ba(assuredashboard)] using fastdp\n","stream":"stderr","time":"2021-10-13T02:14:00.638361241Z"}
{"log":"INFO: 2021/10/13 02:13:50.195243 -\u003e[192.168.60.59:6783] attempting connection\n","stream":"stderr","time":"2021-10-13T02:14:03.407894436Z"}
{"log":"INFO: 2021/10/13 02:13:50.267787 -\u003e[192.168.60.59:45071|02:c9:5e:30:ee:ba(assuredashboard)]: connection added (new peer)\n","stream":"stderr","time":"2021-10-13T02:14:13.451754189Z"}
{"log":"INFO: 2021/10/13 02:13:51.542007 -\u003e[192.168.60.59:6783|02:c9:5e:30:ee:ba(assuredashboard)]: connection ready; using protocol version 2\n","stream":"stderr","time":"2021-10-13T02:14:33.038114551Z"}
{"log":"INFO: 2021/10/13 02:13:51.736722 overlay_switch -\u003e[02:c9:5e:30:ee:ba(assuredashboard)] using fastdp\n","stream":"stderr","time":"2021-10-13T02:14:40.278002249Z"}
{"log":"INFO: 2021/10/13 02:13:51.893942 -\u003e[192.168.60.59:6783|02:c9:5e:30:ee:ba(assuredashboard)]: connection shutting down due to error: Multiple connections to 02:c9:5e:30:ee:ba(assuredashboard) added to 9a:43:13:13:10:68(node01)\n","stream":"stderr","time":"2021-10-13T02:14:47.937164375Z"}
{"log":"INFO: 2021/10/13 02:16:01.717495 overlay_switch -\u003e[02:c9:5e:30:ee:ba(assuredashboard)] sleeve timed out waiting for UDP heartbeat\n","stream":"stderr","time":"2021-10-13T02:16:02.274244197Z"}
{"log":"INFO: 2021/10/13 02:16:01.773460 -\u003e[192.168.60.59:45071|02:c9:5e:30:ee:ba(assuredashboard)]: connection shutting down due to error: no working forwarders to 02:c9:5e:30:ee:ba(assuredashboard)\n","stream":"stderr","time":"2021-10-13T02:16:02.371271934Z"}
{"log":"INFO: 2021/10/13 02:16:03.996214 -\u003e[192.168.60.59:41237] connection accepted\n","stream":"stderr","time":"2021-10-13T02:16:19.707014448Z"}
{"log":"INFO: 2021/10/13 02:16:26.773855 -\u003e[192.168.60.59:45071|02:c9:5e:30:ee:ba(assuredashboard)]: connection deleted\n","stream":"stderr","time":"2021-10-13T02:16:30.453266489Z"}
{"log":"INFO: 2021/10/13 02:16:50.754802 -\u003e[192.168.60.59:36449] connection accepted\n","stream":"stderr","time":"2021-10-13T02:17:10.269537367Z"}
{"log":"INFO: 2021/10/13 02:17:46.958415 -\u003e[192.168.60.59:33335] connection accepted\n","stream":"stderr","time":"2021-10-13T02:18:03.616205503Z"}
{"log":"INFO: 2021/10/13 02:18:46.721638 -\u003e[192.168.60.59:41237] connection shutting down due to error during handshake: write tcp 192.168.60.134:6783-\u003e192.168.60.59:41237: write: connection reset by peer\n","stream":"stderr","time":"2021-10-13T02:18:48.271303562Z"}
{"log":"INFO: 2021/10/13 02:18:46.750060 -\u003e[192.168.60.59:40673] connection accepted\n","stream":"stderr","time":"2021-10-13T02:18:48.46746282Z"}
{"log":"INFO: 2021/10/13 02:18:46.750190 -\u003e[192.168.60.59:41159] connection accepted\n","stream":"stderr","time":"2021-10-13T02:18:48.467512004Z"}
{"log":"INFO: 2021/10/13 02:18:46.750277 -\u003e[192.168.60.59:46333] connection accepted\n","stream":"stderr","time":"2021-10-13T02:18:48.467524259Z"}
{"log":"INFO: 2021/10/13 02:18:46.750340 -\u003e[192.168.60.59:39155] connection accepted\n","stream":"stderr","time":"2021-10-13T02:18:48.467534659Z"}
{"log":"INFO: 2021/10/13 02:18:46.750399 -\u003e[192.168.60.59:60515] connection accepted\n","stream":"stderr","time":"2021-10-13T02:18:48.467544701Z"}
{"log":"INFO: 2021/10/13 02:18:46.773179 Removed unreachable peer 02:c9:5e:30:ee:ba(assuredashboard)\n","stream":"stderr","time":"2021-10-13T02:18:48.467554649Z"}
{"log":"INFO: 2021/10/13 02:18:46.789740 -\u003e[192.168.60.59:46333] connection shutting down due to error during handshake: write tcp 192.168.60.134:6783-\u003e192.168.60.59:46333: write: connection reset by peer\n","stream":"stderr","time":"2021-10-13T02:18:48.467565206Z"}
{"log":"INFO: 2021/10/13 02:18:46.789853 -\u003e[192.168.60.59:36449] connection shutting down due to error during handshake: write tcp 192.168.60.134:6783-\u003e192.168.60.59:36449: write: connection reset by peer\n","stream":"stderr","time":"2021-10-13T02:18:48.467576256Z"}
{"log":"INFO: 2021/10/13 02:18:46.789970 -\u003e[192.168.60.59:33335] connection shutting down due to error during handshake: write tcp 192.168.60.134:6783-\u003e192.168.60.59:33335: write: connection reset by peer\n","stream":"stderr","time":"2021-10-13T02:18:48.467586973Z"}
How to reproduce it?
Reboot the node with the clients and wait a few days.
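A quick way I check whether resolution has broken, assuming the API is reached through the assure-svc service visible in the iptables dump below (busybox:1.28 is just a convenient image with a working nslookup):
$ kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- nslookup assure-svc.default.svc.cluster.local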
Anything else we need to know?
- Both are Dell production-grade servers
- Kubernetes was set up via YAML
- Ubuntu 20.04 LTS
Versions:
$ weave version: `Weaveworks NPC 2.8.1`
$ docker version:
Client: Docker Engine - Community
Version: 20.10.8
API version: 1.41
Go version: go1.16.6
Git commit: 3967b7d
Built: Fri Jul 30 19:54:27 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.8
API version: 1.41 (minimum version 1.12)
Go version: go1.16.6
Git commit: 75249d8
Built: Fri Jul 30 19:52:33 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.9
GitCommit: e25210fe30a0a703442421b0f60afac609f950a3
runc:
Version: 1.0.1
GitCommit: v1.0.1-0-g4144b63
docker-init:
Version: 0.19.0
GitCommit: de40ad0
$ uname -a:
Linux assuredashboard 5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Linux node01 5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ kubectl version:
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:39:34Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Logs:
weave-net pod running on node01 (client node) weave-net-2fmz7_kube-system_weave-npc-ff8953b02f1db9ff4dddaf80648fc3b9170f954818386168bdaa0519bbfc222f.log
Network:
$ ip route
default via 192.168.60.1 dev eno3 proto dhcp src 192.168.60.59 metric 100
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.158.88.0/24 dev eno2 proto kernel scope link src 192.158.88.11
192.168.60.0/22 dev eno3 proto kernel scope link src 192.168.60.59
192.168.60.1 dev eno3 proto dhcp scope link src 192.168.60.59 metric 100
$ ip -4 -o addr
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
3: eno3 inet 192.168.60.59/22 brd 192.168.63.255 scope global dynamic eno3\ valid_lft 262695sec preferred_lft 262695sec
4: eno2 inet 192.158.88.11/24 brd 192.158.88.255 scope global eno2\ valid_lft forever preferred_lft forever
6: docker0 inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\ valid_lft forever preferred_lft forever
9: weave inet 10.32.0.1/12 brd 10.47.255.255 scope global weave\ valid_lft forever preferred_lft forever
$ sudo iptables-save
# Generated by iptables-save v1.8.4 on Wed Oct 13 15:59:28 2021
*mangle
:PREROUTING ACCEPT [514158399:3421691683959]
:INPUT ACCEPT [269714368:1734487621058]
:FORWARD ACCEPT [244436516:1687202220983]
:OUTPUT ACCEPT [214865931:36650771125]
:POSTROUTING ACCEPT [459302422:1723852990940]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:WEAVE-CANARY - [0:0]
COMMIT
# Completed on Wed Oct 13 15:59:28 2021
# Generated by iptables-save v1.8.4 on Wed Oct 13 15:59:28 2021
*filter
:INPUT ACCEPT [532158:96853387]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [550461:147643716]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-CANARY - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 6784 -m addrtype ! --src-type LOCAL -m conntrack ! --ctstate RELATED,ESTABLISHED -m comment --comment "Block non-local access to Weave Net control port" -j DROP
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.104.11.229/32 -p tcp -m comment --comment "lens-metrics/kube-state-metrics:metrics has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-b=i6GNzikzHXB4@m2e/Go=$oA dst -m comment --comment "DefaultAllow ingress isolation for namespace: lens-metrics" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-node-lease" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-bOS37]LfuuUp~C|)I6J**L[.{ src -m comment --comment "DefaultAllow egress isolation for namespace: lens-metrics" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-bOS37]LfuuUp~C|)I6J**L[.{ src -m comment --comment "DefaultAllow egress isolation for namespace: lens-metrics" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
COMMIT
# Completed on Wed Oct 13 15:59:28 2021
# Generated by iptables-save v1.8.4 on Wed Oct 13 15:59:28 2021
*nat
:PREROUTING ACCEPT [5288:1474277]
:INPUT ACCEPT [4937:1423774]
:OUTPUT ACCEPT [8994:568495]
:POSTROUTING ACCEPT [8994:568495]
:DOCKER - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-4GYYP3LQG34KP5D7 - [0:0]
:KUBE-SEP-5CFZ3AUTJOZBEZOL - [0:0]
:KUBE-SEP-5CGXH7DASKB62Q4K - [0:0]
:KUBE-SEP-5UEYBSUGHSTA6QAD - [0:0]
:KUBE-SEP-6RCO76BFBCSQVBLZ - [0:0]
:KUBE-SEP-ATSRDY3IOFOVVAPG - [0:0]
:KUBE-SEP-EBOWWFOH72TOJWTM - [0:0]
:KUBE-SEP-FH7D2NMVVKLXNRAQ - [0:0]
:KUBE-SEP-HMY5UMPYQIVCXKGN - [0:0]
:KUBE-SEP-KDBEQII3UJLMM3X3 - [0:0]
:KUBE-SEP-LLLB6FGXBLX6PZF7 - [0:0]
:KUBE-SEP-NRR327RNAE2FQYB4 - [0:0]
:KUBE-SEP-UBDOLDSIEXT433GN - [0:0]
:KUBE-SEP-UF7CBN4YEUTNQQME - [0:0]
:KUBE-SEP-URBQC3ENGFHBZVZP - [0:0]
:KUBE-SEP-W5GDWBGHD5IKDWRL - [0:0]
:KUBE-SEP-WRE3MG3RYMTYWFJZ - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ABZKSXDVUQEG7FHF - [0:0]
:KUBE-SVC-EHK5QUW6GBH6NXAZ - [0:0]
:KUBE-SVC-EKGV7LDCILA3JVOA - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-KJ3GGA5QR537XWXB - [0:0]
:KUBE-SVC-KL5DBAD57QU3K45W - [0:0]
:KUBE-SVC-LHQHAHAGDWLXQWQV - [0:0]
:KUBE-SVC-MOZMMOD3XZX35IET - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-SRNXP4JNS2EQLOND - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TWG3KRRUDXQDBGSU - [0:0]
:KUBE-SVC-X5RIHESLPDKRO2KR - [0:0]
:WEAVE - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/assure-svc:ftp2" -m tcp --dport 32002 -j KUBE-SVC-ABZKSXDVUQEG7FHF
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/assure-svc:ftp4" -m tcp --dport 32004 -j KUBE-SVC-EKGV7LDCILA3JVOA
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/assure-svc:ftp" -m tcp --dport 30000 -j KUBE-SVC-X5RIHESLPDKRO2KR
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/registry-svc" -m tcp --dport 32222 -j KUBE-SVC-KJ3GGA5QR537XWXB
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/assure-svc:ftp1" -m tcp --dport 32001 -j KUBE-SVC-EHK5QUW6GBH6NXAZ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/assure-svc:assure-api" -m tcp --dport 30001 -j KUBE-SVC-KL5DBAD57QU3K45W
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web-svc" -m tcp --dport 30002 -j KUBE-SVC-SRNXP4JNS2EQLOND
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/assure-svc:ftp3" -m tcp --dport 32003 -j KUBE-SVC-TWG3KRRUDXQDBGSU
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/grafana-svc" -m tcp --dport 30004 -j KUBE-SVC-LHQHAHAGDWLXQWQV
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-4GYYP3LQG34KP5D7 -s 10.32.0.8/32 -m comment --comment "default/assure-svc:ftp3" -j KUBE-MARK-MASQ
-A KUBE-SEP-4GYYP3LQG34KP5D7 -p tcp -m comment --comment "default/assure-svc:ftp3" -m tcp -j DNAT --to-destination 10.32.0.8:32003
-A KUBE-SEP-5CFZ3AUTJOZBEZOL -s 10.32.0.6/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-5CFZ3AUTJOZBEZOL -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.32.0.6:9153
-A KUBE-SEP-5CGXH7DASKB62Q4K -s 10.32.0.5/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-5CGXH7DASKB62Q4K -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.5:53
-A KUBE-SEP-5UEYBSUGHSTA6QAD -s 10.32.0.8/32 -m comment --comment "default/assure-svc:ftp2" -j KUBE-MARK-MASQ
-A KUBE-SEP-5UEYBSUGHSTA6QAD -p tcp -m comment --comment "default/assure-svc:ftp2" -m tcp -j DNAT --to-destination 10.32.0.8:32002
-A KUBE-SEP-6RCO76BFBCSQVBLZ -s 10.32.0.5/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-6RCO76BFBCSQVBLZ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.32.0.5:9153
-A KUBE-SEP-ATSRDY3IOFOVVAPG -s 10.32.0.8/32 -m comment --comment "default/grafana-svc" -j KUBE-MARK-MASQ
-A KUBE-SEP-ATSRDY3IOFOVVAPG -p tcp -m comment --comment "default/grafana-svc" -m tcp -j DNAT --to-destination 10.32.0.8:3000
-A KUBE-SEP-EBOWWFOH72TOJWTM -s 10.32.0.3/32 -m comment --comment "default/registry-svc" -j KUBE-MARK-MASQ
-A KUBE-SEP-EBOWWFOH72TOJWTM -p tcp -m comment --comment "default/registry-svc" -m tcp -j DNAT --to-destination 10.32.0.3:6000
-A KUBE-SEP-FH7D2NMVVKLXNRAQ -s 10.32.0.8/32 -m comment --comment "default/assure-svc:ftp4" -j KUBE-MARK-MASQ
-A KUBE-SEP-FH7D2NMVVKLXNRAQ -p tcp -m comment --comment "default/assure-svc:ftp4" -m tcp -j DNAT --to-destination 10.32.0.8:32004
-A KUBE-SEP-HMY5UMPYQIVCXKGN -s 10.32.0.8/32 -m comment --comment "default/assure-svc:ftp1" -j KUBE-MARK-MASQ
-A KUBE-SEP-HMY5UMPYQIVCXKGN -p tcp -m comment --comment "default/assure-svc:ftp1" -m tcp -j DNAT --to-destination 10.32.0.8:32001
-A KUBE-SEP-KDBEQII3UJLMM3X3 -s 10.32.0.4/32 -m comment --comment "default/web-svc" -j KUBE-MARK-MASQ
-A KUBE-SEP-KDBEQII3UJLMM3X3 -p tcp -m comment --comment "default/web-svc" -m tcp -j DNAT --to-destination 10.32.0.4:5000
-A KUBE-SEP-LLLB6FGXBLX6PZF7 -s 10.32.0.6/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-LLLB6FGXBLX6PZF7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.6:53
-A KUBE-SEP-NRR327RNAE2FQYB4 -s 10.32.0.6/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-NRR327RNAE2FQYB4 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.6:53
-A KUBE-SEP-UBDOLDSIEXT433GN -s 10.32.0.8/32 -m comment --comment "default/assure-svc:assure-api" -j KUBE-MARK-MASQ
-A KUBE-SEP-UBDOLDSIEXT433GN -p tcp -m comment --comment "default/assure-svc:assure-api" -m tcp -j DNAT --to-destination 10.32.0.8:50051
-A KUBE-SEP-UF7CBN4YEUTNQQME -s 192.168.60.59/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-UF7CBN4YEUTNQQME -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.60.59:6443
-A KUBE-SEP-URBQC3ENGFHBZVZP -s 10.32.0.8/32 -m comment --comment "default/assure-svc:ftp" -j KUBE-MARK-MASQ
-A KUBE-SEP-URBQC3ENGFHBZVZP -p tcp -m comment --comment "default/assure-svc:ftp" -m tcp -j DNAT --to-destination 10.32.0.8:30000
-A KUBE-SEP-W5GDWBGHD5IKDWRL -s 10.32.0.10/32 -m comment --comment "lens-metrics/prometheus:web" -j KUBE-MARK-MASQ
-A KUBE-SEP-W5GDWBGHD5IKDWRL -p tcp -m comment --comment "lens-metrics/prometheus:web" -m tcp -j DNAT --to-destination 10.32.0.10:9090
-A KUBE-SEP-WRE3MG3RYMTYWFJZ -s 10.32.0.5/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-WRE3MG3RYMTYWFJZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.5:53
-A KUBE-SERVICES -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp2 cluster IP" -m tcp --dport 32002 -j KUBE-SVC-ABZKSXDVUQEG7FHF
-A KUBE-SERVICES -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp4 cluster IP" -m tcp --dport 32004 -j KUBE-SVC-EKGV7LDCILA3JVOA
-A KUBE-SERVICES -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp cluster IP" -m tcp --dport 30000 -j KUBE-SVC-X5RIHESLPDKRO2KR
-A KUBE-SERVICES -d 10.97.177.136/32 -p tcp -m comment --comment "default/registry-svc cluster IP" -m tcp --dport 6000 -j KUBE-SVC-KJ3GGA5QR537XWXB
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp1 cluster IP" -m tcp --dport 32001 -j KUBE-SVC-EHK5QUW6GBH6NXAZ
-A KUBE-SERVICES -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:assure-api cluster IP" -m tcp --dport 50051 -j KUBE-SVC-KL5DBAD57QU3K45W
-A KUBE-SERVICES -d 10.99.148.91/32 -p tcp -m comment --comment "default/web-svc cluster IP" -m tcp --dport 5000 -j KUBE-SVC-SRNXP4JNS2EQLOND
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.100.190.92/32 -p tcp -m comment --comment "lens-metrics/prometheus:web cluster IP" -m tcp --dport 80 -j KUBE-SVC-MOZMMOD3XZX35IET
-A KUBE-SERVICES -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp3 cluster IP" -m tcp --dport 32003 -j KUBE-SVC-TWG3KRRUDXQDBGSU
-A KUBE-SERVICES -d 10.107.90.196/32 -p tcp -m comment --comment "default/grafana-svc cluster IP" -m tcp --dport 3000 -j KUBE-SVC-LHQHAHAGDWLXQWQV
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ABZKSXDVUQEG7FHF ! -s 10.244.0.0/16 -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp2 cluster IP" -m tcp --dport 32002 -j KUBE-MARK-MASQ
-A KUBE-SVC-ABZKSXDVUQEG7FHF -p tcp -m comment --comment "default/assure-svc:ftp2" -m tcp --dport 32002 -j KUBE-MARK-MASQ
-A KUBE-SVC-ABZKSXDVUQEG7FHF -m comment --comment "default/assure-svc:ftp2" -j KUBE-SEP-5UEYBSUGHSTA6QAD
-A KUBE-SVC-EHK5QUW6GBH6NXAZ ! -s 10.244.0.0/16 -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp1 cluster IP" -m tcp --dport 32001 -j KUBE-MARK-MASQ
-A KUBE-SVC-EHK5QUW6GBH6NXAZ -p tcp -m comment --comment "default/assure-svc:ftp1" -m tcp --dport 32001 -j KUBE-MARK-MASQ
-A KUBE-SVC-EHK5QUW6GBH6NXAZ -m comment --comment "default/assure-svc:ftp1" -j KUBE-SEP-HMY5UMPYQIVCXKGN
-A KUBE-SVC-EKGV7LDCILA3JVOA ! -s 10.244.0.0/16 -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp4 cluster IP" -m tcp --dport 32004 -j KUBE-MARK-MASQ
-A KUBE-SVC-EKGV7LDCILA3JVOA -p tcp -m comment --comment "default/assure-svc:ftp4" -m tcp --dport 32004 -j KUBE-MARK-MASQ
-A KUBE-SVC-EKGV7LDCILA3JVOA -m comment --comment "default/assure-svc:ftp4" -j KUBE-SEP-FH7D2NMVVKLXNRAQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-5CGXH7DASKB62Q4K
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-NRR327RNAE2FQYB4
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-6RCO76BFBCSQVBLZ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-5CFZ3AUTJOZBEZOL
-A KUBE-SVC-KJ3GGA5QR537XWXB ! -s 10.244.0.0/16 -d 10.97.177.136/32 -p tcp -m comment --comment "default/registry-svc cluster IP" -m tcp --dport 6000 -j KUBE-MARK-MASQ
-A KUBE-SVC-KJ3GGA5QR537XWXB -p tcp -m comment --comment "default/registry-svc" -m tcp --dport 32222 -j KUBE-MARK-MASQ
-A KUBE-SVC-KJ3GGA5QR537XWXB -m comment --comment "default/registry-svc" -j KUBE-SEP-EBOWWFOH72TOJWTM
-A KUBE-SVC-KL5DBAD57QU3K45W ! -s 10.244.0.0/16 -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:assure-api cluster IP" -m tcp --dport 50051 -j KUBE-MARK-MASQ
-A KUBE-SVC-KL5DBAD57QU3K45W -p tcp -m comment --comment "default/assure-svc:assure-api" -m tcp --dport 30001 -j KUBE-MARK-MASQ
-A KUBE-SVC-KL5DBAD57QU3K45W -m comment --comment "default/assure-svc:assure-api" -j KUBE-SEP-UBDOLDSIEXT433GN
-A KUBE-SVC-LHQHAHAGDWLXQWQV ! -s 10.244.0.0/16 -d 10.107.90.196/32 -p tcp -m comment --comment "default/grafana-svc cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SVC-LHQHAHAGDWLXQWQV -p tcp -m comment --comment "default/grafana-svc" -m tcp --dport 30004 -j KUBE-MARK-MASQ
-A KUBE-SVC-LHQHAHAGDWLXQWQV -m comment --comment "default/grafana-svc" -j KUBE-SEP-ATSRDY3IOFOVVAPG
-A KUBE-SVC-MOZMMOD3XZX35IET ! -s 10.244.0.0/16 -d 10.100.190.92/32 -p tcp -m comment --comment "lens-metrics/prometheus:web cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-MOZMMOD3XZX35IET -m comment --comment "lens-metrics/prometheus:web" -j KUBE-SEP-W5GDWBGHD5IKDWRL
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-UF7CBN4YEUTNQQME
-A KUBE-SVC-SRNXP4JNS2EQLOND ! -s 10.244.0.0/16 -d 10.99.148.91/32 -p tcp -m comment --comment "default/web-svc cluster IP" -m tcp --dport 5000 -j KUBE-MARK-MASQ
-A KUBE-SVC-SRNXP4JNS2EQLOND -p tcp -m comment --comment "default/web-svc" -m tcp --dport 30002 -j KUBE-MARK-MASQ
-A KUBE-SVC-SRNXP4JNS2EQLOND -m comment --comment "default/web-svc" -j KUBE-SEP-KDBEQII3UJLMM3X3
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WRE3MG3RYMTYWFJZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-LLLB6FGXBLX6PZF7
-A KUBE-SVC-TWG3KRRUDXQDBGSU ! -s 10.244.0.0/16 -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp3 cluster IP" -m tcp --dport 32003 -j KUBE-MARK-MASQ
-A KUBE-SVC-TWG3KRRUDXQDBGSU -p tcp -m comment --comment "default/assure-svc:ftp3" -m tcp --dport 32003 -j KUBE-MARK-MASQ
-A KUBE-SVC-TWG3KRRUDXQDBGSU -m comment --comment "default/assure-svc:ftp3" -j KUBE-SEP-4GYYP3LQG34KP5D7
-A KUBE-SVC-X5RIHESLPDKRO2KR ! -s 10.244.0.0/16 -d 10.111.144.54/32 -p tcp -m comment --comment "default/assure-svc:ftp cluster IP" -m tcp --dport 30000 -j KUBE-MARK-MASQ
-A KUBE-SVC-X5RIHESLPDKRO2KR -p tcp -m comment --comment "default/assure-svc:ftp" -m tcp --dport 30000 -j KUBE-MARK-MASQ
-A KUBE-SVC-X5RIHESLPDKRO2KR -m comment --comment "default/assure-svc:ftp" -j KUBE-SEP-URBQC3ENGFHBZVZP
-A WEAVE -m set --match-set weaver-no-masq-local dst -m comment --comment "Prevent SNAT to locally running containers" -j RETURN
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Wed Oct 13 15:59:28 2021
I wonder if this is caused by https://github.com/weaveworks/weave/issues/3824. Are you seeing "martian source" warnings in the system logs?
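For reference, a quick way to check for those warnings on a node is to grep the kernel log, e.g.:
$ dmesg -T | grep -i martian
$ journalctl -k | grep -i "martian source"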
We're about to drop the Weave CNI since we've been suffering with that issue for a very long time now and it still isn't resolved.
Also looks like https://github.com/weaveworks/weave/issues/3915 is similar to this one.
P.S. I can't help with this one, but wanted to connect some of the potential dots here while checking to see if there's any resolution yet.
It looks like we have the same problem. It's been a year since the last release; is this project still maintained?
Hi, sorry for the late reply, but this is a combination of CoreDNS and Weave CNI issues:
- CoreDNS tries to resolve against an upstream DNS server by default and fails if there is none; this can be fixed by removing the `resolved` reference in the CoreDNS ConfigMap (a rough sketch follows after this list).
- If you are running on a static network, kubelet needs to have the `node-ip` flag set in its service configuration, kind of like this:
sudo sed -i '0,/Environment="[^"]*/ s@Environment="[^"]*@& --node-ip='"${NODE_IP}"'@g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
- In addition, a route must be manually set up on the static interface for the Kubernetes API to be reachable, kind of like this:
sudo ip route add 10.96.0.0/12 via "${NODE_IP}" dev "${INTERFACE_NAME}"
where `10.96.0.0/12` is the service subnet for the Kubernetes API (the typical default).
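To make the CoreDNS part more concrete, this is roughly the shape of the change; the exact line to remove depends on your Corefile, so treat it as a sketch rather than the exact edit:
$ kubectl -n kube-system edit configmap coredns
# in the Corefile, remove or adjust the upstream line that points at the host's systemd-resolved stub, e.g.:
#     forward . /etc/resolv.conf
$ kubectl -n kube-system rollout restart deployment coredns
And after patching the kubelet drop-in with the sed command above, reload and restart kubelet so the new node-ip takes effect:
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet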
Same problem on Kubernetes v1.25.2.
/home/weave # ./weave --local status
Version: git-34de0b10a69c (up to date; next check at 2022/11/29 13:04:57)
Service: router
Protocol: weave 1..2
Name: 3e:8d:4b:59:75:fd(vmk8smasterprdqc01)
Encryption: disabled
PeerDiscovery: enabled
Targets: 4
Connections: 4 (4 established)
Peers: 5 (with 20 established connections)
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.32.0.0/12
DefaultSubnet: 10.32.0.0/12