kubernetes-the-hard-way
Local service-pod access needs net.bridge.bridge-nf-call-iptables=1
When the "bridge" CNI-plugin is used the sysctl must be set;
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1
If it is not, NAT will not work when accessing a service whose endpoint is on the local node. A common case is DNS queries failing when the "coredns" pod happens to be on the same node.
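A minimal way to reproduce the symptom from a pod, assuming kubectl access and using a throwaway busybox pod (the pod name and image tag are just placeholders):
kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- nslookup kubernetes.default
# Without the sysctl this times out whenever coredns is scheduled on the same node;
# with net.bridge.bridge-nf-call-iptables=1 it resolves normally.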
This may be the default on some systems, but not on all, so it should be documented. Please see https://github.com/kubernetes/kubernetes/issues/87426
For Ubuntu Server:
modprobe br_netfilter
echo "br_netfilter" >> /etc/modules-load.d/modules.conf
Those two commands worked for me. They should be added here: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#configure-cni-networking
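To also make the sysctl itself survive a reboot, a drop-in under /etc/sysctl.d/ works on systemd-based systems (the file name below is only a suggestion):
echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/99-kubernetes.conf
sysctl --system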
I came from issue #87426. @uablrek, your solution worked in my case as I was following the Kubernetes the Hard Way guide with flannel as the CNI plugin. Now I'm able to curl a service IP from a pod on the same node (cross-node communication already worked) and to resolve kubernetes and my own services via "coredns". Here are the versions in my environment:
nodes: Ubuntu 20.04.6 LTS, kernel 5.15.0-1056-aws
containerd v1.7.14, runs as a systemd service on worker nodes
kubectl, kube-proxy, kubelet v1.28.3, run as systemd services on worker nodes
kube-apiserver, kube-controller-manager, kube-scheduler v1.28.3, run as systemd services on controller nodes
flannel container image 0.24.3, deployed as a DaemonSet
coredns container image 1.7.0, deployed as a Deployment
flannel backend type is "vxlan"; since my nodes run on AWS, the other backend types were not suitable for me. kube-proxy uses the "iptables" mode.
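For reference, a rough way to repeat the same-node check; the nginx deployment, service, and test image below are placeholders, and what matters is that the backing pod lands on the same node as the test pod:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port 80
kubectl get svc nginx                     # note the ClusterIP
kubectl run curl-test --image=curlimages/curl --restart=Never --rm -it -- curl -sS http://<cluster-ip>
# Before enabling net.bridge.bridge-nf-call-iptables the request hangs when the nginx pod
# and curl-test run on the same node; afterwards it returns the nginx welcome page.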
For anyone who experiences the same or a similar issue, it is worth trying to enable the br_netfilter kernel module.
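A quick way to check the current state on a node before changing anything:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
# If the module is missing or the sysctl prints 0, the fix described above applies.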