host to service network not working after reboot/join after upgrade from v3.26.3 to v3.27.3 (v3.28.0), eBPF dataplane / VXLAN / no kube-proxy / DSR
Killing the calico-node pod immediately fixes the problem.
Expected Behavior
Host-to-service networking works after a node reboot/join.
Current Behavior
Host-to-kube-service networking does not work after a node reboot/join; it only starts working once you kill the calico-node pod.
Possible Solution
Kill the calico-node pod on the restarted or newly joined node:
kubectl delete pod -n calico-system calico-node-xxxxx
Steps to Reproduce (for bugs)
- reboot a node
- from the rebooted host, try to connect to any kube service (e.g. kube-dns):
nslookup example.com {kube-dns svc ip}
- the lookup fails
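For concreteness, using the kube-dns service IP of this cluster (10.243.0.10, which appears in the route tables later in this report), the failing test looks like:

nslookup example.com 10.243.0.10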
Context
Any pod with hostNetwork fails to start after a node reboots or joins; the cluster (pod) network works fine.
Your Environment
- Calico version: v3.27.3 and v3.28.0
- Helm chart: projectcalico/tigera-operator:v3.28.0
- chart: projectcalico/tigera-operator
  version: v3.28.0
  name: calico
  namespace: tigera-operator
  values:
    - installation:
        calicoNetwork:
          linuxDataplane: BPF
          mtu: 8950
          bgp: Disabled
          ipPools:
            - cidr: 10.244.0.0/14
              blockSize: 20
              encapsulation: VXLAN
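For reference, this looks like a helmfile release entry; a minimal sketch of the equivalent plain helm invocation, assuming the values above are saved to a hypothetical values.yaml:

helm upgrade --install calico projectcalico/tigera-operator \
  --version v3.28.0 --namespace tigera-operator --create-namespace -f values.yaml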
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "172.24.1.15"
  KUBERNETES_SERVICE_PORT: "6443"
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  annotations:
    operator.tigera.io/bpfEnabled: "true"
  creationTimestamp: "2023-11-09T09:16:01Z"
  generation: 1
  name: default
  namespace: calico-system
  resourceVersion: "65934135"
  uid: 5d0de475-8603-4e7b-9282-baecc231e48e
spec:
  bpfEnabled: true
  bpfExternalServiceMode: DSR
  bpfLogLevel: ""
  floatingIPs: Disabled
  healthPort: 9099
  logSeverityScreen: Info
  reportingInterval: 0s
  vxlanMTU: 8950
  vxlanVNI: 4096
# /etc/NetworkManager/conf.d/calico.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:bpf*.cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali
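A quick way to verify the drop-in works as intended, i.e. that NetworkManager leaves the Calico devices unmanaged (a generic check, not part of the original report):

nmcli device status | grep -E 'cali|vxlan|tunl'
# the STATE column should show "unmanaged" for these devices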
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
certificateKey: ""
skipPhases:
  - addon/kube-proxy
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.243.0.0/16"
  podSubnet: "10.244.0.0/14"
  dnsDomain: "l8s.local"
controllerManager:
  extraArgs:
    "node-cidr-mask-size": "20"
    "allocate-node-cidrs": "false"
apiServer:
  certSANs:
    - "172.24.1.15"
    - "172.24.1.16"
    - "k8clust-lon01.l8s.space"
clusterName: "k8clust-lon01"
controlPlaneEndpoint: "k8clust-lon01.l8s.space:6443"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 4000
cgroupDriver: systemd
serverTLSBootstrap: true
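For reference, a config like the above (hypothetical file name kubeadm-init.yaml) would be applied with:

kubeadm init --config kubeadm-init.yaml

The skipPhases entry is what leaves kube-proxy out, so the Calico eBPF dataplane handles all service traffic.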
- Orchestrator version: Kubernetes 1.29.5
- Operating System and version:
# uname -r
5.14.0-452.el9.x86_64
# cat /etc/os-release
NAME="CentOS Stream"
VERSION="9"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="9"
PLATFORM_ID="platform:el9"
PRETTY_NAME="CentOS Stream 9"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:centos:centos:9"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
Attached logs:
- calico-node log after new node join
- calico-node install-cni log after new node join
- calico-node log after killing the pod
- calico-node install-cni log after killing the pod
Killing the calico-node pod immediately fixes the problem.
What changes after killing the pod? Could you share your routing table before/after?
Did you have ctlb disabled before upgrade?
Did you have ctlb disabled before upgrade?
Nope. Can you point me to additional parameters I need to set up? I think it is something with bpfConnectTimeLoadBalancingEnabled, bpfConnectTimeLoadBalancing, or bpfHostNetworkedNATWithoutCTLB. None of these are currently set.
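For context, a sketch of inspecting and explicitly setting the CTLB-related knobs mentioned above (bpfConnectTimeLoadBalancing and bpfHostNetworkedNATWithoutCTLB are FelixConfiguration fields introduced around v3.27; the values shown are, to my understanding, the v3.27+ defaults, so treat this as illustrative):

# show which BPF-related fields are currently set:
kubectl get felixconfiguration default -o yaml | grep -i bpf

# explicitly pin the defaults: CTLB for TCP only, host-networked NAT without CTLB
kubectl patch felixconfiguration default --type merge -p \
  '{"spec":{"bpfConnectTimeLoadBalancing":"TCP","bpfHostNetworkedNATWithoutCTLB":"Enabled"}}'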
Killing the calico-node pod immediately fixes the problem.
What changes after killing the pod? Could you share your routing table before/after?
Routes almost did not change. I am checking via DNS resolution against the kube-dns service, and its route is there both after reboot (when it is not working) and after killing the pod (when it is working):
10.243.0.10 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
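A generic way to confirm which device the kernel picks for that service IP (not from the original report):

ip route get 10.243.0.10
# expected something like: 10.243.0.10 via 169.254.1.1 dev bpfin.cali ...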
That route is correct. Since (IIRC) 3.27 we route traffic from the host to UDP services via that device by default.
I wonder if some route caching is in play. Could you dump ip route show cached?
I wonder if some route caching is in play. Could you dump ip route show cached?
ip route show cached
It is empty before and after killing the pod.
BTW, apparently I have a problem only with services pinned in the route table, and they are all UDP services (DNS services). There are no TCP services in the route table, and those work fine (tested the kube API service via its service IP, and nginx ingress).
Just to confirm, do you see the same problem from host-networked pods/processes or from regular pods as well?
I tried to reproduce the issue: I created a cluster in GCP with kubeadm, installed Calico 3.26.4, upgraded to 3.28, and my DNS worked just fine.
Would you be able to tcpdump whether your traffic is reaching the service, and what kind of packets are exiting from bpfout.cali on your test node?
If your cluster is not a production cluster, we could dig deeper by enabling BPF logging to get some more useful logs. Ideally we could sync on the Calico Users Slack.
Just to confirm, do you see the same problem from host-networked pods/processes or from regular pods as well?
Regular pods don't have a problem; they work fine. Tested it to be sure.
So only host-networked pods/processes have the problem.
Would you be able to tcpdump whether your traffic is reaching the service, what kind of packets are exiting from bpfout.cali on your test node?
So: a fresh VM node with a single interface, joined to the cluster. I tested with
nslookup openebs-api-rest.openebs.svc.l8s.local. 10.243.0.10
When it's working well, after the calico-node pod kill: bpfout.workin.pcap.gz (attached)
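For reference, a sketch of the kind of capture used here (the interface name comes from this thread; filtering to DNS traffic is an assumption matching the nslookup test):

tcpdump -ni bpfout.cali -w bpfout.pcap 'udp port 53'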
That doesn't seem to be a problem (referring to the capture above). Do you see packets returning to the client in both cases? Is 10.243.0.10 a local pod or remote?
You could also enable the settings below in the default FelixConfiguration and provide BPF logs from the node using bpftool prog tracelog > log.txt for the case when it does not work. That should give us good insight.
bpfLogLevel: Debug
bpfLogFilters:
  - all: host 172.24.1.29 and udp port 53
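A side note: bpfLogFilters is, to my understanding, a map from an interface name (or "all") to a pcap-style filter expression, which matters for the exchange that follows. A sketch of applying it in map form via kubectl patch:

kubectl patch felixconfiguration default --type merge -p \
  '{"spec":{"bpfLogLevel":"Debug","bpfLogFilters":{"all":"host 172.24.1.29 and udp port 53"}}}'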
Is 10.243.0.10 a local pod or remote?
It's a service IP; the pods behind that IP are remote. hostNetwork -> pod IP has no issue.
bpfLogLevel: Debug
bpfLogFilters:
  - all: host 172.24.1.29 and udp port 53
That does not work. These changes have been accepted by the API:
bpfLogLevel: Debug
bpfLogFilters:
  all: host 172.24.1.29 and udp port 53
However, the bpfLogFilters property then disappeared from the object, so here is a log, probably not filtered.
BTW, I found a repeated error in the tigera operator, probably not related to this issue:
{"level":"error","ts":"2024-06-12T08:07:07Z","logger":"controller_ippool","msg":"Cannot update an IP pool not owned by the operator","Request.Namespace":"","Request.Name":"periodic-5m0s-reconcile-event","reason":"ResourceValidationError","stacktrace":"github.com/tigera/operator/pkg/controller/status.(*statusManager).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/status/status.go:356\ngithub.com/tigera/operator/pkg/controller/ippool.(*Reconciler).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/ippool/pool_controller.go:291\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:314\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:265\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:226"}
Thanks for the logs, they are helpful. It seems like the packets from bpfout.cali do not make it to any other device (perhaps worth verifying with tcpdump); they seem to be eaten by the host network stack. Either they have a wrong checksum (unlikely, since that would not get fixed by restarting calico-node), or they get dropped by RPF (could you check the value in /proc/sys/net/ipv4/conf/bpfout.cali/rp_filter?), which could be strict and then fixed after the restart. Or they get dropped by iptables. Do you have a default route? I will give it some more tries to reproduce it.
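A compact way to run the RPF check suggested above (rp_filter values: 0 = off, 1 = strict, 2 = loose, per the kernel's ip-sysctl documentation):

for dev in all default bpfin.cali bpfout.cali; do
  echo -n "$dev: "; cat /proc/sys/net/ipv4/conf/$dev/rp_filter
done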
Before killing the pod (after node restart, when we have the problem):
cat /proc/sys/net/ipv4/conf/bpfout.cali/rp_filter
1
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.24.1.1 0.0.0.0 UG 100 0 0 eth0
10.243.0.10 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.8.170 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.27.94 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.38.83 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.51.78 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.54.252 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.61.248 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.70.167 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.77.103 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.94.254 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.117.56 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.121.189 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.125.134 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.140.20 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.150.161 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.157.165 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.157.183 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.158.119 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.164.105 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.181.122 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.185.212 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.194.22 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.230.225 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.251.8 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.254.169 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.255.22 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.244.16.0 10.244.16.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.244.32.0 10.244.32.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.244.48.0 0.0.0.0 255.255.240.0 U 0 0 0 *
10.244.48.6 0.0.0.0 255.255.255.255 UH 0 0 0 califee8cfb24e3
10.244.48.7 0.0.0.0 255.255.255.255 UH 0 0 0 calie89ffdb4633
10.244.48.8 0.0.0.0 255.255.255.255 UH 0 0 0 cali193e57c628e
10.244.48.9 0.0.0.0 255.255.255.255 UH 0 0 0 calid31be247766
10.244.192.0 10.244.192.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.245.80.0 10.245.80.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.246.96.0 10.246.96.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.247.80.0 10.247.80.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.247.112.0 10.247.112.0 255.255.240.0 UG 0 0 0 vxlan.calico
169.254.1.1 0.0.0.0 255.255.255.255 UH 0 0 0 bpfin.cali
172.24.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:zkuE8qdwsVpH6Kd2 */ /* Accept packets from flows that pre-date BPF. */ mark match 0x5000000/0x5000000 ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:XQL0mC-L6wldZdgN */ /* Drop packets from unknown flows. */ mark match 0x5000000/0x5000000
ACCEPT all -- anywhere anywhere /* cali:pbFdTFCLcV-MVLSS */ mark match 0x1000000/0x1000000
DROP all -- anywhere anywhere /* cali:u_TyW7ph8QsYnThE */ mark match ! 0x1000000/0x1000000
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:umcmOn0WnTNOKJrp */ /* Pre-approved by BPF programs. */ mark match 0x3000000/0x3000000
DROP all -- anywhere anywhere /* cali:NnQ109Z-tVFkJGc1 */ /* From workload without BPF seen mark */ mark match ! 0x1000000/0x1000000
MARK all -- anywhere anywhere /* cali:YmI_zfAgHIHbINEV */ /* Mark pre-established flows. */ ctstate RELATED,ESTABLISHED MARK or 0x8000000
cali-to-wl-dispatch all -- anywhere anywhere /* cali:-EFgmtwMJVO64q9s */ /* To workload, check workload is known. */
ACCEPT all -- anywhere anywhere /* cali:wP1i1sEU71uRzM5d */ /* To workload, mark has already been verified. */
ACCEPT all -- anywhere anywhere /* cali:3t5V_2xe4DFVlHBq */ /* From */ /* bpfout.cali */ /* device, mark verified, accept. */
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:CD7jZCSqPP_KjGsd */ /* Mark pre-established flows. */ ctstate RELATED,ESTABLISHED MARK or 0x8000000
KUBE-FIREWALL all -- anywhere anywhere
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain cali-to-wl-dispatch (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:cEwZ48PLVj36YM8T */
ACCEPT all -- anywhere anywhere /* cali:Geyg9JmnnDNPlLHX */
ACCEPT all -- anywhere anywhere /* cali:ICRLMI_1Qq8A0HWR */
ACCEPT all -- anywhere anywhere /* cali:bhG4cOJs_TnufY0G */
DROP all -- anywhere anywhere /* cali:k6ZOE-XDbClrqbFe */ /* Unknown interface */
After killing the pod (when it is working well):
cat /proc/sys/net/ipv4/conf/bpfout.cali/rp_filter
0
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.24.1.1 0.0.0.0 UG 100 0 0 eth0
10.243.0.10 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.8.170 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.27.94 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.38.83 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.51.78 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.54.252 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.61.248 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.70.167 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.77.103 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.94.254 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.117.56 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.121.189 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.125.134 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.140.20 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.150.161 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.157.165 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.157.183 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.158.119 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.164.105 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.181.122 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.185.212 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.194.22 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.230.225 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.251.8 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.254.169 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.243.255.22 169.254.1.1 255.255.255.255 UGH 0 0 0 bpfin.cali
10.244.16.0 10.244.16.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.244.32.0 10.244.32.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.244.48.0 0.0.0.0 255.255.240.0 U 0 0 0 *
10.244.48.6 0.0.0.0 255.255.255.255 UH 0 0 0 califee8cfb24e3
10.244.48.7 0.0.0.0 255.255.255.255 UH 0 0 0 calie89ffdb4633
10.244.48.8 0.0.0.0 255.255.255.255 UH 0 0 0 cali193e57c628e
10.244.48.9 0.0.0.0 255.255.255.255 UH 0 0 0 calid31be247766
10.244.192.0 10.244.192.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.245.80.0 10.245.80.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.246.96.0 10.246.96.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.247.80.0 10.247.80.0 255.255.240.0 UG 0 0 0 vxlan.calico
10.247.112.0 10.247.112.0 255.255.240.0 UG 0 0 0 vxlan.calico
169.254.1.1 0.0.0.0 255.255.255.255 UH 0 0 0 bpfin.cali
172.24.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:zkuE8qdwsVpH6Kd2 */ /* Accept packets from flows that pre-date BPF. */ mark match 0x5000000/0x5000000 ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:XQL0mC-L6wldZdgN */ /* Drop packets from unknown flows. */ mark match 0x5000000/0x5000000
ACCEPT all -- anywhere anywhere /* cali:pbFdTFCLcV-MVLSS */ mark match 0x1000000/0x1000000
DROP all -- anywhere anywhere /* cali:u_TyW7ph8QsYnThE */ mark match ! 0x1000000/0x1000000
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:umcmOn0WnTNOKJrp */ /* Pre-approved by BPF programs. */ mark match 0x3000000/0x3000000
DROP all -- anywhere anywhere /* cali:NnQ109Z-tVFkJGc1 */ /* From workload without BPF seen mark */ mark match ! 0x1000000/0x1000000
MARK all -- anywhere anywhere /* cali:YmI_zfAgHIHbINEV */ /* Mark pre-established flows. */ ctstate RELATED,ESTABLISHED MARK or 0x8000000
cali-to-wl-dispatch all -- anywhere anywhere /* cali:-EFgmtwMJVO64q9s */ /* To workload, check workload is known. */
ACCEPT all -- anywhere anywhere /* cali:wP1i1sEU71uRzM5d */ /* To workload, mark has already been verified. */
ACCEPT all -- anywhere anywhere /* cali:3t5V_2xe4DFVlHBq */ /* From */ /* bpfout.cali */ /* device, mark verified, accept. */
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:CD7jZCSqPP_KjGsd */ /* Mark pre-established flows. */ ctstate RELATED,ESTABLISHED MARK or 0x8000000
KUBE-FIREWALL all -- anywhere anywhere
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain cali-to-wl-dispatch (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:cEwZ48PLVj36YM8T */
ACCEPT all -- anywhere anywhere /* cali:Geyg9JmnnDNPlLHX */
ACCEPT all -- anywhere anywhere /* cali:ICRLMI_1Qq8A0HWR */
ACCEPT all -- anywhere anywhere /* cali:bhG4cOJs_TnufY0G */
DROP all -- anywhere anywhere /* cali:k6ZOE-XDbClrqbFe */ /* Unknown interface */
cat /proc/sys/net/ipv4/conf/bpfout.cali/rp_filter -> 1 (before, broken) vs -> 0 (after killing the pod, working)
That is the problem. Something sets it to 1 (strict), and when calico-node restarts, it sets it back to 0. The something is probably your systemd, which applies configuration when a new device is added. It seems like the issue is present with systemd 245+. What is your Linux distro (which I should have asked a while ago)?
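Background on the systemd 245+ remark: since that version, udev re-applies sysctl.d entries whose keys glob-match an interface name whenever a new network interface appears, so a drop-in like this hypothetical one would flip a freshly created bpfout.cali to strict RPF even though net.ipv4.conf.all.rp_filter is 0:

# hypothetical /etc/sysctl.d/50-example.conf -- check your own sysctl.d files
net.ipv4.conf.*.rp_filter = 1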
What is your Linux distro (which I should have asked a while ago)?
# uname -r
5.14.0-452.el9.x86_64
# cat /etc/os-release
NAME="CentOS Stream"
VERSION="9"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="9"
PLATFORM_ID="platform:el9"
PRETTY_NAME="CentOS Stream 9"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:centos:centos:9"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
The something is probably your systemd
In my sysctl settings it is set as:
net.ipv4.conf.all.rp_filter=0
cat /etc/sysctl.conf
fs.inotify.max_user_instances=1048576
fs.inotify.max_user_watches=1048576
fs.inotify.max_queued_events=16384
fs.aio-max-nr=1048576
vm.max_map_count=262144
net.ipv4.ip_nonlocal_bind=1
net.ipv4.ip_forward=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv4.neigh.default.gc_thresh1=8192
net.ipv4.neigh.default.gc_thresh2=12228
net.ipv4.neigh.default.gc_thresh3=24456
net.core.somaxconn=65535
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.accept_local=1
kernel.panic=30
kernel.panic_on_oops=1
vm.overcommit_memory=2
vm.panic_on_oom=0
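Since the /etc/sysctl.conf above does not touch per-interface rp_filter, one way to find which drop-in (if any) does, remembering that systemd-sysctl also reads these directories:

grep -r rp_filter /etc/sysctl.d /usr/lib/sysctl.d /run/sysctl.d 2>/dev/null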
@a-sorokin-sdg do you still see the issue? Have you figured out what is changing the RPF setting? Closing now, but feel free to reopen if you have any new info.
Yes, I still have the issue. I have tried to catch what changes it via audit, without success.
I haven't caught Calico red-handed setting rp_filter on the bpf*.cali devices, but I did catch it setting rp_filter on a pod's eth interface:
2024-11-13 09:40:43.865 [ERROR][9990] felix/bpf_ep_mgr.go 3062: Failed to set /proc/sys/net/ipv4/conf/calie2b07182c0a/rp_filter to 2 err=open /proc/sys/net/ipv4/conf/calie2b07182c0a/rp_filter: no such file or directory
2024-11-13 09:40:43.865 [WARNING][9990] felix/bpf_ep_mgr.go 1075: Failed to set rp_filter for calie2b07182c0a. error=open /proc/sys/net/ipv4/conf/calie2b07182c0a/rp_filter: no such file or directory
The new Calico 3.29 FelixConfiguration feature bpfEnforceRPF: "Disabled" solved the problem for me. Now /proc/sys/net/ipv4/conf/bpfout.cali/rp_filter is correctly set up after reboot.
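For anyone landing here, a sketch of applying that fix (bpfEnforceRPF is a FelixConfiguration field; verify the accepted values for your Calico version):

kubectl patch felixconfiguration default --type merge -p \
  '{"spec":{"bpfEnforceRPF":"Disabled"}}'

# after a node reboot this should now print 0:
cat /proc/sys/net/ipv4/conf/bpfout.cali/rp_filter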