kube-static-egress-ip
Traffic is not properly routed after configuring StaticEgressIP
I would like to use the static egress functionality.
CNI: calico
I installed the CRD, RBAC, gateway-manager, and controller exactly as the README describes.
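For reference, the install amounted to applying the manifests from the repo roughly as follows (file names are from memory of the repo layout and may differ; treat them as illustrative):

# Applied from a local clone of nirmata/kube-static-egress-ip.
# Manifest paths/names are approximate, not verified against the current repo.
kubectl apply -f config/static-egressip-crd.yaml
kubectl apply -f config/static-egressip-rbac.yaml
kubectl apply -f config/static-egressip-gateway-manager.yaml
kubectl apply -f config/static-egressip-controller.yaml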
Test environment: 2 Ubuntu replicas along with a headless service for discovery:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment-deployment
spec:
  selector:
    matchLabels:
      app: test-deployment
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      containers:
        - name: test-deployment
          image: ubuntu:bionic
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  clusterIP: None
  selector:
    app: test-deployment
  ports:
    - port: 80
      targetPort: 80
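Both pods come up, and the headless service picks them up as endpoints; an illustrative sanity check:

# List the test pods and their pod IPs (these are what end up in the egress ipset).
kubectl get pods -l app=test-deployment -o wide
# The headless service should expose both pod IPs as endpoints.
kubectl get endpoints frontend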
Afterwards, I configured the following StaticEgressIP:
apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: test-egress
spec:
  rules:
    - egressip: 51.15.136.12
      service-name: frontend
      cidr: 151.115.41.82/32
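My reading of this rule, as the README describes it: traffic from the pods selected by the frontend service, destined for 151.115.41.82/32, should leave the cluster SNATed to the egress IP 51.15.136.12. Applied and inspected with (local file name is mine; the fully qualified resource name is used since I have not checked which short aliases the CRD defines):

kubectl apply -f test-egress.yaml
kubectl get staticegressips.staticegressips.nirmata.io test-egress -o yaml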
When the StaticEgressIP resource is in place, traffic no longer reaches the target machine; running traceroute shows:
root@test-deployment-deployment-7469d4b659-lqxcr:/# traceroute 151.115.41.82
traceroute to 151.115.41.82 (151.115.41.82), 30 hops max, 60 byte packets
1 10.64.24.117 (10.64.24.117) 0.084 ms 0.112 ms 0.051 ms
2 * * *
3 * * *
4 * * *
5 * * *
6 * * *
7 * * *
Without it, traceroute successfully reaches the target machine:
root@test-deployment-deployment-7469d4b659-lqxcr:/# traceroute 151.115.41.82
traceroute to 151.115.41.82 (151.115.41.82), 30 hops max, 60 byte packets
1 10.64.24.117 (10.64.24.117) 0.143 ms 0.025 ms 0.022 ms
2 10.64.24.116 (10.64.24.116) 0.699 ms 0.606 ms 0.561 ms
3 10.66.0.1 (10.66.0.1) 1.004 ms 0.974 ms 0.949 ms
4 * * *
5 10.194.0.8 (10.194.0.8) 0.863 ms 10.194.0.10 (10.194.0.10) 0.918 ms 10.194.0.12 (10.194.0.12) 0.889 ms
6 212.47.225.212 (212.47.225.212) 1.182 ms 212.47.225.242 (212.47.225.242) 0.995 ms 212.47.225.196 (212.47.225.196) 0.862 ms
7 51.158.8.177 (51.158.8.177) 0.925 ms 51.158.8.181 (51.158.8.181) 1.260 ms 51.158.8.177 (51.158.8.177) 1.130 ms
8 be4751.rcr21.b022890-0.par04.atlas.cogentco.com (149.6.164.41) 1.374 ms 1.363 ms be4752.rcr21.b039311-0.par04.atlas.cogentco.com (149.6.165.65) 1.331 ms
9 * be3739.ccr31.par04.atlas.cogentco.com (154.54.60.185) 2.036 ms 2.006 ms
10 be2102.ccr41.par01.atlas.cogentco.com (154.54.61.17) 2.022 ms be3184.ccr42.par01.atlas.cogentco.com (154.54.38.157) 1.941 ms be2103.ccr42.par01.atlas.cogentco.com (154.54.61.21) 2.154 ms
11 be12266.ccr42.ams03.atlas.cogentco.com (154.54.56.173) 13.727 ms 13.694 ms 13.710 ms
12 be2815.ccr41.ham01.atlas.cogentco.com (154.54.38.206) 20.503 ms be2816.ccr42.ham01.atlas.cogentco.com (154.54.38.210) 20.788 ms be2815.ccr41.ham01.atlas.cogentco.com (154.54.38.206) 20.467 ms
13 be2483.ccr21.waw01.atlas.cogentco.com (130.117.51.61) 32.825 ms 32.705 ms 33.101 ms
14 be2486.rcr21.b016833-0.waw01.atlas.cogentco.com (154.54.37.42) 32.946 ms 33.318 ms 34.252 ms
15 be174.waw1dc1-net-bb02.scaleway.com (149.14.232.242) 34.141 ms 34.108 ms be174.waw1dc1-net-bb01.scaleway.com (149.14.232.234) 34.077 ms
16 151.115.2.9 (151.115.2.9) 33.449 ms 151.115.2.3 (151.115.2.3) 33.371 ms 33.488 ms
17 * * *
18 * * *
19 * * *
20 * * *
21 82-41-115-151.instances.scw.cloud (151.115.41.82) 33.627 ms 33.707 ms 33.599 ms
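To narrow down where the packets die in the failing case, a tcpdump on the selected gateway node should show whether the redirected traffic ever arrives there (illustrative):

# Run on the gateway node; -i any avoids guessing which Calico interface carries it.
sudo tcpdump -ni any host 151.115.41.82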
My kube-system namespace (output of kubectl get pods -n kube-system -o wide) looks like this:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-7d7d7cdc47-8tkzx 1/1 Running 0 15h 100.64.186.0 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
calico-node-jsl7j 1/1 Running 0 15h 10.70.118.71 scw-k8s-musing-lamport-default-994c6503cacc4bc <none> <none>
calico-node-llqbb 1/1 Running 0 15h 10.73.152.13 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
calico-node-s6ns8 1/1 Running 0 15h 10.64.24.117 scw-k8s-musing-lamport-default-649f6dc7bc2c43d <none> <none>
coredns-565d4499db-5ztj2 1/1 Running 0 15h 100.64.185.192 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
csi-node-5ch57 2/2 Running 0 15h 10.64.24.117 scw-k8s-musing-lamport-default-649f6dc7bc2c43d <none> <none>
csi-node-df82j 2/2 Running 0 15h 10.70.118.71 scw-k8s-musing-lamport-default-994c6503cacc4bc <none> <none>
csi-node-msgn4 2/2 Running 0 15h 10.73.152.13 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
konnectivity-agent-477pp 1/1 Running 0 15h 10.64.24.117 scw-k8s-musing-lamport-default-649f6dc7bc2c43d <none> <none>
konnectivity-agent-9dxx5 1/1 Running 0 15h 10.73.152.13 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
konnectivity-agent-c85d5 1/1 Running 0 15h 10.70.118.71 scw-k8s-musing-lamport-default-994c6503cacc4bc <none> <none>
kube-proxy-cwz69 1/1 Running 0 15h 10.64.24.117 scw-k8s-musing-lamport-default-649f6dc7bc2c43d <none> <none>
kube-proxy-dzlxm 1/1 Running 0 15h 10.70.118.71 scw-k8s-musing-lamport-default-994c6503cacc4bc <none> <none>
kube-proxy-gfxmg 1/1 Running 0 15h 10.73.152.13 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
metrics-server-c6ffb4c7c-dhwgc 1/1 Running 0 15h 100.64.185.193 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
node-problem-detector-l276k 1/1 Running 0 15h 100.64.185.195 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
node-problem-detector-mwt74 1/1 Running 0 15h 100.65.226.1 scw-k8s-musing-lamport-default-994c6503cacc4bc <none> <none>
node-problem-detector-sfpb9 1/1 Running 0 15h 100.64.46.193 scw-k8s-musing-lamport-default-649f6dc7bc2c43d <none> <none>
static-egressip-controller-59k2v 1/1 Running 0 14h 10.64.24.117 scw-k8s-musing-lamport-default-649f6dc7bc2c43d <none> <none>
static-egressip-controller-7f7md 1/1 Running 0 14h 10.70.118.71 scw-k8s-musing-lamport-default-994c6503cacc4bc <none> <none>
static-egressip-controller-cgrcl 1/1 Running 0 14h 10.73.152.13 scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe <none> <none>
static-egressip-gateway-manager-56d44c7959-5f5cl 1/1 Running 0 14h 100.64.46.199 scw-k8s-musing-lamport-default-649f6dc7bc2c43d <none> <none>
static-egressip-gateway-manager-56d44c7959-fwx92 1/1 Running 0 14h 100.65.226.3 scw-k8s-musing-lamport-default-994c6503cacc4bc <none> <none>
static-egressip-gateway-manager-56d44c7959-glhpq 1/1 Running 0 14h 100.65.226.4 scw-k8s-musing-lamport-default-994c6503cacc4bc <none> <none>
Some logs from the controller:
...
I0331 10:02:38.819257 1 director.go:114] Created ipset name: EGRESS-IP-4WD4DQOP5IBSOYWC
I0331 10:02:38.823511 1 director.go:123] Added ips [100.64.46.198 100.65.226.2] to the ipset name: EGRESS-IP-4WD4DQOP5IBSOYWC
I0331 10:02:38.825738 1 director.go:139] iptables rule in mangle table PREROUTING chain to match src to ipset
I0331 10:02:38.835630 1 director.go:188] added routing entry in custom routing table to forward destinationIP to egressGateway
I0331 10:02:38.836271 1 controller.go:216] Successfully synced 'default/test-egress'
I0331 10:03:08.796713 1 controller.go:396] Updating StaticEgressIP: default/test-egress
I0331 10:03:08.801994 1 controller.go:250] Processing update to StaticEgressIP: default/test-egress
I0331 10:03:08.838108 1 director.go:114] Created ipset name: EGRESS-IP-4WD4DQOP5IBSOYWC
I0331 10:03:08.841882 1 director.go:123] Added ips [100.64.46.198 100.65.226.2] to the ipset name: EGRESS-IP-4WD4DQOP5IBSOYWC
I0331 10:03:08.845503 1 director.go:139] iptables rule in mangle table PREROUTING chain to match src to ipset
I0331 10:03:08.856632 1 director.go:188] added routing entry in custom routing table to forward destinationIP to egressGateway
I0331 10:03:08.856673 1 controller.go:216] Successfully synced 'default/test-egress'
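The state the director reports can be double-checked on the node itself; an illustrative check using the names from the log above:

# The ipset should contain both pod IPs from the log (100.64.46.198, 100.65.226.2).
sudo ipset list EGRESS-IP-4WD4DQOP5IBSOYWC
# The mangle-table PREROUTING rule that matches that set should be present.
sudo iptables -t mangle -L PREROUTING -n -v | grep EGRESS-IP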
Logs from the selected gateway-manager:
...
2021/03/31 10:04:42 Gateway: dabdf368-d079-4f50-a9e6-47e4a324d2c2 is choosen for static egress ip test-egress
2021/03/31 10:04:47 Current gateway node: scw-k8s-musing-lamport-default-994c6503cacc4bc is ready so keeping same node as gateway
2021/03/31 10:04:47 Gateway: dabdf368-d079-4f50-a9e6-47e4a324d2c2 is choosen for static egress ip test-egress
2021/03/31 10:04:52 Current gateway node: scw-k8s-musing-lamport-default-994c6503cacc4bc is ready so keeping same node as gateway
2021/03/31 10:04:52 Gateway: dabdf368-d079-4f50-a9e6-47e4a324d2c2 is choosen for static egress ip test-egress
2021/03/31 10:04:57 Current gateway node: scw-k8s-musing-lamport-default-994c6503cacc4bc is ready so keeping same node as gateway
2021/03/31 10:04:57 Gateway: dabdf368-d079-4f50-a9e6-47e4a324d2c2 is choosen for static egress ip test-egress
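Similarly, the routing entries the director claims to add should be visible with plain iproute2 on the gateway node (I don't know which table name/id the controller uses, hence dumping all tables):

# Policy-routing rules installed for the egress traffic, if any.
ip rule show
# Search every table for the destination, since the custom table is unknown to me.
ip route show table all | grep 151.115.41.82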
Is something wrong with my configuration?
From the last part of the README: "operator has to manually choose a node to act of Gateway by annotating the node". Which annotation should be used, and on which node? Also, what gateway IP should be specified?
I tried the following without any success (traffic is still routed through 10.64.24.117):
kubectl annotate --overwrite node scw-k8s-musing-lamport-default-994c6503cacc4bc "nirmata.io/staticegressips-gateway=10.70.118.71"
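A way to confirm the annotation actually landed (illustrative; note the escaped dots in the jsonpath key):

kubectl get node scw-k8s-musing-lamport-default-994c6503cacc4bc \
  -o jsonpath='{.metadata.annotations.nirmata\.io/staticegressips-gateway}'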
Describing the StaticEgressIP resource yields:
Name:         test-egress
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  staticegressips.nirmata.io/v1alpha1
Kind:         StaticEgressIP
Metadata:
  Creation Timestamp:  2021-03-31T14:08:31Z
  Generation:          2
  Managed Fields:
    API Version:  staticegressips.nirmata.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:rules:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-03-31T14:08:31Z
    API Version:  staticegressips.nirmata.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:gateway-ip:
        f:gateway-node:
    Manager:      static-egressip-gateway-manager
    Operation:    Update
    Time:         2021-03-31T14:08:32Z
  Resource Version:  10516862791
  UID:               a1a45fdf-30a2-4be9-85c7-ee1f2741b2df
Spec:
  Rules:
    Cidr:            151.115.41.82/32
    Egressip:        51.15.136.12
    Service - Name:  frontend
Status:
  Gateway - Ip:    10.70.118.71
  Gateway - Node:  dabdf368-d079-4f50-a9e6-47e4a324d2c2
Events:  <none>
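For quick comparison with the annotate attempt above, the two status fields can be pulled directly (bracket notation because of the hyphenated keys; fully qualified CRD name as before):

kubectl get staticegressips.staticegressips.nirmata.io test-egress \
  -o jsonpath="{.status['gateway-ip']} {.status['gateway-node']}"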