kube-static-egress-ip
Allow rules to specify egress cidr to allow egress from multiple nodes.
Right now a director is required because a single static egress IP is desired. The downside is that the director itself is a single point of failure. What if there is a situation where I would rather have traffic go out from whatever node the source pod is on, and am less picky about the specific IP?
In my case, each kube node has multiple interfaces available. By default, all egress traffic goes out as the public IP address Calico is managing, but for particular services I would like to use a secondary range.
What if the StaticEgressIP CRD allowed a CIDR instead of a single IP?
apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: eip
spec:
  rules:
  - egresscidr: 100.137.146.0/24
    service-name: frontend
    cidr: 4.2.2.2/32
The routing rules would then be a bit simpler:
- If the local node has an interface with a matching IP, send traffic from the frontend pods to the destination cidr out on that interface (sketched below).
- If not, forward to the director as before.
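A minimal sketch of that node-local check, assuming the hypothetical egresscidr field above. The helper localEgressAddr is illustrative and not part of the project; a real controller would program iptables/routing rules rather than just print a decision:

package main

import (
	"fmt"
	"net"
)

// localEgressAddr returns the first address on any local interface that
// falls inside egressCIDR, or nil if this node has no matching address.
func localEgressAddr(egressCIDR string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(egressCIDR)
	if err != nil {
		return nil, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for _, a := range addrs {
		if n, ok := a.(*net.IPNet); ok && ipnet.Contains(n.IP) {
			return n.IP, nil
		}
	}
	return nil, nil
}

func main() {
	// 100.137.146.0/24 is the example egresscidr from the rule above.
	ip, err := localEgressAddr("100.137.146.0/24")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if ip != nil {
		// SNAT traffic from the frontend pods to 4.2.2.2/32 using this
		// address (e.g. via an iptables SNAT rule on this node).
		fmt.Println("SNAT locally using", ip)
	} else {
		// Fall back to today's behaviour: forward to the director node.
		fmt.Println("no matching interface; forward to the director")
	}
}

The idea is simply that a node which already owns an address in the egress range can SNAT there directly, so only nodes without such an address would need the director.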
I feel like such a solution could be a fairly painless way to relax the requirement of a single director for most traffic in some environments.
Does this seem like a possibility?
Thanks @captncraig for your thoughts.
What if there is a situation where I would rather have traffic go out from whatever node the source pod is on, and am less picky about the specific IP?
Could you please elaborate? What do you mean by "less picky about the specific IP"?
One of the key requirements is to have a single/static IP for a subset of the workloads (by namespace, by service selector, by pod selector, etc.).
Just tested with Docker EE/k8s/Calico nodes, but it is not working: no SNAT is happening on the gateway node. A tcpdump outside the cluster shows the source IP as the address of the node where the pod is running. Do you have any experience with that on Calico? Could you please help? Thanks!
@ztoth123 I have not tested with Calico yet. I am in the middle of a major update to the project, followed by testing with direct routing (Flannel host-gw, Calico, etc.) and VXLAN overlays (Flannel VXLAN, Weave).
I will update once I finish my testing.
Same problem (CNI: Calico).