
Delegate creating firewall rule to ingress-gce

Open nikhiljindal opened this issue 7 years ago • 6 comments

Similar to instance groups, we can let the ingress-gce controller running in each cluster manage the firewall rules in that cluster.

Firewall rules are independent in each cluster and do not require any shared logic between controllers in those clusters.

The advantage is that the ingress-gce controllers running in the clusters have information about the default network name and whether XPN is enabled, and hence can make a more informed decision about creating the firewall rule.
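
For concreteness, here is a rough sketch of what a per-cluster firewall sync could look like against the GCE compute API. This is not the actual ingress-gce code; the project, network path, node tag, node ports, and rule-name prefix are all made-up placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

// syncClusterFirewall is a hypothetical per-cluster sync. Every name here
// (project, network, nodeTag, nodePorts) is illustrative only.
func syncClusterFirewall(ctx context.Context, project, network, clusterUID, nodeTag string, nodePorts []string) error {
	svc, err := compute.NewService(ctx)
	if err != nil {
		return err
	}

	rule := &compute.Firewall{
		// Scope the rule name to this cluster's UID so rules created by
		// controllers in different clusters never collide.
		Name: fmt.Sprintf("k8s-fw-l7--%s", clusterUID),
		// The in-cluster controller already knows its network (and whether
		// it lives in an XPN host project), so it can fill this in correctly.
		Network: network,
		// Only Google's L7 proxy and health-check ranges need access.
		SourceRanges: []string{"130.211.0.0/22", "35.191.0.0/16"},
		Allowed: []*compute.FirewallAllowed{
			{IPProtocol: "tcp", Ports: nodePorts},
		},
		TargetTags: []string{nodeTag},
	}

	_, err = svc.Firewalls.Insert(project, rule).Context(ctx).Do()
	return err
}

func main() {
	ctx := context.Background()
	if err := syncClusterFirewall(ctx, "my-project", "global/networks/default",
		"cluster-uid-1234", "gke-node", []string{"30080", "30443"}); err != nil {
		log.Fatal(err)
	}
}
```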

cc @nicksardo @bowei @csbell @madhusudancs @G-Harmon thoughts?

nikhiljindal avatar Jan 23 '18 19:01 nikhiljindal

As long as there is no collision:
[1] If ingress-gce continues to use provider-uid to set up the FW rules, they will be unique.
[2] If ingress-gce uses uid to set up the FW rules, they will be unique unless a user has brought up the federation control plane at some point in the past.

If we continue with [1], I see no issue. If we move to [2], we'll need a way to "unpoison" a user's project if they tried federation at any point in the past (either through docs or complicated scripting). I'd rather not have [2].
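
To make the naming question concrete, here is a minimal sketch of the two options. The prefix and helper names are hypothetical; the real logic lives in ingress-gce's namer (linked below).

```go
package main

import "fmt"

// Option [1]: derive the rule name from the provider-uid, which is unique
// per cluster in the project, so rules never collide.
func ruleNameFromProviderUID(providerUID string) string {
	return fmt.Sprintf("k8s-fw-l7--%s", providerUID)
}

// Option [2]: derive the rule name from the cluster uid. If a federation
// control plane ever wrote its own uid into the project, clusters can end up
// deriving the same name and clobbering each other's rules.
func ruleNameFromUID(uid string) string {
	return fmt.Sprintf("k8s-fw-l7--%s", uid)
}

func main() {
	fmt.Println(ruleNameFromProviderUID("a1b2c3d4"))
	fmt.Println(ruleNameFromUID("a1b2c3d4"))
}
```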

csbell avatar Jan 23 '18 20:01 csbell

If the user has or had run the federation control plane, then MCI clusters today are vulnerable to the federation-ingress bug (instance groups using uid instead of provider-uid), which would mean two MCI clusters cannot exist in the same zone. If that should be a supported use case, then unpoisoning is unavoidable, or we'd have to require non-federated clusters.

On a side note, @bowei and I have talked about whitelisting the entire nodeport range instead of pigeonholing each nodeport. There seems to be a limit of 100 allowed ports/ranges in a firewall rule, and that's problematic for customers with 100+ services served by ingress. I'm not seeing a reason to whitelist each nodeport if the source ranges are the Google proxies and health checkers. Thoughts?
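
Roughly, a single rule covering the whole NodePort range would look something like the sketch below. The port range and source CIDRs are the commonly documented defaults for GKE and GCLB; the project, network, and tag names are placeholders, not what ingress-gce actually writes today.

```go
package main

import (
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	// Sketch only: one rule covering the whole default NodePort range,
	// restricted to the GCLB proxy and health-check source ranges, instead
	// of enumerating one port per Ingress-backed service.
	rule := &compute.Firewall{
		Name:         "k8s-fw-nodeport-range",
		Network:      "global/networks/default",
		SourceRanges: []string{"130.211.0.0/22", "35.191.0.0/16"},
		Allowed: []*compute.FirewallAllowed{
			{IPProtocol: "tcp", Ports: []string{"30000-32767"}},
		},
		TargetTags: []string{"gke-node"},
	}
	fmt.Printf("%+v\n", rule)
}
```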

nicksardo avatar Jan 23 '18 22:01 nicksardo

Does this mean glbc installs/syncs a single k8s-fw-internal-lbs-hc-only rule for the entire relevant NP range? I'm confused about the 100 limit. Is this an enumeration limit of 100, or is it okay to whitelist 30000-32000?

csbell avatar Jan 23 '18 22:01 csbell

The suggestion is to whitelist the entire NP range.

There is a limit on the complexity of a single firewall rule as well as on the number of rules. To support 100+ Ingresses, we would need to do some kind of sharding of node ports across multiple rules (if we wish to go that direction).
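
If we did go the sharding route, the chunking itself is simple; something like the sketch below, which is purely illustrative (names are made up, and 100 is taken as the per-rule limit mentioned above), each shard becoming its own firewall rule.

```go
package main

import "fmt"

// shardPorts splits node ports into chunks that fit under a per-rule limit,
// so each chunk can become its own firewall rule (e.g. k8s-fw-l7--<uid>-0,
// -1, ...). Illustrative only; not how ingress-gce does it today.
func shardPorts(ports []string, maxPerRule int) [][]string {
	var shards [][]string
	for len(ports) > maxPerRule {
		shards = append(shards, ports[:maxPerRule])
		ports = ports[maxPerRule:]
	}
	if len(ports) > 0 {
		shards = append(shards, ports)
	}
	return shards
}

func main() {
	// 250 node ports -> 3 rules with a limit of 100 ports/ranges per rule.
	var ports []string
	for p := 30000; p < 30250; p++ {
		ports = append(ports, fmt.Sprint(p))
	}
	for i, shard := range shardPorts(ports, 100) {
		fmt.Printf("rule shard %d: %d ports\n", i, len(shard))
	}
}
```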

bowei avatar Jan 23 '18 23:01 bowei

Where can I learn more about uid vs provider-uid?

G-Harmon avatar Jan 24 '18 00:01 G-Harmon

https://github.com/kubernetes/kubernetes/issues/37306
https://github.com/kubernetes/ingress-nginx/pull/278
https://github.com/kubernetes/ingress-gce/blob/7ba514140aa080a45b37681a6c04760372784406/cmd/glbc/app/namer.go#L81-L84

nicksardo avatar Jan 24 '18 01:01 nicksardo