IP whitelist setup for user clusters
Description of the feature you would like to add / User story
As a user cluster administrator, I would like to add an IP whitelist for accessing my cluster's API, in order to restrict how the cluster can be reached from the internet.
Solution details
- Should be independent of the expose strategy used (the expose strategy can be changed)
- The allowlist should be configurable at the user cluster admin level (a rough sketch of what this could look like follows below)
- On the KKP UI, I go to my user cluster pane and create a whitelist of IP addresses
- It should only affect the user cluster whose settings have been modified
- It should be a self-service function for user cluster administrators, so that no seed-level configuration is needed to use this functionality
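To make the self-service requirement concrete, here is a minimal sketch of what such a setting could look like on the Cluster object. The field name `apiServerAllowedIPRanges` and its placement are illustrative assumptions, not the final KKP API:

```go
// Hypothetical shape of the self-service setting on the Cluster object.
// The field name and its location are illustrative only, not the final KKP API.
package example

// ClusterSpec fragment: the admin-facing allowlist for apiserver access.
type ClusterSpec struct {
	// APIServerAllowedIPRanges restricts apiserver access to these CIDRs.
	// An empty list means no restriction (the current behavior).
	APIServerAllowedIPRanges []string `json:"apiServerAllowedIPRanges,omitempty"`
}
```

A plain list of CIDR strings keeps the setting expose-strategy-agnostic at the API level; how it is enforced is then a controller concern per strategy.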
Alternative approaches
- Setting it up manually at the seed level, based on tickets submitted by user cluster administrators to KKP admins. However, the complexity of this depends on the expose strategy in use (which should also remain changeable), and the process would be fully manual.
Use cases
- A user cluster owner may not need to access the Kubernetes API from the internet at all, only from the intranet (VPN); making it inaccessible from anywhere else would strengthen security
- Different organizations use different user clusters within one KKP installation that is managed by a hosting company
Additional information
Technical details on implementation feasibility for individual expose strategies:
LoadBalancer Expose Strategy
As each cluster's apiserver is exposed via a dedicated LoadBalancer k8s service (`front-loadbalancer`), we can simply use the service's `.spec.loadBalancerSourceRanges` to configure the allowlist. This should be supported by most relevant cloud providers.
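As a minimal sketch of the mechanics (not the actual KKP implementation), the following client-go snippet applies an allowlist to that service by setting `.spec.loadBalancerSourceRanges`; the control-plane namespace and the CIDRs are placeholder assumptions:

```go
// Sketch: restrict apiserver access by setting loadBalancerSourceRanges on
// the user cluster's dedicated front-loadbalancer service in the seed.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	ns := "cluster-abcd1234" // control-plane namespace of the user cluster (assumption)

	svc, err := client.CoreV1().Services(ns).Get(ctx, "front-loadbalancer", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Only clients from these CIDRs may reach the apiserver through the LB.
	svc.Spec.LoadBalancerSourceRanges = []string{"203.0.113.0/24", "10.0.0.0/8"}

	if _, err := client.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("allowlist applied to front-loadbalancer")
}
```

Depending on the provider, the ranges are enforced either by the cloud load balancer's firewall or by kube-proxy rules on the nodes.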
NodePort / Tunneling Expose Strategy
Since these expose strategies reuse the same LoadBalancer k8s service (`nodeport-proxy`) for multiple clusters in the same seed, the service's `.spec.loadBalancerSourceRanges` approach cannot be used here. We can instead do the filtering on the Envoy proxy (nodeport-proxy) running in the seed, or use k8s NetworkPolicies in the seed, but only if the cloud provider does not perform source NAT at the load-balancer level (which would hide the actual client IP behind the LoadBalancer IP); a sketch of the NetworkPolicy variant follows the provider examples below.
Examples of cloud-providers that do NOT perform SNAT and would be compatible with this approach are: AWS, Azure, GCP.
Examples of cloud-providers that always perform SNAT and would NOT be compatible with this approach are: OpenStack, Hetzner.
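For the seed-side NetworkPolicy alternative, a minimal sketch could look like this; the namespace, pod labels, and CIDR are assumptions for illustration, and the policy is only effective when the cloud load balancer preserves the client source IP (i.e., no SNAT):

```go
// Sketch: allow apiserver ingress only from the given CIDRs via a
// NetworkPolicy in the user cluster's control-plane namespace in the seed.
package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "apiserver-allowlist",
			Namespace: "cluster-abcd1234", // control-plane namespace (assumption)
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the apiserver pods; this label is an assumption.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "apiserver"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{
					{IPBlock: &networkingv1.IPBlock{CIDR: "203.0.113.0/24"}},
				},
			}},
		},
	}

	if _, err := client.NetworkingV1().NetworkPolicies(policy.Namespace).
		Create(context.Background(), policy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```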
A potential workaround for SNAT at the load-balancer level would be to use the PROXY protocol between the load balancer and nodeport-proxy. Unfortun, that approach would break apiserver access from within the seed cluster itself, because Envoy cannot serve PROXY-protocol and plain requests on the same listener, combined with the issue described in https://github.com/kubernetes/kubernetes/issues/66607. Until KEP 1860 (kube-proxy-IP-node-binding) is implemented, using the PROXY protocol is not feasible.
Summary
- It is possible to easily implement this feature for the LoadBalancer expose strategy.
- It is possible to implement this feature for the NodePort & Tunneling expose strategies as well (albeit with somewhat higher complexity), but it would not work for seeds running on some cloud providers (e.g. OpenStack, Hetzner).
- We will only provide this feature for the LoadBalancer expose strategy
- We will enable migrating existing clusters from their current expose strategy to another one, and rolling the change out to the worker nodes. The only thing this requires is downloading the new kubeconfig file after the migration.
- We will add an option to the dashboard to change the expose strategy of a single user cluster, so that this feature can be used for that cluster alone while the global expose strategy stays the same (even one that is not supported by this feature, such as the NodePort expose strategy)
- We recommend doing the latter only for new user clusters, and not changing existing user clusters' expose strategy
As discussed in and following up on https://github.com/kubermatic/kubermatic/discussions/10743, it might be a good choice to adapt the wording so that the conversation revolves around an allowlist instead.
Implemented