support hcloud firewall
/kind feature
**Describe the solution you'd like:**
I think it would be nice to add support for HCloud Firewalls. Inside the HetznerCluster you could specify the firewall(s) with a machine selector or with static/hardcoded IPs and ports. This can be useful if you run an hcloud cluster with public IPs and want to control access to node-port services, for example.
Example configuration inside the HetznerCluster:
```yaml
spec:
  hcloudFirewalls:
    - name: example-api-in-with-selector
      assignment:
        labelSelector:
          machine_type: control_plane
      rules:
        in:
          - description: allow all cluster nodes to access port 6443 on all control-plane nodes
            port: 6443
            fromMachineSelector:
              caph-cluster-example: owned
    - name: example-node-port-in-with-fixed-ip
      assignment:
        labelSelector:
          machine_type: worker
      rules:
        in:
          - description: allow two ip addresses to access node port services from worker nodes
            port: 30000-32767
            from:
              - 1.2.3.4/32
              - 5.6.7.8/32
```
**Anything else you would like to add:**
The above should only be seen as an example; I am open to discussing the implementation.
**Environment:**
- cluster-api-provider-hetzner version: v1.0.0-beta.26
- Kubernetes version (use `kubectl version`): v1.28.3
- OS (e.g. from `/etc/os-release`): Flatcar Container Linux by Kinvolk 3602.2.1 (Oklo)
It would be great if you could configure the hcloud firewall in a declarative way with a Kubernetes Custom Resource (CRD).
The question is: should this be implemented inside cluster-api-provider-hetzner?
My current gut feeling: yes, a CRD to manage hcloud firewalls would be great, but it should be a standalone project.
But maybe I did not understand some of your ideas. Maybe there is a need to implement this in caph.
@simonostendorf what do you think about this?
Yes, I was also thinking about a separate project, but my idea behind integrating it into the CAPI provider was that the reconcile loops that already exist could easily be reused to update the firewall.
I use a cluster with public IPv4 and IPv6 and a Hetzner firewall to block access to 6443 unless the source is the API LB or other nodes. Every time I do a node rotation, I have to update the firewall and add the IP address of the new HCloudMachine. So my thought was to integrate it here, so that the machine creation event and the machine IP can be reused easily.
I hope you understand what I am trying to say.
> I use a cluster with public IPv4 and IPv6 and a Hetzner firewall to block access to 6443 unless the source is the API LB or other nodes.

First I thought "Yes, that makes sense". But thinking about it again, I see it like this:
You want to block this:

```
case 1: evil-client --> node:6443
```

But what about this:

```
case 2: evil-client --> LB --> node:6443
```

With the firewall you will block case 1, but not case 2.
But maybe I am missing something.
What benefit would your firewall bring if accessing the port via the LB still works?
Alternative solution: you can help yourself with a DaemonSet which creates the desired firewall rule via a `curl` command. This will be executed on each new machine.
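A rough sketch of that idea, assuming a pre-existing firewall (hypothetical ID `123456`) and an API token in a hypothetical Secret named `hcloud`. Note that the Hetzner Cloud API's `set_rules` action replaces the entire rule set, so a real version would have to fetch the current rules and merge rather than overwrite:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hcloud-fw-register        # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: hcloud-fw-register
  template:
    metadata:
      labels:
        app: hcloud-fw-register
    spec:
      hostNetwork: true
      tolerations:
        - operator: Exists        # also run on control-plane nodes
      containers:
        - name: register
          image: curlimages/curl:8.5.0
          env:
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  name: hcloud            # assumed Secret holding the API token
                  key: token
            - name: FIREWALL_ID
              value: "123456"             # hypothetical firewall ID
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          command: ["/bin/sh", "-c"]
          args:
            - |
              # set_rules replaces ALL rules on the firewall; a real
              # implementation would merge this node's IP into the
              # existing rules instead of overwriting them.
              curl -sf -X POST \
                -H "Authorization: Bearer ${HCLOUD_TOKEN}" \
                -H "Content-Type: application/json" \
                -d "{\"rules\":[{\"direction\":\"in\",\"protocol\":\"tcp\",\"port\":\"6443\",\"source_ips\":[\"${NODE_IP}/32\"]}]}" \
                "https://api.hetzner.cloud/v1/firewalls/${FIREWALL_ID}/actions/set_rules"
              while true; do sleep 3600; done
```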
> What benefit would your firewall bring if accessing the port via the LB still works?

Blocking port 6443 for the API was just an example. You are right that blocking it does not lock out evil clients that come through the LB, but it lets me force all traffic through the API LB instead of letting it hit the nodes directly.
Another example would be node-port services (e.g. ingress) that I only want to be reachable via the ingress LB and not via the exposed node port.
> Alternative solution: you can help yourself with a DaemonSet which creates the desired firewall rule via a `curl` command. This will be executed on each new machine.

I think a custom controller that runs on the management cluster and reconciles whenever an HCloudMachine object changes would be a better fit.
@simonostendorf an alternative solution: Cilium Cluster Wide Network Policy
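For reference, a minimal sketch of such a policy, assuming Cilium's host firewall feature is enabled and that the control-plane nodes carry the usual `node-role.kubernetes.io/control-plane` label (an assumption about the cluster). Be aware that once a host policy selects a node, ingress that no rule allows is dropped, which is exactly how a cluster can lock itself out:

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: apiserver-from-cluster-only   # hypothetical name
spec:
  # Host policy: selects nodes, not pods (requires the host firewall).
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/control-plane: ""
  ingress:
    # Allow 6443 only from endpoints Cilium knows to be part of the
    # cluster; the API LB's addresses would need an extra fromCIDR rule.
    - fromEntities:
        - cluster
      toPorts:
        - ports:
            - port: "6443"
              protocol: TCP
```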
@simonostendorf we are interested in your use-case. Please write a message here, or reach out directly, if you have found a suitable solution.
You could use label selectors on the firewall to automatically apply it to every node in the cluster. CAPH adds a label `caph-cluster-$CLUSTER_NAME=owned` to every node it creates.
It's also possible to use either Flux with tf-controller (for Terraform) or Crossplane to set up the firewall from Kubernetes.
> @simonostendorf an alternative solution: Cilium Cluster Wide Network Policy

Thanks, the Cilium policies are a good starting point.
> @simonostendorf we are interested in your use-case. Please write a message here, or reach out directly, if you have found a suitable solution.

I need some time to evaluate whether this is a good solution for me, because I accidentally destroyed my test cluster by blocking traffic to 6443 from all sources. :D
Cilium Clusterwide Network Policy can do a lot :). However, I have not found a solution for etcd on the control-plane nodes:
when you roll the control-plane nodes, etcd needs to sync before Cilium knows that the new node is part of the cluster.
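One direction that might mitigate this, purely as an untested sketch: allow the etcd ports from the node network by CIDR instead of by entity, since a CIDR rule also matches a freshly rolled node that Cilium has not yet learned. The `10.0.0.0/16` node network below is an assumption:

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: etcd-from-node-cidr           # hypothetical name
spec:
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/control-plane: ""
  ingress:
    # CIDR-based rule: covers new control-plane nodes before Cilium
    # has discovered them as remote-node entities.
    - fromCIDR:
        - 10.0.0.0/16   # assumed private node network
      toPorts:
        - ports:
            - port: "2379"   # etcd client
              protocol: TCP
            - port: "2380"   # etcd peer
              protocol: TCP
```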
Sooner or later we will implement private IPs. I am closing this issue.
Feel free to create a new issue if you have concrete ideas for how to improve caph.
> Sooner or later we will implement private IPs. I am closing this issue.

@guettli Private IPs have nothing to do with my initial feature request, "support hcloud firewalls".
Cilium Clusterwide Network Policies are okay to use, but I don't think this is "closed as completed".
@simonostendorf yes, you are right. I have closed it as "not planned" now.