Support for annotation "ingress.global-static-ip-name"
Add support for the "ingress.global-static-ip-name" annotation, which can be used to derive the static IP for the ingress.
Reference: https://github.com/kelseyhightower/ingress-with-static-ip
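For reference, this is how the annotation is consumed by the GCE ingress controller in the linked guide; the name "web-static-ip" is a placeholder for an address reserved via gcloud:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
  annotations:
    # the name of a reserved global static IP, not the address itself
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
spec:
  backend:
    serviceName: web
    servicePort: 80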
This sounds related to GCP or GKE? You can use a static IP with HAProxy on bare metal using, e.g., podSpec.nodeSelector, or as a standalone deployment outside Kubernetes.
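For example, a pod template fragment along these lines (the ingress: "true" node label is an assumption; label the hosts that own the static IPs accordingly):

spec:
  # bind HAProxy directly to the node's network, so the node's
  # static IP is the entry point
  hostNetwork: true
  nodeSelector:
    ingress: "true"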
I have not yet done the whole setup myself. I need to move from my dev env to AWS/GCE/a third-party hosting solution very soon.
My assumption is that the ingress controller (with a platform-specific adapter) should automatically map the nodes on which the ingress (HAProxy) is deployed to the static IP on the underlying platform (AWS, GCE, etc.).
Otherwise, the user needs to manually dedicate machines to the load balancer and worry about the high availability of these individual machines on which HAProxy is running.
If the ingress controller can manage this, complete management on any environment becomes much simpler. If this can be done, we can skip using load balancers on GCE, as their customisation is quite limited.
If the above is too much work, would you mind sharing the details on how to map the static IP to the pods running HAProxy without compromising high availability?
Please let me know your views.
ingress controller (with platform specific adapter) should automatically map the nodes on which ingress (haproxy) is deployed
I didn't understand. Ingress (haproxy) is the ingress controller, whose load balancer (HAProxy) receives external requests and proxies them to the pods.
If using HAProxy Ingress, you'll end up with two or more hosts in order to have some sort of HA. Perhaps what you are looking for is the GCE Ingress controller, which configures the GCE load balancer to talk to your pods. There is no need to use HAProxy Ingress in that case.
On GCP, after having reserved a static external IP, you can set controller.service.loadBalancerIP to the value of the reserved static IP.
Make sure to reserve the 'correct' IP address (i.e. global vs. regional) depending on the underlying resources in order to make it work (see SO).
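A minimal values sketch, assuming a chart that exposes this key; 35.200.10.10 is a placeholder for your reserved address:

controller:
  service:
    type: LoadBalancer
    # must match the reserved static external IP (global vs. regional matters)
    loadBalancerIP: 35.200.10.10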
How would you control the public/static IP of the load balancer in order to set up DNS without this option? Maybe I'm missing something here, but the first ingress controller gets the first IP in the pool (.100, as an example) and the second ingress controller would get .101; if the controllers were spun up in reverse order, they would be allocated the other way around and any associated DNS entries would be wrong.
I know this item is tagged as an enhancement, but I'm curious if there is a different way to solve this problem?
How would you control the public/static IP of the load balancer in order to setup DNS without this option?
DaemonSet with a node selector, host network, and each node with a static public IP.
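A rough sketch of that pattern; the role: ingress label is an assumption, adjust to the hosts that own your public IPs:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: haproxy-ingress
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true      # expose HAProxy on each node's static public IP
      nodeSelector:
        role: ingress        # schedule only on the hosts that own those IPs
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        # controller args such as --configmap and --default-backend-service
        # omitted for brevity
        ports:
        - containerPort: 80
        - containerPort: 443

DNS would then resolve to each node's static IP.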
This is still a pending enhancement due to my lack of knowledge of ingress on cloud providers; we started some cloud-based clusters just a few weeks ago.
PS: There is also the --publish-service command-line option, which should help depending on your deployment.
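For instance, as a fragment of the controller container spec; the namespace/name below is an assumption, point it at the LoadBalancer service that fronts the controller, whose address is then published in the status of the ingress resources:

args:
- --publish-service=ingress-controller/haproxy-ingress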
If I were to take a step back and look at my non-k8s HAProxy setup: I have 2 VMs running HAProxy and use ucarp to manage a floating IP between the 2 machines, and this floating IP is used in all the DNS entries for all the various web applications.
We have an on-premises cluster and any LoadBalancer service is assigned an IP from the pool (first come, first served), so if different ingress controllers are created out of order, they are assigned different IPs (no longer matching what is defined in DNS). So if we turn up all production services the first time, everything lines up, but that does not seem like the correct way to do this.
I looked at --publish-service and tried a few config tests, but perhaps I'm not understanding the intent. I would assume the assigned LoadBalancer IP would be based on the DNS resolution of the FQDN defined in the service passed to publish-service? I don't think external-DNS is a solution for us, as we are trying to make use of our internal AD/DNS infrastructure.
I looked into DaemonSet, replacing type=Deployment with type=DaemonSet (and the HAProxy pod starts on each worker node), but how would using a node selector and a static IP per node work? Is this not the actual function of the LoadBalancer, to distribute inbound traffic to the service?
I feel I must be missing something, as everything else in terms of configuration has been ironed out except the easy part: what to set up in DNS while ensuring addressing does not change.
Please tell me I'm doing something wrong :)
Hi, most of our deployments are as ugly as this: a couple of hosts with public static IPs, a daemonset, host network, a node selector, and DNS resolving to every single static IP.
We haven't taken the time yet to:
- explore cloud options in order to provide better docs and perhaps smarter config options;
- design how to place some TCP proxies in front of the ingress in order to, again, provide better deployment docs and other config options which make this new topology viable.
- ... what more?
controller.service.loadBalancerIP appears to achieve what I'm after (found in a different GitHub repo, but it appears to work with this fork as well), so each time the HAProxy service(s) are recreated they are assigned the same load balancer IP (associated with internal DNS).
Sample service YAML:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: prod-haproxy
spec:
  selector:
    run: haproxy-ingress
  type: LoadBalancer
  loadBalancerIP: 10.10.10.100
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1024
    protocol: TCP
    targetPort: 1024
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: staging-haproxy
spec:
  selector:
    run: haproxy-ingress
  type: LoadBalancer
  loadBalancerIP: 10.10.10.200
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1024
    protocol: TCP
    targetPort: 1024