airbyte-platform
Add loadBalancerIP to service spec in APIs charts
What
Hi!
I'd like to add `loadBalancerIP` to the service spec to support use cases such as Internal Load Balancers on Google Kubernetes Engine with a static internal IP reserved in advance.
It helps with:
- Increasing security by not exposing the service with an external IP.
- Allowing more advanced automation in provisioning infrastructure.
How
Tested on GKE v1.27.8-gke.1067004 (Stable release channel) with the following values:
```yaml
service:
  type: LoadBalancer
  loadBalancerIP: 10.42.0.30
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
```
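For reference, a minimal sketch of how `loadBalancerIP` can be wired into a Service template so it is only rendered when set. The template path, helper names, value keys, ports, and selector labels below are illustrative placeholders, not the actual chart contents:

```yaml
# templates/service.yaml (sketch only; names, ports, and selectors are placeholders)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-airbyte-api-server-svc
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  {{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerIP }}
  # Only rendered when the service is a LoadBalancer and a static IP is requested
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
  {{- end }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/name: airbyte-api-server
```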
Can this PR be safely reverted / rolled back?
If unsure, leave it blank.
- [x] YES 💚
- [ ] NO ❌
🚨 User Impact 🚨
Nothing breaking for anyone; this only adds more use cases.
Quality Gate passed
Issues
0 New issues
0 Accepted issues
Measures
0 Security Hotspots
No data about Coverage
No data about Duplication
Hi, this is perfect. I also need this for my use case. Thanks!
@pdemagny For what it's worth, I believe this can already be achieved by adding the following annotation to the webapp.ingress.annotations object, but it really does need to be made much easier, since GKE (including Autopilot) is popular:
```yaml
kubernetes.io/ingress.global-static-ip-name: <your-reserved-ip-name>
```
But making this actually work on GKE requires quite a bit of fiddling. This is what I've found (mostly) works for a private GKE Autopilot cluster:
```yaml
webapp:
  service:
    annotations:
      cloud.google.com/neg: '{"ingress": true}'
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: gce
      kubernetes.io/ingress.global-static-ip-name: <reserved-ip-name>
      networking.gke.io/managed-certificates: <managed-cert-name(s)>
    hosts:
      - host: <your-hostname-matching-cert>
        paths:
          - path: /*
            pathType: ImplementationSpecific
```
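Since the `networking.gke.io/managed-certificates` annotation above references a certificate by name, here's a minimal sketch of the corresponding ManagedCertificate resource; the name and domain are placeholders and must match what you configure on the Ingress:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: <managed-cert-name>
spec:
  domains:
    # Must match the host configured on the Ingress above
    - <your-hostname-matching-cert>
```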
Explanations:
- `cloud.google.com/neg: '{"ingress": true}'` allows GKE to create an Ingress from a ClusterIP Service, and `kubernetes.io/ingress.class: gce` gets it to create an actual external HTTP(S) LB.
- Managed certs are self-explanatory, but you can also use the `ingress.gcp.kubernetes.io/pre-shared-cert` annotation if you want to use non-managed cert(s).
- I've also experimented with using `kubectl apply` to create `BackendConfig` and `FrontendConfig` resources, then referencing them in the annotations using `cloud.google.com/backend-config: '{"default": "<your-backend-config-name>"}'` (in `webapp.service.annotations`) and `networking.gke.io/v1beta1.FrontendConfig: <your-frontend-config-name>` (in `webapp.ingress.annotations`); see the sketch after this list. This lets you do things like adjust the LB timeout (which defaults to 30 seconds) and enable Cloud IAP or Cloud Armor.
- There's no good way right now to automatically provision things like the managed cert, reserved IP, or frontend/backend configs. Some of that maybe belongs in Terraform, but the GKE-specific resources would be nice to have within the deployment.
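As a rough sketch of that `BackendConfig` / `FrontendConfig` pair, the resource names and values below are placeholders, and the IAP/Cloud Armor and redirect settings are optional extras rather than required fields:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: <your-backend-config-name>
spec:
  # Raise the LB backend timeout from the 30-second default
  timeoutSec: 300
  # Optional: attach a Cloud Armor security policy
  securityPolicy:
    name: <cloud-armor-policy-name>
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: <your-frontend-config-name>
spec:
  # Optional: force HTTP -> HTTPS redirects at the load balancer
  redirectToHttps:
    enabled: true
```

These would then be applied with `kubectl apply` and referenced from `webapp.service.annotations` / `webapp.ingress.annotations` as described above.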
This can still be a little quirky at times, but it at least gets most of the objects provisioned (although sometimes I get two sets of backends for some reason). With IAP specifically, I often have to toggle it off and then on again, but I think that's due to improper validation of those settings (right now it says the OAuth key is required, but with Google-managed OAuth it shouldn't be). Some of that may also be because I'm behind a Shared VPC (though honestly, for a production deployment almost everyone should use Shared VPC + Cloud NAT + IAP from a security-posture standpoint).
Overall I feel that using named values via annotations is more appropriate in this case... Do you have a use case where you'd want to pass IP literals around that couldn't be covered with annotations (other than just ease of configuration)?
And again, this is REALLY hard to find correct info for, and much has changed over time. So I do think there needs to be a way to simplify configuration, especially for GKE.
So +1 for making this simpler than all that.
@pdemagny Apologies, I just realized that you're talking about the API charts, not webapp. I should really read more often!
With that said, it seems to make sense that whatever config happens here should be consistent with the settings for webapp; having completely different configs to expose them seems confusing. (I'm not sure one is better than the other, but it would be nice if they both exposed the same options and created consistent cluster config either way.)