cloud-provider-azure
How can I bind an ILB with a load-balancer through private link service?
What would you like to be added:
I am using the following annotations when I deploy my nginx ingress controller on my AKS Cluster:
```yaml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-pls-create: "true"
      service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
      service.beta.kubernetes.io/azure-pls-visibility: "*"
    externalTrafficPolicy: Local
```
After that, I create my own private endpoint that links to the private link service that was created, and that works perfectly fine.
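For reference, the private endpoint step can be scripted with the Azure CLI roughly as below. This is a minimal sketch: the resource group, VNet, and PLS names are illustrative placeholders (the actual PLS lands in the cluster's node resource group, with a name chosen by cloud-provider-azure unless overridden via `service.beta.kubernetes.io/azure-pls-name`):

```bash
# Look up the ID of the private link service that cloud-provider-azure created
# (it lives in the cluster's node resource group; all names are placeholders)
PLS_ID=$(az network private-link-service show \
  --resource-group MC_my-rg_my-cluster_westeurope \
  --name my-pls \
  --query id --output tsv)

# Create a private endpoint in the consumer VNet that connects to that PLS
az network private-endpoint create \
  --resource-group consumer-rg \
  --name pe-nginx-ingress \
  --vnet-name consumer-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id "$PLS_ID" \
  --connection-name nginx-ingress
```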
Now, I am creating two AKS clusters in the same Azure region for the sake of redundancy. Looking at the documentation on how to create a private link service, I see that it is possible to define so-called "Outbound settings", where a load balancer can be specified. Is there already a way to achieve that with your annotations, or in some other way? I was not able to find anything in your documentation related to this use case, but I might have overlooked it.
Also, unfortunately, once a private link service has been created, I can't seem to add outbound settings after the fact (e.g. from the Azure portal). It therefore looks like the outbound settings have to be defined at PLS creation time.
Why is this needed:
We want to deploy multiple AKS clusters within the same region for the sake of redundancy, all load-balanced by a private load balancer. Essentially, each AKS cluster produces its own internal load balancer, associated with a private link service, and we would like to load-balance across those internal load balancers.
When you create the PLS via annotations on the LoadBalancer service, it is automatically bound to the front-end IP of the LB service in question. You cannot use this support to create a PLS for a different IP that is not hosted by a Kubernetes service.
You can add other annotations, such as `service.beta.kubernetes.io/azure-pls-proxy-protocol: "true"`, to turn on features that are controlled in the private link service outbound settings, like the PROXY protocol.
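A quick way to see that front-end binding is to inspect the PLS once it exists; a small sketch with placeholder resource names:

```bash
# List the load-balancer frontend IP configuration(s) the auto-created PLS
# is bound to (resource group and PLS name are placeholders)
az network private-link-service show \
  --resource-group MC_my-rg_my-cluster_westeurope \
  --name my-pls \
  --query "loadBalancerFrontendIpConfigurations[].id" \
  --output tsv
```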
Maybe my original post wasn't clear. I am not trying to create a PLS for an IP that is not hosted in AKS. I am trying to set up two private AKS clusters in the same Azure region and load-balance them. The two clusters are created with the available annotations; the problem is that there does not seem to be a way to then load-balance across them. For example, I could have an API Management instance that routes requests to a private load balancer, which would in turn load-balance the two AKS clusters. That amounts to having a private load balancer in front of both ILBs, which sit in front of the workers of both clusters. That is not easy to do, because the private endpoint to the private load balancer needs to be in the same VNet as the private endpoints to the ILBs. Hence I asked whether the private endpoint of the ILB can be located in a different VNet from that of the AKS cluster; that is what currently is not supported (or seems to be unsupported).

Another option would be to deploy both AKS clusters into the same VNet, in which case the problem would be trivially solved, but I don't like that solution, as the networks of the two clusters would then not be well separated.
However, my problem can be solved by spreading my AKS nodes across availability zones, which is far easier. Given that availability zones exist, I am not sure my original problem is really a relevant use case; what I want to achieve is higher availability and higher reliability.
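For completeness, a minimal sketch of that approach (names are placeholders; availability zones are set when the node pool is created and, as far as I know, cannot be changed afterwards):

```bash
# Add a node pool spread across three availability zones
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name zonedpool \
  --node-count 3 \
  --zones 1 2 3
```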
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@zadigus you are correct, there's no way to put the private endpoints behind an ILB at this time. If you specifically want to put them behind APIM, though, they recently GA'd support for load-balanced backend pools: you can create your private endpoints to each AKS cluster and simply put the multiple PE IPs into a backend pool inside APIM itself.
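For anyone landing here later, the APIM side of that suggestion can be sketched with `az rest`. The `Pool` backend type, the member `id` format, and the API version below are assumptions based on the load-balanced pool feature described above, not a verified recipe; check the current `Microsoft.ApiManagement/service/backends` reference first:

```bash
# Placeholder ARM ID of the APIM instance
APIM_ID="/subscriptions/<subscription-id>/resourceGroups/apim-rg/providers/Microsoft.ApiManagement/service/my-apim"

# One backend per private-endpoint IP of each AKS cluster (IPs are placeholders)
az rest --method put \
  --url "https://management.azure.com${APIM_ID}/backends/aks-1?api-version=2023-09-01-preview" \
  --body '{"properties": {"protocol": "http", "url": "https://10.1.0.4"}}'
az rest --method put \
  --url "https://management.azure.com${APIM_ID}/backends/aks-2?api-version=2023-09-01-preview" \
  --body '{"properties": {"protocol": "http", "url": "https://10.2.0.4"}}'

# A pool backend that load-balances across the two (schema is an assumption)
az rest --method put \
  --url "https://management.azure.com${APIM_ID}/backends/aks-pool?api-version=2023-09-01-preview" \
  --body '{"properties": {"type": "Pool", "pool": {"services": [{"id": "/backends/aks-1"}, {"id": "/backends/aks-2"}]}}}'
```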
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:

> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.