aws-load-balancer-controller
Support using an existing NLB in Service resources
Is your feature request related to a problem?
Kubernetes Service resources of type LoadBalancer will create a new NLB with an instance or ip target type.
Now I have another service with a different port. I don't want to create a new NLB for it; instead, I want to reuse the existing NLB by adding a new listener and a new target group binding for this new service.
I have looked through the docs but could not find an answer. If I have missed something and the controller can already do this, please tell me how.
Describe the solution you'd like
Add an annotation like this:
service.beta.kubernetes.io/aws-load-balancer-nlb-arn: ${nlb_arn}
The controller would then create a new listener with a new target group, and a new target group binding for this service, on the existing NLB.
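For illustration, a Service using the proposed annotation might look like the sketch below. The nlb-arn annotation is the proposal and does not exist today; the other annotations are existing controller annotations, and the ARN is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-second-service
  annotations:
    # Existing annotations: let the LBC manage this Service as an NLB with IP targets.
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    # Proposed annotation (hypothetical): reuse this NLB instead of provisioning a new one.
    service.beta.kubernetes.io/aws-load-balancer-nlb-arn: "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/shared-nlb/0123456789abcdef"
spec:
  type: LoadBalancer
  selector:
    app: my-second-app
  ports:
    - port: 8443        # would become a new listener on the shared NLB
      targetPort: 8443
```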
Describe alternatives you've considered
@shiyuhang0 Hi, support for existing ALB/NLB is on our roadmap for the LBC. We're tracking it in https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/228 and https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2638
Hi @oliviassss! I see #2638 has been closed as "not planned", and #228 is only about ALB, with no word about NLB. Also, I can't see anything related to NLB in the 2.7 plans. Could you please clarify the situation around this issue? There are a few discussions / issues related to the ability to reuse an NLB, but no answers :)
Oof, I just ran into this. I have exactly the same use-case: I'm using API Gateway, and one of my integrations needs to target a private service in my EKS cluster. I don't want a pointless "me too" post so I'll elaborate on why this bugs me:
This feature is important for better IaC, because an NLB that gets auto-created by the AWS Load Balancer Controller has to be considered ephemeral. It is also unknown to Terraform, so generally I have to copy-paste values into my Terraform vars. Even if I fully deploy my Kubernetes services / ingresses first, it's painful to look up the NLB ARN in Terraform so that I can use it in the API Gateway - this requires two chained Terraform data resources:
- The first queries Kubernetes for the service and parses the external IP
- The second queries AWS for a load balancer that matches the external IP and returns its ARN
But this is brittle anyway, because the NLB can disappear and be replaced by a new one with a different ARN, breaking my API Gateway integrations. It also forces me to fully deploy my Kubernetes services first and then not touch them. Generally, we try to deploy all infrastructure first, and then CI/CD deploys services to Kubernetes. This forces us to break up our operations so that teams are dependent on each other, which causes delays: "You go first, then I'll do this bit while you twiddle your thumbs, and when I'm done I'll let you know you can continue..." etc.
If it were possible to get the LBC to manage targets on an existing NLB without destroying it, I could simply deploy the NLB along with API Gateway in a single terraform apply, and the service team could reference the NLB in FluxCD deployments.
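For what it's worth, the piece of this that already exists is TargetGroupBinding: if the NLB, listener, and target group are created in Terraform, a TargetGroupBinding can attach the Service's pods to that pre-created target group. A minimal sketch, with placeholder names and ARN:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-private-service-tgb
  namespace: default
spec:
  serviceRef:
    name: my-private-service   # the Service in front of the pods
    port: 8443
  targetType: ip
  # ARN of the target group created (and owned) by Terraform
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/my-private-tg/0123456789abcdef
```

The listener and target group still have to be created outside the controller, which is the part this issue asks to automate.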
Possible workaround? I need to experiment with this, but from other issues related to this one, I think the main problem is that the AWS Load Balancer Controller deletes the NLB if all target groups are removed. As a workaround, could we modify the IAM role for the LBC service account to deny it the right to delete the NLB? This would of course cause errors, but it should prevent the load balancer from getting deleted.
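A minimal sketch of that deny, assuming (purely for illustration) that the controller's IRSA role is managed with eksctl; if the role is created another way, the same Deny statement can be added wherever its policy is defined:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      # The controller's usual allow policy still has to be attached as well
      # (e.g. via attachPolicyARNs); this inline policy only adds the Deny.
      attachPolicy:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny   # an explicit Deny overrides any Allow on the role
            Action:
              - elasticloadbalancing:DeleteLoadBalancer
            Resource: "*"
```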
I see this was planned for 1.6.0. Is it now planned for 1.7.0?
Any plan for this one? Currently we are facing the same issue: our existing NLB is created by Terraform. If we want to reuse it, we need to add tags to this NLB, but when we try to destroy the VerneMQ service, the NLB is also force-deleted by aws-load-balancer-controller, even though we enabled NLB deletion protection.
FYI, we had some success attaching existing NLBs to services by making their AWS tags match what the controller expects, plus setting the name override when necessary. If you do the same tag setting for the target group, it also stays around. For zero downtime, prefill the target group with the new IPs before cutting over.
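A sketch of what that can look like on the Service side, assuming the pre-created NLB carries the tags copied from a load balancer the controller itself created (the exact tag keys and values vary by cluster and controller version, so verify them against a live example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    # Name override so the controller resolves the pre-created NLB instead of generating its own name.
    service.beta.kubernetes.io/aws-load-balancer-name: "shared-nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```

On the AWS side, the pre-created NLB (and target group) need the tags the controller would have applied itself, e.g. elbv2.k8s.aws/cluster plus the service.k8s.aws/* stack/resource tags; copy them verbatim from a controller-created load balancer rather than guessing.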
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten